If you know me well, you know that I am not keen on companies like Amazon tracking and recording every minute of our lives (and now adding facial recognition software to that arsenal), all while wanting to become our bank, our doctor, our police, and our insurance company, on top of selling us food, home security, and anything else we can think of.
I am even more opposed to the lack of ethics displayed in the ruthless operation of Facebook (read this for details) and I am not too happy with Google and YouTube either.
Yesterday, however, while attending a webinar on the use of AI in medical practice, I had to admit that if you were able to track everyone's whereabouts, you could much more quickly identify individuals at risk of having encountered the Covid-19 virus.
Covid-19 was also mentioned at the end of the webinar. (If you want to know more, you’ll need to get in touch with Jon Braun at Children’s Hospital Boston.)
This would require very rigorous legislation, to avoid stigmatization, for example. Don't balk instantly: yes, there are obvious downsides to the loss of privacy, but there are upsides too. The crux of the privacy problem is that those doing the tracking and using our data must be 100% transparent about it.
Openness (that is, the loss of privacy) also protects against abuse, but only if it is 100% and not one-sided.
As for the objection that AI cannot replace empathy: it can at least make up for the lack of empathy (stigmatization and ridicule) shown by certain health care professionals, and it will not molest or abuse you the way a handful of medical professionals have done. (There is a current case in the UK of a doctor who not only molested some patients, but also told some they had cancer when they didn't and amputated their breasts, and in other cases deliberately left tissue behind that led to a recurrence of breast cancer.)