We don’t trust what we don’t understand
Artificial intelligence is a complex concept and is still often misunderstood. While various materials (magazines, books, videos, online courses) offer great educational value on what AI is, how it works, and what limitations it has, many popular media outlets and movies still use the “scary robot” theme to make people worry about nearly every aspect of their lives: losing our jobs, having to let machines decide about important matters (getting a mortgage, how a disease is treated, qualification for medical procedures, sentences in legal cases), or, in a sci-fi scenario, being ruled by superintelligent machines.
Looking at the general understanding of AI as a technology, it’s clear that education is crucial to help people gain more knowledge of the subject and let go of abstract, futuristic fears. That doesn’t mean the fears will simply disappear, and it doesn’t mean AI can’t be used for bad purposes. However, it’s a technology, and as such it’s a tool: it’s not conscious, it’s not evil, and it has no opinions or intentions.
There is another important element in understanding AI: the black box problem. In many cases, it’s enough that we give the model some data, ask for results, and get them. Voilà. We don’t need to follow the decision-making process inside the model; it’s enough to know that the decisions are based on the data, and we can later verify how accurate the model’s results are. However, there are cases when we can’t just trust the model.
Imagine you go to see a doctor. The doctor takes down your symptoms, examines you, and runs your medical results through an AI system. It’s just a cold: that’s what the doctor says, and the AI confirms it. So far, so good. But the doctor also uses AI to suggest treatment methods, and the AI wants you hospitalized and put on antibiotics. Huh. You don’t feel that bad, it’s just a cold, and you really don’t want to go to the hospital and hang out with people who are much more seriously ill. No need to catch another disease. And antibiotics, really? It’s just a sore throat and a runny nose!
Artificial intelligence solutions have proven to be an effective diagnostic aid, but unfortunately, they can’t handle everything equally well. In many cases, AI’s decisions on how to treat a patient will be correct, but when it’s wrong, it can harm the patient’s well-being. Think of bad treatment decisions for cancer, epilepsy, or other serious diseases. Or judicial decisions: who goes to jail, who can be released on bail, how severe the sentence will be. Or even banking: will you get a loan? Whatever the decision, you should be able to know WHY. When there’s no explanation of why a certain decision was made, and it doesn’t align with what we think is right, distrust builds up. However, there are approaches that aim to increase model transparency, such as LIME (Local Interpretable Model-agnostic Explanations), which I described in the article 12 challenges of AI adoption.
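To make this concrete, here is a minimal sketch of asking LIME “why” for a single prediction, using the open-source `lime` Python package. The random forest model and the scikit-learn dataset are illustrative stand-ins, not something from a real clinical system:

```python
# A minimal sketch: explain one prediction of a tabular classifier with LIME.
# Assumes scikit-learn and the `lime` package are installed.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# The explainer needs the training data to learn feature statistics
# for perturbing inputs around the instance being explained.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain why the model classified one particular record the way it did.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)

# Each entry is a human-readable feature condition and its weight:
# positive weights pushed the prediction toward the class, negative away.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output lists the handful of features that pushed this particular prediction one way or the other, which is exactly the kind of “why” a patient, a judge, or a loan applicant could be given.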