Potential problems in AI adoption
Though artificial intelligence can benefit almost any company, there are also some issues you should consider. Here are some of the potential problems that should be addressed.
Data quality and quantity
A model is only as good as the data that feeds it. Artificial intelligence works best when given large amounts of high-quality data. AI systems learn from available information much as people do, but to pick up patterns, recognize features, or grasp concepts, they need far more examples than a human would.
What’s more, the data has to be the right data. If you only use publicly available information, chances are your competitors are using the same information, so it won’t give you an advantage. In some industries, sufficient data may simply not exist. And if you do have the data to train a model, you really have to make sure it’s high-quality: the data sets must be representative and balanced, or the system will inherit whatever bias they carry, as the sketch below illustrates.
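One simple, concrete check for the balance problem is to look at how classes are distributed in a labeled training set before any model sees it. Here is a minimal sketch in Python using pandas; the file name, the `label` column, and the 10% threshold are all hypothetical placeholders, not a prescription:

```python
# Sanity-check class balance in a labeled dataset before training.
# "training_data.csv" and the "label" column are illustrative placeholders.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical dataset
counts = df["label"].value_counts(normalize=True)
print(counts)

# Flag any class making up less than 10% of the data (an arbitrary
# threshold chosen for illustration); heavily skewed classes are a
# common source of learned bias.
underrepresented = counts[counts < 0.10]
if not underrepresented.empty:
    print("Warning: underrepresented classes:", list(underrepresented.index))
```

A check like this won’t make a data set representative on its own, but it surfaces skew early, before it quietly becomes the model’s behavior.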
Legal issues
The legal system struggles to keep up with modern technology, and with the appropriate laws not yet in place, artificial intelligence can be difficult to manage. One of the most common concerns is liability when an AI system causes damage. Who’s responsible then? The system itself made the faulty decision, but a human has to take responsibility for it. If an autonomous vehicle causes an accident, whom do we blame? The company that commissioned it? The developer who programmed it? There are no clear answers yet.
While AI-caused accidents may not worry business owners who don’t deploy AI to an extent that allows such incidents, there’s a more down-to-earth matter: the GDPR. Before the GDPR, tech giants such as Google and Facebook collected enormous amounts of data every day. That’s great for AI systems, which thrive on a constant stream of up-to-date information. But under the GDPR, data is no longer treated as just a big sack of otherwise useless pieces; it has become a commodity that must be handled with care. Google and Facebook had to alter their data collection methods, and since AI systems won’t adapt on their own, changing how data is collected required development work.
Data collection in the era of the GDPR is one thing, but there’s another twist. Under a strict reading of the GDPR, users are allowed to demand an explanation of how their data is processed, so they may ask Netflix why they were given a particular recommendation. Which leads to the next point:
No explanation behind the decisions
Many models are black boxes: they deliver predictions without giving you insight into the processing, so you know the decision but not how it was made. Why did the system decide this? Well, it analyzed the provided data and arrived at this conclusion, but you won’t learn any details. And if Netflix has to explain, in detail, how it arrived at a particular film recommendation, that may spell trouble. There are ways to trace the process, but even then: companies have spent enormous amounts of money and time developing their secret recommendation systems, and now they would have to hand over the details.
Right now it’s difficult to understand how decisions are made in multi-layer neural networks; it isn’t linear math, so justifying the predictions can be equally difficult. However, there are approaches that aim to increase model transparency, such as LIME (local interpretable model-agnostic explanations). When a system provides a prediction to support a decision, an explanation is sometimes simply necessary. Say a doctor is given a prediction of what a patient is suffering from. The doctor cannot simply rely on a prediction that says “flu”. Why flu? What factors led to this diagnosis? If users are given the rationale behind a decision, it is easier for them to judge when to trust the model.
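To make this concrete, here is a minimal sketch of LIME in practice, using the open-source `lime` Python package together with scikit-learn. The data set and classifier are illustrative stand-ins (the built-in iris data and a random forest), not any real diagnostic or recommendation system:

```python
# A minimal sketch of explaining one prediction with LIME.
# Assumes scikit-learn and the `lime` package are installed; the model
# and data are illustrative stand-ins for a real black-box system.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Ask LIME why the model classified one particular sample the way it did.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)

# Each line pairs a feature condition with its weight in this decision.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output is a short list of feature conditions and weights, i.e., a local, human-readable account of which inputs pushed this one prediction in which direction; that is exactly the kind of rationale the doctor in the example above would need before trusting a “flu” label.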