In this comprehensive guide, you will learn what artificial intelligence is, how it differs from machine learning, why sci-fi movies are not to be trusted when it comes to AI, and how to make sure AI adoption in your company starts right and follows through.
Artificial intelligence is a technology that allows computers to learn and draw conclusions. By recognizing patterns in data, AI can help us solve various business problems, boosting efficiency, improving operations, and, ultimately, increasing a company’s revenue.
Though popular culture likes to touch upon the topic of artificial general intelligence (AGI), the scenario of human-like machines living right next to us is still sci-fi. We do, however, see AI-powered solutions that outperform people at specific tasks – that’s narrow AI. Today, artificial intelligence augments the human workforce, and joining forces is what yields the greatest efficiency.
The terms “artificial intelligence” and “machine learning” are often used interchangeably, but in reality, they’re not the same thing. So what is the difference between AI and ML? Let’s start with what AI is. There are two fundamental groups within this field: applied AI, which targets specific tasks, and generalized AI, which aims at human-level versatility.
Machine learning (ML) is a way of achieving AI: all the techniques and processes that bring machines closer to ‘understanding’ human cognition and behavior are broadly categorized under this name. ML is what allows computers to learn: provided with relevant data, they can analyze it and draw conclusions.
Within machine learning, there are two main methods used: supervised learning and unsupervised learning.
Supervised learning, as the name suggests, requires supervision over the process. It is like having a teacher that trains the algorithm to do something. In supervised learning, we use labeled data to train the model – which means that the data already contains the answer to the question.
Using a very simple example: we’re trying to teach the model to tell a plum apart from an orange. We’ll show the model that an item that is round, bigger, orange, and has thick, pitted peel is an orange, while a more elliptical, purple or reddish, smaller, and smooth-peeled item is a plum. When we’ve taught the model what’s what, we test it with non-labeled data and let it classify fruit on its own.
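The plum-vs-orange example above can be sketched in a few lines of Python. This is a toy 1-nearest-neighbor classifier, not a production approach, and the feature values (diameter, peel roughness, redness) are invented purely for illustration:

```python
# Toy supervised learning: classify fruit as "orange" or "plum" from
# hand-labeled features: [diameter_cm, peel_roughness (0=smooth, 1=pitted),
# redness (0..1)]. All numbers are invented for illustration.
from math import dist

# Labeled training data: each example already carries the answer.
train = [
    ([8.0, 1.0, 0.1], "orange"),
    ([7.5, 0.9, 0.2], "orange"),
    ([4.0, 0.1, 0.8], "plum"),
    ([3.5, 0.0, 0.9], "plum"),
]

def classify(features):
    """1-nearest-neighbor: copy the label of the closest labeled example."""
    _, label = min(train, key=lambda ex: dist(ex[0], features))
    return label

# An unlabeled fruit: small, smooth-peeled, reddish -> classified as a plum.
print(classify([3.8, 0.1, 0.85]))  # plum
```

The "teaching" here is simply storing labeled examples; the test step is handing the model a fruit it hasn’t seen and letting it decide on its own.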
Unsupervised learning, in turn, means that the model is trained with the use of data that is neither classified nor labeled. In such a scenario, we allow the model to process the information without any guidance, so the model has to discover the characteristics of items.
Going back to plums and oranges, in unsupervised learning, the model would not be able to classify something as “a plum” or “an orange” because it doesn’t know what plums and oranges are. However, it will be able to identify similarities in items and group them accordingly, e.g. based on color.
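The grouping-by-similarity idea can be sketched with a minimal one-dimensional k-means, clustering fruit by a single "redness" value. The values are invented for illustration, and real clustering would use a library and more features:

```python
# Toy unsupervised learning: the model receives unlabeled values and groups
# similar ones itself. Minimal 1-D k-means; numbers invented for illustration.
def kmeans_1d(values, k=2, iters=10):
    centroids = [min(values), max(values)]  # crude initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each point to its nearest centroid
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

redness = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]  # mixed oranges and plums
print(kmeans_1d(redness))  # [[0.1, 0.15, 0.2], [0.8, 0.85, 0.9]]
```

Note that the output is two anonymous groups: the model never produces the words "orange" or "plum", because no labels were ever provided.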
Both these methods have their pros and cons and can be used for different purposes, like regression and classification (supervised learning) or clustering (unsupervised learning). In short, supervised learning utilizes labeled data, is the simpler method, and tends to be more accurate, while unsupervised learning is more computationally complex and can be less accurate.
Just like ML is a part of AI, deep learning is a subcategory of machine learning. Deep learning was inspired by the structure and functioning of the human brain and relies on artificial neural networks (ANNs) that consist of many layers. Deep learning is designed to make sense of voluminous data sets, but it ‘learns’ in a different way: the model can learn without supervision from data that is unstructured or unlabeled.
Deep learning is commonly used in detecting objects, recognizing speech, translating languages, and decision-making.
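The "many layers" idea can be illustrated with a bare-bones forward pass: each layer multiplies its input by a weight matrix, adds a bias, and applies a nonlinearity, and layers are stacked one after another. The weights below are arbitrary illustrative numbers, not a trained model:

```python
# Minimal sketch of a layered (deep) network's forward pass.
# Weights and inputs are arbitrary numbers for illustration only.
def relu(x):
    return [max(0.0, v) for v in x]

def layer(weights, biases, x):
    """One dense layer: output_j = sum_i weights[j][i] * x[i] + biases[j]."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def forward(x):
    h1 = relu(layer([[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1], x))   # hidden layer 1
    h2 = relu(layer([[1.0, -0.5], [0.3, 0.3]], [0.0, 0.0], h1))  # hidden layer 2
    return layer([[0.7, 0.7]], [0.0], h2)                        # output layer

print(forward([1.0, 2.0]))
```

Real deep learning adds the training part (backpropagation over millions of examples) and uses frameworks rather than hand-written loops, but the stacked-layer structure is the same.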
PwC Global estimates that by 2030, the potential contribution to the economy from AI will be 15.7 trillion dollars and the global GDP could be up to 14% higher as a result of AI. Artificial intelligence is undoubtedly full of potential for many industries and has gained much popularity thanks to driving tangible business results, including streamlining processes, improving customer experience, and boosting sales. Artificial intelligence has a variety of applications – and just as many benefits. Some include:
Data is called “the new oil” since it’s an essential element of the digital economy. However, even big data brings no value if it’s not used right. Artificial intelligence can help businesses mine their data, processing billions of data points in an instant. AI can provide accurate predictions about future outcomes based on historical data – it converts information into knowledge.
AI-driven predictions are useful in the retail industry to battle customer churn or adjust pricing. In banking, AI is used to predict currency and stock price fluctuations. In healthcare, it’s used to predict, for example, hypoglycemic events or outbreaks of infection.
Artificial intelligence can be at the heart of a product, but in many cases, it adds intelligence to existing products. Voice search, chatbots, and product recommendations are just a few examples of AI that consumers use regularly. These solutions are implemented to improve products, e.g. online shops, e-learning platforms, or online banking.
AI can achieve amazing results in terms of accuracy. Models are not as prone to errors as humans are and are clearly better at handling big data. People don’t have to spend endless hours scrolling through Excel cells to find information and draw conclusions – AI can do that efficiently and accurately, while people can use the produced insights to improve their work. Of course, AI is not error-free, but accuracy at the level of 99% is well within reach for many tasks.
Artificial intelligence can be used to generate more revenue, but also to save money. When processes are optimized and sped up, the time that’s saved translates into money savings. AI can also be used in places where money leaks away from your company – like customer churn. Models can identify customers on the verge of leaving so your retention team can be proactive and prevent them from doing so.
Reducing churn is important because it’s an efficient way of boosting revenue. It is 5 times cheaper to keep a customer than it is to acquire a new one. What’s more, the probability of upselling to an existing customer is 65%, while the chances of selling to a new prospect are only 13%.
A study conducted by Zendesk shows that 42% of B2C customers purchase more after a good customer service experience, while bad customer service interactions result in 52% of customers not buying. That’s a clear reason to make sure your CX is not just good but great.
This can be achieved by many means, including 24/7 availability (e.g. with conversational AI) or providing more personalization. 59% of shoppers who experienced personalization think that it has a big influence on their purchase decisions. In the past, personalization was a luxury reserved for those who could afford shopping in high-end boutiques, but today, with recommender systems, it’s something we’ve grown used to. And the payoff is clear: for example, Amazon saw a 29% increase in sales after implementing recommendations.
What’s the value that AI can bring to given industries? McKinsey estimates that the potential total annual value of AI and analytics across industries is around 9.5 to 15.4 trillion dollars. The value that you can drive from AI solutions depends on various factors, including the industry, use case, or digital maturity of the organization.
However, it’s clear that the value is there. Whether you use a recommendation engine, predictive analytics, dynamic pricing, scoring, natural language processing, or other machine learning models, it should always only be a means to achieve business objectives. Then it can bring real value to your company. See example use cases of AI in these industries:
Artificial intelligence adoption is a process that requires a strategic approach. Even though AI promises various benefits, some sources state that 80% of AI implementations fail to deliver. AI projects are about research and development, so they can be risky. Failing fast is a part of R&D, but a more serious problem arises when a project takes months to develop and consumes a significant amount of money. And that can happen, too.
To make your AI adoption successful, you should start with small steps to validate the idea, and only move on when AI has proven to be of use. So how do you start?
Artificial intelligence development doesn’t start with coding. The first step revolves around your business: the needs, pains, requirements, and goals. At the beginning, it’s important to identify what process (or processes) can be improved with a machine learning model and to define the long-term goal along with appropriate metrics.
With the business background discussed and goals established, it’s time to select models that will solve your business problems and help you achieve your objectives. At this point, you don’t have to decide what exact model you want to develop – rather list all possible solutions to be able to analyze and test them.
The model can only be as good as the data it’s fed with. Or in other words: garbage in, garbage out, so it really matters what your model is given. All artificial intelligence solutions rely on data, usually the more, the better. You should know what data you collect and what data you need for the use case. Ideally, you should be able to use the data you have to achieve your objectives, but in some cases, there might be a need for your organization to collect data or obtain it e.g. from third-party providers.
Consider the skills that will be required to deliver your project. Do you have all you need in-house? Do you want to train your staff? Are you planning to hire an external data science team? Both hiring in-house and outsourcing have their pros and cons, so think about what’s best for your business.
What’s more, you should also keep in mind the staff that will be working with the solution once deployed – they might need training or support to become more data-driven.
When you’ve considered the objectives, KPIs, business use case, data, and skill set, it’s time to map out the activities and start working on your AI. Start small – test the models to verify that they can work, and see which one(s) will be best. When you implement AI step by step, you mitigate the risk of failure and potential negative impact on your business.
These small steps are just the beginning of the way, and they help organizations begin strategically to avoid common issues related to AI adoption. One significant factor that helps companies implement AI smarter is a data strategy.
A data strategy ensures that the data collected by the company is actually managed like an asset. So what does a data strategy include?
Each data strategy may be different and consist of various elements, adjusted to the organization’s needs. Common elements include business case, objectives and quick wins, data requirements, skills and know-how, core activities, KPIs and metrics, and data-driven culture. These are all the things you should plan to make sure that you’re well-prepared for the adoption.
All the elements allow you to be in control of the project and make sure that everyone involved in the project is aware of the goals, activities, and expected results.
Research shows that there are various reasons why AI projects fail, including a lack of proper skills, limited understanding of the technology within the company, budget limitations, and so on. Clearly, the implementation of artificial intelligence can be a little tricky, but luckily many of the common challenges can be avoided. If you are aware of the issues that can occur along the way, you can prevent them from happening or deal with them more efficiently even if they happen.
You can even fail with AI before you start. This happens when companies jump in before having all the necessary resources — the data, the budget, the team, and the strategy. Without these elements, it’s only wishful thinking.
Starting without that strategy is difficult and risky. The first AI project should not be a company-wide AI implementation but a proof of concept that gets the entire organization accustomed to the new normal.
With time, both AI and your company will grow: your systems will be getting better and better, and your team will be more data-driven and efficient. It can be a win for all, if only you do it step by step and not lose sight of your objectives. However, even if you’re off to a good start, you can come across some challenges.
A company culture that doesn’t recognize the need for AI and difficulties in identifying business use cases are among the top barriers to AI implementation, according to O’Reilly. Identifying AI business cases requires managers to have a deep understanding of AI technologies, their possibilities, and their limitations. The lack of AI know-how may hinder adoption in many organizations.
“It’s not the best algorithm that wins, it’s who has the most data” – so they say. That’s true: just a “good” algorithm is never enough. Success depends on a number of factors, and even if you select an awesome algorithm, it may not be the right one. You need to define the problem first and consider the required data and features. Again: AI doesn’t start with development, but with a thorough analysis of your organization’s goals, pain points, and current state.
As mentioned above, the quality of the system relies heavily on the data that’s fed into it. AI systems learn from available information, but they need quite big data sets. How big is an open question – that really depends on the individual use case. Generally: the more, the better.
To start an AI project, you need to know what data you already have and compare that to what data the model requires. When you know what you already have, you’ll see if anything is missing. The missing part is data that would be beneficial, but which you don’t have at the moment. You can obtain data from third parties or use publicly available information if it’s relevant. There are various ways to extend your data set if necessary.
AI implementation requires the management to have a deeper understanding of current AI technologies, their possibilities, and their limitations. Unfortunately, we’re surrounded by a plethora of myths concerning artificial intelligence, ranging from mundane matters, like the supposed need to hire an in-house data science team, to sci-fi fantasies about smart robots ending humanity. The lack of AI know-how hinders AI adoption in many fields.
Your staff will have to learn how to work with AI solutions and how to use the insights in their everyday work. You need to make sure it becomes their habit to make data-driven decisions. Data-driven organizations have processes that enable employees to acquire the information they need, but they also have clear rules on data access and governance.
What is a model without metrics? You can’t even build one if you don’t know how to measure its performance. And accuracy is not the only thing that matters.
Think about it this way: suppose you want a model to predict terrorist events at an airport. Out of all the visitors at the airport, 99.99% will not be terrorists. Trained on such a data set, the model may learn to label every visitor as a non-terrorist and still keep its accuracy at 99.99%. But is that what we really expect the model to do? Whatever the business case and whatever the model, the metrics should be selected individually to fit the organization’s needs.
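The accuracy trap is easy to demonstrate numerically. In this sketch, the visitor counts are invented for illustration; the point is that a second metric, recall, exposes what accuracy hides:

```python
# The accuracy trap on imbalanced data: a "model" that labels every airport
# visitor as a non-threat scores 99.99% accuracy yet catches nothing.
# Counts are invented for illustration.
labels = [1] * 2 + [0] * 19_998   # 2 true positives among 20,000 visitors
preds = [0] * 20_000              # lazy model: always predict "negative"

tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
accuracy = sum(1 for y, p in zip(labels, preds) if y == p) / len(labels)
recall = tp / (tp + fn)           # share of real positives actually found

print(f"accuracy = {accuracy:.4%}")  # looks great: 99.9900%
print(f"recall   = {recall:.0%}")    # reveals the failure: 0%
```

For a use case like this, recall (or precision, or a cost-weighted metric) matters far more than raw accuracy, which is exactly why metrics must be chosen per business case.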
Research shows that companies focusing on human–machine collaboration create outcomes that are two to more than six times better than those of organizations relying on machines or humans alone. BMW found that teams made up of robots and people were about 85% more productive than the old assembly line. This shows that even though machines outperform people at certain tasks, their main goal is not to replace the human workforce but to augment it. In this new era of disruptive innovation, it is important to find the balance between work as we know it and the new tech – to help the people who create your organization collaborate with technology and boost their everyday work.