After several months of big announcements, exciting launches, and speculation about what comes next in generative AI, there's little doubt left that this technology has the potential to transform entire industries, and many business areas within them.
From content creation to process optimization, its capabilities are vast and promising. The tech world is brimming with inspiring Gen AI projects, and AI innovators are working their tails off developing new generative AI models or advancing existing ones.
However, as with any new tech, the key lies in understanding how to harness its potential effectively. And in this particular case, identifying the right use case plays a huge role.
That’s why, in this article, we delve into the insights of seasoned experts who share their experiences and shed some light on this matter.
So, if you’re interested in or preparing for Gen AI adoption, grab yourself a cup of coffee and read on!
Identifying the right use case for generative AI adoption
How would you identify potential use cases for a generative AI project?
Alain Bindels, Technology and Innovation Leader at Roche
Alain’s interest in generative AI focuses on its potential to revolutionize the healthcare industry — from drug discovery to personalized medicine. He also advises businesses on utilizing Gen AI potential for generating novel solutions, creating more efficient business models, and speeding up the process of creating new products.
I would look for a use case with the biggest impact on our customers and with low barriers and complexity, e.g., one that lets us save time on preparing customer-facing materials. Those materials would still be reviewed by experts, but Gen AI would reduce the time needed to prepare them. That means clearly defining the scope and the benefits that need to be obtained to make it a useful case study for both the client and the provider of generative AI, while reducing the exposure risk as much as possible, so that even in the worst case, an unsuccessful outcome causes no negative impact.
Ranjan Roy, VP of Strategy at Adore Me
After successfully integrating AI-driven solutions into Adore Me — which let the company save up to 30 hours a week per writer on copywriting tasks while improving SEO performance by 40% — Ranjan advises other companies on how to revolutionize their content marketing operations with generative AI.
We always start by asking, "Is the work repetitive, predictable, and routine?" It's not worth tackling unpredictable or difficult-to-train challenges at this stage.
We have avoided incorporating ROI too heavily into discussions of what we pursue, as the outcomes remain too unpredictable. Instead, we focus on the areas where we see long-term potential and on whether those intersect with genuine interest within the team. That's where we start.
Matt Kurleto, founder of Neoteric
Matt has been experimenting with AI since 2008, leading commercial implementations since 2017, with experience across different industries such as telecom, healthcare, and education. Founder of Neoteric and 3 VC-backed startups, he is also responsible for the TechSeed strategy and acceleration program.
There are two approaches to selecting GenAI projects.
The first is top-down. We map out everything from the strategy and its measurable goals down to the processes and tasks that directly or indirectly affect them. Based on that, we consider how generative AI can improve selected processes or tasks.
Suppose you are a pharmaceutical company whose strategy revolves around operational excellence. In that case, you may consider using generative AI to improve the efficiency of creating clinical trial protocols or finding new molecules worth looking into.
If you are a financial company aiming at acquiring more customers, you can use generative AI to create more personalized content to attract customers.
The other approach is bottom-up. We first ask ourselves what tasks or processes around generating content we would like to delegate. Prompting GPT models is, in essence, delegating a task. To identify the right use cases, we can ask ourselves questions like “What tasks could I delegate to AI?”, “Which of my team’s tasks can be performed by Gen AI?”, “Which tasks are taking a lot of time?”, “Which tasks are repetitive?”.
Suppose you work at a pharmaceutical company, and your team spends months generating leaflets and medical information for patients or doctors. In such a case, you might want to use generative AI to shorten the process.
If you are working at a bank and it takes hours for your customers to get on a support line and understand what the specific regulations in your contract are, you might want to create a Q&A bot with generative AI to shorten the response time.
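A Q&A bot like the one described above boils down to retrieving the relevant contract clause and letting a model phrase the answer. The retrieval step can be sketched as below; this is a minimal illustration with hypothetical clauses, and simple word overlap stands in for the embedding search a real system would use.

```python
# Minimal sketch of the retrieval step behind a contract Q&A bot.
# A production system would use embeddings and an LLM; here we score
# clauses by word overlap with the question to pick the relevant one.

CLAUSES = {
    "early repayment": "You may repay the loan early; a 1% fee applies.",
    "interest rate changes": "The variable rate may change with 30 days' notice.",
    "account closure": "Accounts can be closed free of charge at any time.",
}

def find_clause(question: str) -> str:
    """Return the clause whose title shares the most words with the question."""
    q_words = set(question.lower().split())
    best = max(CLAUSES, key=lambda title: len(q_words & set(title.split())))
    return CLAUSES[best]

answer = find_clause("Can I make an early repayment on my loan?")
```

In practice, the matched clause would then be passed to a language model as context so the bot can phrase an answer, and the overlap scoring would be replaced with vector similarity over the full contract text.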
What role do domain experts and stakeholders play in identifying the right use case for a generative AI project?
A very important one — from a stakeholder management and buy-in point of view. As part of good change management practices, involving impacted and concerned parties early on is critical — to avoid unpleasant surprises or leaving someone out. The experts can identify essential pain points and opportunity areas where generative AI may be worth applying.
As in any other project, it's important that experts educate other stakeholders on how to evaluate the feasibility of projects. We've found that even non-experts can very quickly understand the basics of "what quality outcome I can expect using generative AI."
The most impactful and valuable innovations happen when different domains cross paths. For that to happen — meaning, in this case, to choose the right use case wisely — we need input from various stakeholders.
Although the most accessible use cases, like using generative AI to create marketing content or improve customer service, are being widely discussed, the most promising applications are within R&D in industries like healthcare, pharma, or finance.
To build an effective solution in one of those spaces, you need deep domain knowledge and experience.
Generative AI use case assessment
Assuming we've identified a few potential use cases for a generative AI project, how do we assess their feasibility?
Assess the potential impact, the requirements needed to realize it, and the time needed to implement it. Assess the risk and the readiness of the organization, people, and stakeholders, as well as the management buy-in. Create a checklist for feasibility that can be easily assessed to ensure successful outcomes of the case study.
At this stage, I strongly believe traditional risk-reward calculations should not drive where you choose to invest. So much of this space is still about learning that I'd encourage organizations to focus on where they see the most long-term potential, beyond any individual project.
When assessing the feasibility of a potential use case, it’s worth taking into account the following aspects.
First, where is the market moving, and what will it look like when we fully implement the project? It’s hard to predict the future, but with the current pace of change, it’s important to consider it — so we don’t end up with something useless because of technological advancements or market changes.
Second: the data. What data do you have that is proprietary? What data do you need from outside? How will you feed the models with the data?
Third: compliance. In this context, we have to consider regulatory requirements, internal policies, security, data privacy, biases, hallucinations, and intellectual property. It's often the most challenging part of the assessment process, as there are many unknowns when it comes to generative AI.
Can you share an example of a successful generative AI project? How were these use cases identified?
Not yet, but one potential use case we've identified is the creation of educational materials for healthcare professionals, who are important customers and prescribers for pharma. A lot of time, money, and resources are spent creating this content, so handing its creation to generative AI could be a significant win.
The first that comes to my mind is ChatGPT. The use case was simple: provide a low-entry user experience that lets an average user interact with GPT models and showcases their potential.
We automated our e-commerce product descriptions. It was a task our copywriters hated doing because it was repetitive and monotonous, and that's exactly what made it a perfect candidate for generative AI.
Metrics & (safety) measures
How do you measure the success of a generative AI project, and what metrics do you use?
It's important to stipulate from the start the objectives, outcomes, and goals you want to achieve. I would therefore create clear metrics, e.g., time saved, ease of use, business impact, customer/market feedback (if it involves content created for customers), and feedback from stakeholders and end users of the generative AI outputs.
Time saved relative to traditional processes is always our favorite KPI. We've also begun evaluating automated processes versus manual ones in terms of efficacy, but I feel our understanding of what to measure, and how, still has a way to go.
First and foremost, we use the metrics that show the progression toward the selected problem. If we were working on customer support — we’d measure time to solve the problem, customer satisfaction, retention, and upsells. If we were working on reducing the time of getting drugs to market by using generative AI in creating clinical trial protocols, we’d measure the mentioned time, the human assistance needed, and how it impacted the whole process.
Now, let’s think of a different scenario: when generative AI fails. What are some common pitfalls when identifying a use case for a generative AI project, and how can they be avoided?
Common pitfalls: the people, including management, are not ready; the required data is not readily available and complications arise along the way; or the potential negative impact is too high, meaning the risks are too high and there's too much at stake in case of an unsuccessful outcome.
First and foremost: doing something that will not drive business value. Lots of the PoCs I'm seeing are "wannabe projects": we have this idea, so let's build it. With no consideration of the strategy or the jobs to be done, such a project is unlikely to succeed. My way of getting around this is to brainstorm ideas in terms of the value they drive and the chances of achieving it, to identify the low-hanging fruit.
Second, compliance. There's a lot of misunderstanding about how the models use data. A common misconception is that you need to train the model on your proprietary data, which would bake that data into the model itself and could breach privacy or IP. In fact, the models can use proprietary data to build context rather than to train, which takes that risk out of the equation.
Third, relying only on your own data. There's a lot of domain-specific or use-case-specific data you can source from outside your organization. For example, biometric data used for training and validating models must be collected with the consent of the people providing it, and there are platforms, like vAIsual, that provide such datasets.
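The "context, not training" pattern from the compliance point above can be sketched in a few lines. This is an illustrative pattern, not any specific product's API: the proprietary documents are hypothetical, and a real pipeline would retrieve them from a vector store before assembling the prompt.

```python
# Sketch of the "context, not training" pattern: proprietary documents are
# injected into the prompt at query time, so the model's weights are never
# updated with (and never memorize) your data.

PROPRIETARY_DOCS = [
    "Policy 4.2: Refunds are issued within 14 days of a return request.",
    "Policy 7.1: Enterprise contracts renew annually unless cancelled.",
]

def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble a grounded prompt; the docs travel with the request only."""
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
    )

prompt = build_prompt("How fast are refunds issued?", PROPRIETARY_DOCS)
```

The resulting prompt would be sent to a hosted model per request; since the data is only ever part of the input, the fine-tuning-related privacy and IP risks Matt describes do not arise.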
The most important — understand what your technical limitations are. Do you have clean datasets that can be leveraged? Do you have teammates willing to move past the initial letdowns that every generative AI effort will inevitably encounter?
Other considerations regarding generative AI adoption
One more thing. How to balance the potential impact of a generative AI project with ethical considerations?
Ensure the expected outcome is clear and run a risk and ethical assessment. Be clear and transparent about potential ethical considerations and discuss them openly so there are no surprises at the end or during the process.
We are dealing with this extensively as a fashion-tech company; if we train a Stable Diffusion model using our existing assets, even ones we have rights to in perpetuity, is that okay? Are there expectations around what we disclose to consumers? We’ve been very cautious by focusing on areas that are very low-risk from an ethical standpoint to start.
The first thing that comes to my mind is considering biases and hallucinations. Biases are patterns in a model or its training data that cause it to produce skewed or false outputs. For example, some early versions of picture generators would change your skin color to white when you asked them to make the person in the picture more attractive. To avoid that, we need to make sure the reward mechanism used in training is aware of the biases, and also that we filter the answers for possible breaches. The latter is crucial. For example, as a pharmaceutical company, you probably wouldn't want your Q&A bot to answer questions like "which meds can be used to commit suicide."
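The answer-filtering step mentioned above can be sketched as a guard in front of the model's output. This is a toy keyword blocklist for illustration only; production systems would use a dedicated moderation model, and the blocked topics and refusal text here are hypothetical.

```python
# Sketch of the answer-filtering step: before a pharma Q&A bot returns a
# generated answer, check the question and answer against a blocklist and
# substitute a safe refusal. Real systems use a moderation model instead.

BLOCKED_TOPICS = ("suicide", "overdose", "lethal dose")
REFUSAL = ("I can't help with that. If you're in crisis, please contact "
           "a medical professional or a local helpline.")

def filter_answer(question: str, generated_answer: str) -> str:
    """Return the generated answer, or a refusal if a blocked topic appears."""
    text = (question + " " + generated_answer).lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return REFUSAL
    return generated_answer
```

Note that the filter inspects both the question and the generated answer: a harmful intent can surface in either, so guarding only the output is not enough.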
And what about preparing the organizational culture for generative AI adoption? How to do it right?
By explaining the basics of generative AI and the possible benefits of such technologies, and by sharing examples and case studies relevant to the audience. Also, by running interactive Q&A and breakout sessions where the audience can come up with new use cases applicable to their scope of work, and by securing leadership buy-in. All of that explains why it's important to adopt generative AI. It's essential to give the team the "why," the "what," the "how," and a view of Gen AI's positive impact on their work and efficiency, so they aren't afraid of the technology but see it as a useful tool in their toolbox for solving business problems.
We have a twofold approach to this. First, we focus on finding problems or workflows our teammates don’t enjoy working on. It’s a lot easier to convince people to dive into new technologies when it makes their life easier. Then we have also been diffusing the fun side of generative AI through “Prompt-Offs” with chat tools, Midjourney workshops, and more — to make people excited about trying new tools.
MIT teaches that people are open to innovation when they have psychological safety. To get there, you need to nearly over-communicate to your team how the new technology will impact their daily routine, goals, and responsibilities.
A great exercise for building psychological safety is a question burst. You pair people who will be impacted by the change and ask them to pose 10 tough questions in 90 seconds, which the other person writes down without answering. According to MIT studies, just hearing and writing down those questions improves psychological safety.
The other aspects of culture come from your internal policies. What can an employee use, and what can't they? Who is responsible for the model's outcomes? Who can make decisions as the project progresses?
A single decision point within known constraints is one of the most essential success factors.
Do you see any new trends that can impact how we identify and select use cases for generative AI projects?
I would focus on commercial functions and potential use cases where lots of time is wasted, so generative AI's positive impact and benefits can be clearly seen. The rise of automation and the productivity demands many industries face due to the recession can be a good burning platform: an opportunity to save time and money and reach the market, or the customer, faster than competitors.
We’ve built many of our tools in-house, simply as a function of limited, tailored solutions on the market. We’ve watched the entire industry evolve in terms of the software available just in the past year, and we strongly encourage others to match use cases against what tools are available in the market.
I wouldn't say it's a trend yet, but I see generative AI as a potential opener for new business models. One that comes to mind is pay-by-data: since a lot of data is protected by privacy regulations, I can imagine a person or a company paying for some kind of software by allowing the provider to use their data to develop the solution.
How to identify the generative AI use case that will help you succeed? — summary
Let's briefly sum up what we learned from Alain, Ranjan, and Matt:
- When identifying potential use cases for generative AI projects, consider their impact, feasibility, repetitiveness, alignment with strategic goals, and delegation potential.
- Involve domain experts and stakeholders for buy-in, feasibility evaluation, and cross-domain collaboration.
- A comprehensive evaluation process is necessary to determine whether a use case is viable and aligns with organizational goals. Consider the impact of the analyzed case, requirements, time, risk, long-term potential, market trends, data availability, and compliance challenges.
- When measuring the impact of generative AI adoption, define metrics that align with project objectives, prioritize time savings and efficiency, and tailor metrics to show progression toward the selected problem. Use metrics that are relevant, measurable, and adaptable to the specific context of your generative AI project.
- Think about the potential pitfalls and how to avoid them before they occur.
- Stay up to date, because to say the generative AI space is evolving fast would be an understatement.