Dive into the second issue of Neoteric AI News Digest, where we unpack the recent developments in AI that are defining the industry’s future. We’ve curated the most impactful stories, ensuring you get the insights that matter without the digital clutter!
What’s been unfolding recently? Well, the hottest of them all is undoubtedly the release of GPT-4o — the new version of OpenAI’s flagship generative AI model. But that’s not the only notable news from the AI world! The FBI has issued a warning about the rise of AI-driven cyber threats, and OpenAI has launched a new tool to detect images generated by DALL-E 3.
Meanwhile, Apple is on the brink of a significant deal with OpenAI to integrate ChatGPT into iOS 18, and the UK’s AI Safety Institute has unveiled “Inspect,” a new toolset aimed at enhancing the safety of AI models. And last but not least, WIRED magazine has highlighted an interesting shift in the AI startup landscape, where once-buzzy generative AI products are now finding a more grounded future in enterprise applications.
Ready to dive deeper into these stories? Sit down, relax, and join us on a tour around the past weeks’ news!
A Major Step Forward: OpenAI Launches GPT-4o, Available for All
Almost exactly 14 months after the release of GPT-4 (which happened on March 14, 2023, in case you forgot), OpenAI has treated us to big news: the launch of GPT-4o. And as if that’s not exciting enough, it’s available for free, for all!
GPT-4o is an enhanced version of GPT-4, promising improved performance and new capabilities. According to OpenAI, the upgraded model boasts better context understanding, faster response times, and increased accuracy. Moreover, it is equipped with multimodal capabilities (allowing it to process text, images, and audio simultaneously) and advanced personalization features that adapt responses based on user interaction history.
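If you’re curious what that multimodal interface looks like in practice, here’s a minimal sketch of sending mixed text and image input to GPT-4o through OpenAI’s official Python SDK. The image URL is just a placeholder, and audio input goes through separate endpoints, so it isn’t shown here:

```python
# A minimal sketch, assuming the official OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY set in the environment. The image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                # Text and image parts travel in the same message
                {"type": "text", "text": "What's happening in this picture?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```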
While the updated model is freely available to all users, paid subscribers receive up to five times the message capacity, which sounds like a reasonable solution for those using ChatGPT extensively for business or research. This way, even as the overall number of ChatGPT users grows, power users can keep their workflows running without interruption.
As exciting as these developments sound, the upcoming weeks will prove whether GPT-4o is indeed as groundbreaking as OpenAI claims. Will it redefine our experiences with generative AI, or is it another step towards more sophisticated yet still imperfect tools? As we explore this new landscape, it’s clear that the journey of AI evolution continues to surprise and challenge us.
You can learn all about the new GPT-4o here.
AI-Driven Cyber Threats Surge, FBI Warns
At the RSA cybersecurity conference in San Francisco, the FBI warned of a concerning trend: cybercriminals are increasingly harnessing artificial intelligence for more sophisticated phishing and social engineering attacks. These AI-enabled schemes are not only faster and more scalable but also frighteningly convincing, with voice and video cloning used to impersonate people you trust.
The FBI’s message was clear: as technology advances, so do cybercriminals’ methods. They’re exploiting AI to create messages and media that can trick even the savviest users, leading to significant financial and data losses. The FBI urges businesses and individuals to stay vigilant, adopt multi-factor authentication, and regularly educate their teams about these high-tech scams.
When you think about it, it’s quite obvious that, given AI’s impressive capabilities, criminals would be thrilled to use them for malicious purposes. It’s a crucial matter, yet it seems to be getting far less attention than it deserves (at least for now). There’s been much talk about data privacy and security in light of the rapid pace of AI development, but most of it has focused on how AI models process information, not on how their capabilities can be used for cybercrime.
If you want to read more about this topic, take a look at this article by The Wall Street Journal.
OpenAI Introduces Tool to Detect Images Created by DALL-E 3
OpenAI presented a new tool that promises to identify images created by its DALL-E 3 model. The company claims the tool’s accuracy reaches around 98% — although the score drops if the images have been modified by cropping or color changes. The innovation was announced as part of a broader effort to address the surge of fake images influencing public perception, especially with the 2024 election campaigns heating up.
The concern is real: as generative AI tools like DALL-E 3 become more accessible, the potential for misinformation grows. To counter this, OpenAI is collaborating with industry giants like Microsoft and Adobe to establish standards for online image verification. They’re also launching a $2 million “societal resilience” fund to boost AI education and awareness.
However, the tool is still far from perfect. It struggles with images from rival AI generators, and its reduced accuracy on edited images suggests it can still be fooled too easily. OpenAI says, however, that it is focused on improving the technology and is inviting external researchers to help refine it.
You can read more about OpenAI’s AI-generated image detection tool here.
Apple Nears Deal with OpenAI to Use ChatGPT in iOS 18
Apple is close to sealing a deal with OpenAI to integrate ChatGPT into the next iPhone update, iOS 18. This move is part of Apple’s larger strategy to boost its devices with more powerful AI features, showing its commitment to innovation while keeping user privacy and security at the forefront.
The company has also engaged with Google regarding its Gemini chatbot; however, these discussions remain unresolved (at least for now). Meanwhile, the upcoming Worldwide Developers Conference in June is set to reveal Apple’s latest AI developments. Many of these features will run on Apple’s own processors in its data centers, keeping sensitive data within Apple’s infrastructure and away from third-party servers.
Tim Cook, Apple’s CEO, has shared his enthusiasm for AI’s potential but remains cautious. He has assured that new AI features will be introduced thoughtfully, maintaining Apple’s signature blend of hardware, software, and services. This approach reflects the company’s aim to deliver innovative solutions without compromising the trust and security that users expect.
Here’s the link to a full article on this matter.
“The Unsexy Future of Generative AI Is Enterprise Apps”
— says one of WIRED’s headlines, and it’s hard not to click on it. The article looks at gen-AI startups that launched buzzy products when the boom started and are now adjusting their offerings to make them more attractive to business clients.
Companies like Tome and Glean are leading examples, pivoting from broad consumer applications to narrowly focused enterprise solutions to secure sustainable revenue streams in a challenging market.
Tome, initially launched as a broad-use AI presentation tool, shifted its focus to sales and marketing teams, tripling its pricing to better align with business needs. Glean, founded with insights from a decade at Google, developed a workplace search engine tailored for complex corporate data systems.
This shift shows that AI startups are now prioritizing reliable, compliant tools, especially in sectors like legal and medical, where accuracy is crucial. That pushes these companies to reinforce their systems with stronger safeguards and compliance measures.
As AI startups navigate this evolving market, many are diversifying their technology sources away from giants like OpenAI, exploring alternatives like Anthropic’s Claude or Meta’s Llama 3, aiming for greater autonomy and competitive edge.
For more on this interesting shift in the AI startup world — read the full article here.
UK’s AI Safety Institute Launches “Inspect” to Enhance AI Model Safety
The UK AI Safety Institute recently released Inspect, a new toolset to aid the development of AI evaluations for industry, research organizations, and academia. Available under an open-source MIT License, Inspect assesses AI models’ core knowledge and reasoning abilities and provides a score based on these evaluations.
It’s the first such platform from a state-backed body for broad use. Ian Hogarth, chair of the AI Safety Institute, emphasized Inspect’s role in promoting a unified approach to AI evaluations. He hopes it will become a fundamental resource for the global AI community to conduct safety tests.
Given the complexities of AI benchmarking and the opacity of advanced models, Inspect is designed to adapt to new testing methods. It consists of datasets for evaluations, solvers to perform tests, and scorers to compile results.
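To make that dataset/solver/scorer structure more concrete, here’s a minimal sketch of what an evaluation built with Inspect can look like, based on the toolset’s open-source Python package (inspect_ai). The task name and samples are invented for illustration, and exact parameter names may differ between versions:

```python
# pip install inspect-ai
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.solver import generate
from inspect_ai.scorer import match


@task
def capital_cities():
    return Task(
        # Dataset: the samples the model will be evaluated on
        dataset=[
            Sample(input="What is the capital of France?", target="Paris"),
            Sample(input="What is the capital of Japan?", target="Tokyo"),
        ],
        # Solver: here, simply generate a completion for each sample
        solver=generate(),
        # Scorer: check the model's output against the target answer
        scorer=match(),
    )
```

The evaluation would then be run against a chosen model from the command line, e.g. `inspect eval capital_cities.py --model openai/gpt-4o`.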
Clément Delangue, CEO of Hugging Face, has proposed potentially integrating Inspect with Hugging Face’s model library or creating a public leaderboard to display the toolset’s evaluation results.
This release coincides with broader international efforts to refine AI model testing. Following a recent partnership between the US and the UK to advance AI model assessments, the US plans to establish its own AI safety institute focused on evaluating risks from AI and generative AI technologies.
You can read the full AI Safety Institute press release here.
***
That’s it for this issue of our news digest. We hope you found the stories interesting and are ready for more! Don’t forget, we’ll have a new batch of AI news for you in two weeks.
Want to make sure you don’t miss out?
Sign up below, and we’ll notify you of new releases.