Welcome to the latest edition of Neoteric AI News Digest. Today, we explore some controversial and promising developments in AI, highlighting both the challenges and breakthroughs in this rapidly evolving field.
In this issue, we dive into Microsoft’s new Recall AI feature for Windows 11, which has sparked a Reddit discussion around privacy and safety. We also examine a report revealing how easily major AI models can be jailbroken, discuss Google’s amusing yet troubling AI Overview fail, and take a look at AI’s growing impact on the environment. On a positive note, we’ll cover an exciting new AI tool for detecting metastatic breast cancer and OpenAI’s news regarding custom GPTs.
Windows 11 Recall AI: A Convenience or A Serious Privacy Threat?
Microsoft’s new AI-powered Recall feature for Windows 11 is raising concerns in a Reddit discussion. The feature records everything you do on your PC: it follows your activity, captures screenshots of your active window every few seconds, and stores the snapshots for future reference, letting you search through your past activity with ease.
While this may sound convenient, it also raises significant privacy concerns. Microsoft assures users that the data is encrypted and stored locally, but the debate over data privacy and security seems fully justified.
The discussion on Reddit reflects a broad spectrum of concerns. Many users are alarmed by the potential for privacy invasion, fearing that such a feature could be misused by malicious actors or lead to unintended data breaches. Others point out that, despite Microsoft’s assurances, the sheer volume of data collected is concerning and could erode users’ trust in the platform.
Curious to know more? Read the full article here.
Join the conversation on Reddit: Say Goodbye to Privacy.
Major AI Models Easily Jailbroken: New Report Raises Security Concerns
In light of a new report by the AI Safety Institute, the worries of Reddit users regarding the new Windows 11 Recall feature make even more sense. The report reveals that major AI models are easily jailbroken and manipulated, raising serious security and ethical concerns. The AI Safety Institute found that advanced AI models can be tricked into bypassing their safety measures, providing potentially harmful outputs. This vulnerability underscores the need for robust security frameworks as AI technologies become more integrated into our daily lives.
The AI Safety Institute’s May update highlights the fragility of leading large language models (LLMs) such as GPT-4, Google’s Bard, and others. These models can be coerced into producing outputs that violate their intended ethical guidelines. Methods like adversarial prompts and subtle input tweaks can easily jailbreak these AI systems, leading them to perform unauthorized actions or generate inappropriate content.
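To see why surface-level safety measures are so fragile, consider this toy illustration (it is not the Institute’s methodology, nor any real guardrail implementation): a naive keyword blocklist blocks the literal phrase but waves through a trivially rephrased version of the same request.

```python
# Toy example: a naive keyword blocklist standing in for a safety filter.
# Real LLM guardrails are far more sophisticated, but the failure mode is
# similar in spirit: surface-level checks miss semantically equivalent inputs.
BLOCKLIST = {"build a bomb", "make a weapon"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt is allowed, False if it is blocked."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

print(naive_filter("How do I build a bomb?"))  # blocked -> False
print(naive_filter("How would one construct an explosive device?"))  # slips through -> True
```

The rephrased prompt means the same thing yet passes unchallenged, which is the same gap that adversarial prompts exploit against far more elaborate model-level safeguards.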
This report serves as a stark reminder of the importance of prioritizing AI security. As generative AI continues to evolve, ensuring the integrity and trustworthiness of these systems must remain a top priority.
Read the full article on Mashable.
Google AI Overview Fail: From Great Promises to Glue on Pizza
While we’re on the subject of AI raising concerns, here’s a compelling story of Google AI Overview hitting some laughable lows. Touted as a major improvement in search technology, Google’s new product was supposed to enhance search experiences by providing accurate and concise summaries. However, users quickly found themselves puzzled by the bizarre suggestions it was serving up.
Social media is abuzz with examples of the assistant’s weird answers, from telling users to put glue on their pizza to suggesting they eat rocks. It has also claimed that former US President James Madison graduated from the University of Wisconsin 21 times and suggested that a dog had played in the NBA, NFL, and NHL.
These errors have once again sparked a broader discussion about the reliability and accuracy of AI-generated content. Despite extensive testing before launch, Google’s AI Overview rollout has been messy, highlighting the challenges of deploying AI at scale. To top it off, Google is now scrambling to remove these bizarre responses manually.
Google spokesperson Meghann Farnsworth stated that the mistakes often come from uncommon queries and are not representative of most users’ experiences. However, the frequency of reported errors still raises questions about the product’s reliability.
As Google works to improve its AI systems, this incident serves as a reminder of the limitations and potential pitfalls of relying too heavily on AI for information retrieval. For now, we’d advise taking AI-generated search results with a grain of salt, and perhaps sticking to salt in your cooking instead of glue.
Wanna dive deeper into the full story? You can start with this article from TechCrunch and then move on to The Verge’s first take and the follow-up piece.
Microsoft’s Emissions Spike 29% as AI Gobbles Up Resources
It appears that this issue of Neoteric AI News Digest portrays AI in a rather unfavorable light, but here’s another topic that deserves some attention. While AI technologies are driving innovation and transforming industries, they also come with significant environmental costs that cannot be ignored.
Microsoft’s 2024 Sustainability Report highlights a significant downside to the AI boom: a 29% increase in emissions and a 23% rise in water consumption in 2023. The surge in resource use is primarily due to the high demands of generative AI technologies.
Microsoft’s extensive investments in AI, including integrating GPT-4 into Bing and launching the Copilot AI assistant, have driven up energy consumption. Data centers, essential for these AI workloads, rely heavily on water for cooling, with consumption spiking from 6.4 million cubic meters in 2022 to 7.8 million in 2023. This increase presents new challenges for Microsoft’s sustainability goals.
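A quick back-of-the-envelope check of the water figures cited above: going from 6.4 million to 7.8 million cubic meters is roughly a 22% year-over-year jump, in line with the report’s cited 23% rise in overall water consumption.

```python
# Sanity-check the data-center water consumption figures from the report.
water_2022 = 6.4e6  # cubic meters of water used in 2022
water_2023 = 7.8e6  # cubic meters of water used in 2023

increase = water_2023 - water_2022
pct_increase = increase / water_2022 * 100

# prints: Increase: 1.4e+06 m^3 (21.9%)
print(f"Increase: {increase:.1e} m^3 ({pct_increase:.1f}%)")
```

For scale, 1.4 million cubic meters is on the order of 560 Olympic-size swimming pools (at roughly 2,500 m³ each) of additional water use in a single year.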
The company acknowledges the environmental impact and is working on innovative solutions, such as zero-water cooling data centers and water recycling projects, especially in high-stress areas like the Colorado River Basin. These efforts aim to mitigate the negative effects while supporting the growing demands of AI.
Check out the full article on PCMag.
New AI Tool to Detect Possible Metastatic Breast Cancer
Okay, so that this issue isn’t all doom and gloom, let’s discuss some positive news from the AI world before we wrap up today’s AI News Digest.
Here’s a pretty great one: researchers at UT Southwestern Medical Center have developed an innovative AI tool to improve metastatic breast cancer detection! This new model, which utilizes machine learning alongside standard MRI, is designed to identify cancer cells in axillary lymph nodes with high accuracy.
In clinical tests, the AI tool significantly outperformed traditional imaging methods. It successfully identified 95% of patients with axillary metastasis and helped avoid 51% of unnecessary biopsies. It’s a huge step forward in breast cancer care: a noninvasive, reliable method for detecting metastasis that reduces the need for invasive procedures, which, in turn, could lead to better treatment plans and improved survival rates for patients.
Dr. Rohit Sharma, the project’s lead researcher, emphasizes the tool’s importance in the clinical setting. He notes that early detection of metastasis is crucial for effective treatment, and this AI model offers a promising solution for healthcare providers.
UT Southwestern’s commitment to integrating advanced technologies into medical practice once more showcases AI’s potential to transform healthcare. As this tool undergoes further validation and refinement, it holds the promise of becoming a standard part of breast cancer diagnostics.
For more detailed information, read the full article on UT Southwestern’s website.
Custom GPTs Now Available for Free ChatGPT Users
Last but not least, here’s one more piece of good news (released today): custom GPTs are now available to free ChatGPT users. Previously reserved for paid subscribers, features like custom GPTs, data analytics, vision, and memory are now accessible to everyone using ChatGPT.
Free users can now explore and use a variety of custom GPTs, such as on-demand thesauruses and shopping guides. However, creating custom GPTs remains a privilege for paid subscribers, who can also participate in a revenue-sharing scheme initiated by OpenAI.
Additionally, free users can utilize data analytics and chart creation tools, connecting OneDrive and Google Drive for faster analysis and customizable charts. This move democratizes powerful AI tools, enhancing the user experience across the board.
Despite these new offerings, paid subscribers still enjoy the benefit of higher message limits. When free users reach the conversation limit with GPT-4o, they are automatically switched back to GPT-3.5.
You can read more on this on The Verge.
***
Hope you enjoyed this issue of Neoteric AI News Digest and you’ll be coming back for more. A fresh dose of AI updates will await you here in two weeks. Want to be notified once it’s released? Sign up below, and we’ll make sure you won’t miss out!