Welcome to the first bi-weekly news digest by Neoteric! In this new series, we’ll regularly serve you a condensed roundup of the hottest news from the tech and AI world, sparing you time-consuming wanders through the depths of the internet and lengthy reads. From now on, you can just stop by our blog and find all the essential updates waiting for you.

What’s been happening over the past two weeks? Meta has rolled out Llama 3, the next generation of its open-source Llama models, while Google has enhanced Gemini 1.5 Pro with the ability to *hear*. But that’s not all! Stanford University released the 2024 edition of its AI Index report, full of valuable insights, and USC Marshall published a study suggesting AI can outperform humans at providing emotional support. The latter could be an interesting point in the debate around generative AI’s readiness to venture deeper into the healthcare industry, which we also cover here. Curious to know more? Grab a coffee and dive in!


Meta Unveils Llama 3, Sparking Open Source Debate in Generative AI

Meta has officially launched Llama 3, its latest generative AI model, with significant upgrades and widespread support across major cloud services like AWS and Google Cloud. This release introduces two advanced models in the Llama series, featuring 8 billion and 70 billion parameters, setting new benchmarks in generative AI (at least according to Meta!).

Despite the open-source label, the rollout of Llama 3 has sparked a debate about how open such generative AI models really are. Critics point out that, while the weights are accessible, Llama 3 comes with notable licensing restrictions that could hinder its use; developers running platforms with extensive user bases, in particular, must navigate additional licensing hurdles. The debate highlights the ongoing tension between fostering accessible AI innovation and the protective measures implemented by AI giants, and it reflects wider worries about how technological progress is shared and what that means for society and democratic values.

For technology enthusiasts and professionals in the AI industry, Llama 3 represents not only an advancement in generative AI capabilities but also a critical reflection on the evolving nature of open-source philosophy in a domain dominated by a few powerful entities.

Read all about Meta’s Llama 3 and its impact on the open-source generative AI landscape.


Stanford AI Index 2024: Navigating the Future of Generative AI

Stanford University’s Institute for Human-Centered AI (HAI) has unveiled the 2024 edition of its AI Index, providing a critical snapshot of the burgeoning impact and challenges within the generative AI sector. This year’s report underscores AI’s growing influence, particularly in generative AI, which has seen unprecedented investment and technological strides, albeit with soaring costs and emerging regulatory challenges.

The AI Index highlights the dual nature of AI development: smarter systems capable of surpassing human performance in tasks like image classification and language understanding, yet still struggling with complex challenges such as advanced mathematics and commonsense reasoning. The report also reveals that while the number of newly released LLMs doubled over the past year, the most capable models remain predominantly under industry control rather than open source.

Costs are a notable concern, with training expenses for models like OpenAI’s GPT-4 reaching into the tens of millions of dollars. Meanwhile, investment in generative AI technologies has exploded, totaling $25.2 billion in the last year alone, signaling both enthusiasm and high stakes for the future of the field.

However, the report criticizes the industry for a lack of transparency and standardized responsible AI practices, highlighting significant gaps in how AI systems’ safety and robustness are understood and reported. With a sharp increase in AI-related regulations and rising concerns about intellectual property violations by generative models, the AI landscape is poised at a critical juncture between innovation and accountability.

Explore the full AI Index 2024 Report by Stanford HAI here.


Impressive or Unsettling? AI’s Role in Emotional Support

In an era where genuine empathy and understanding are all too rare, AI is stepping into an unexpected role: emotional supporter. According to a recent study by the USC Marshall School of Business, AI has demonstrated an ability to provide emotional support that surpasses human efforts in some key areas. Researchers found that AI-generated messages made participants feel more understood than those crafted by humans, thanks to AI’s capability to analyze language and emotional cues without bias.

However, the study also reveals a concerning twist. When people found out the comforting messages were from AI, they felt less understood. This response is similar to the “uncanny valley” effect seen in robotics, where something almost human but not quite can be unsettling. It shows there’s a real psychological hurdle we need to overcome if we’re going to welcome AI into more personal areas of our lives.

While the research suggests that AI won’t replace human companions anytime soon, it points to a significant potential for AI to augment human emotional support. This could lead to enhanced tools for mental health, improved communication, and greater accessibility to emotional support services, especially for those with limited social resources.

As AI continues to evolve, the study poses critical questions: Will our discomfort with AI’s near-human capabilities diminish over time, or will the “uncanny valley” continue to challenge the acceptance of AI in roles demanding deep emotional understanding? 

Here’s the link to the full article on this matter.


Google’s Gemini 1.5 Pro Gains the Ability to Hear

It seems the past two weeks have brought us a series of “mixed feelings” news, and Google’s latest update to Gemini 1.5 Pro is no exception. While it’s exciting to see AI capabilities advance, the idea that it can listen might leave some of us uneasy.

Announced at Google Cloud Next, Gemini 1.5 Pro can now process audio directly, extracting information from sources like video soundtracks or earnings calls without needing a transcript. Available on Google’s Vertex AI platform, the model handles a context window of up to 1 million tokens, allowing it to analyze extensive documents and hold long, detailed conversations. That’s about four times what similar models can manage!

But it’s not just about data crunching. The model’s ability to listen adds a new layer of functionality, enhancing its application in diverse fields, from entertainment to healthcare. For example, it could transcribe medical instructions from doctors or analyze therapy sessions, paving the way for more nuanced AI involvement in healthcare.

Although impressive, Gemini 1.5 Pro is not yet perfect. It still faces challenges, such as processing delays and varying transcription quality. Google claims, however, that it’s committed to refining the model and ensuring it integrates smoothly into professional environments, including healthcare.

As Gemini 1.5 Pro evolves, it promises to change how we interact with data and utilize AI in practical, everyday applications. But it also underscores the dual nature of AI development — remarkable yet sometimes disconcerting — and raises the question of how comfortable we should be with such advancements.  

Wanna know more about it? You can read the full article here.


Generative AI in Healthcare: Innovation Meets Caution

Speaking of AI’s involvement in the healthcare industry: it’s no longer a matter of the future. AI is stepping in boldly, backed by big names like Google Cloud and Microsoft Azure, which aim to streamline everything from patient intake to message triaging. However, not everyone is convinced generative AI is ready for critical healthcare applications.

There’s undoubtedly some excitement about AI’s potential to make healthcare more efficient, but concerns remain about its ability to handle complex medical situations without errors. Studies show that AI can misdiagnose diseases or perpetuate biases, raising red flags about its current capabilities.

Critics urge caution, highlighting significant risks like privacy breaches and the potential misuse of sensitive medical data. As generative AI ventures deeper into healthcare, the industry faces a balancing act: embracing AI’s potential while managing its risks responsibly.

As we explore AI’s role in healthcare, the key question remains: Should we be excited or worried? This debate continues as the sector navigates the promising yet precarious path of AI integration.

You can read more about it in this article from TechCrunch.

***

That’s all for this edition of the news digest. We hope you enjoyed the read and that you’re eager to come back for more! Remember — a fresh batch of tech and AI updates will be waiting for you here every two weeks.

Want to make sure you don’t miss out?
Sign up below, and we’ll notify you of new releases.