Sign up for the webinar with Julien Simon, Chief Evangelist at Hugging Face, and learn about fine-tuning gen AI models for specific tasks and use cases. See when it’s worth using a fine-tuned model, what it takes to specialize a foundation model for various applications, and how to ensure compliance, security, and privacy in a corporate environment. Get ready to successfully adopt generative AI.
Let’s face it: there’s no such thing as a perfect AI model. No model is 100% accurate all the time. Yet, when implementing AI-powered solutions, we all want them to be reliable. So, how can we make them perform better for our chosen use cases?
In the next episode of AI Talks, we will discuss training generative AI models for specific tasks and try to find out when fine-tuned models outperform foundation models and how to use them in an enterprise environment.
Julien is the Chief Evangelist at Hugging Face, where he helps enterprise customers navigate the AI and Large Language Model landscape and works with strategic partners. He previously spent 6 years at Amazon Web Services as the Global Technical Evangelist for AI & Machine Learning. Before that, he served 10 years as CTO/VP of Engineering at large-scale startups. Julien is a frequent speaker at technical, industry, and company events. Last but not least, his brutally honest blog posts and videos seem to be quite popular.
Matt is the founder of Neoteric, where he helps enterprises innovate with AI, and has founded three VC-backed startups. He is responsible for the TechSeed strategy and acceleration program. Matt has been experimenting with AI since 2008 and leading commercial implementations since 2017, with experience across industries such as telecom, healthcare, and education.