
In this monologue podcast, Marina Wyss provides a high-level overview of AI Engineering, explaining its core concepts in a simplified manner. She differentiates AI Engineering from machine learning engineering, focusing on the use of foundation models and their adaptation for specific applications. The podcast covers key topics such as large language models (LLMs), the transformer architecture, attention mechanisms, and the importance of parameters and hyperparameters. It also delves into prompt engineering, fine-tuning, quantization, distillation, Retrieval-Augmented Generation (RAG), embeddings, agents, inference, and model evaluation metrics like perplexity, BLEU, and ROUGE. The episode also includes a sponsorship message for DataCamp, highlighting their AI engineering tracks.