In this episode of The MAD Podcast, Matt Turck interviews Sebastian Borgeaud, pre-training lead for Gemini 3 at Google DeepMind, about the architecture and capabilities of Gemini 3, the shift from a data-unlimited to a data-limited regime, and how research teams are organized at DeepMind. Borgeaud discusses the incremental improvements behind Gemini 3's advances, the importance of building a system around the neural network, and the growing productivity gains from each new model generation. He shares his background, his role coordinating pre-training efforts, and his perspective on balancing short-term and long-term research goals. The conversation also covers scaling laws, the use of synthetic data, the importance of long-context capabilities, and the challenges of evaluating pre-trained models.