
The podcast explores the concept of recursive self-improvement (RSI) in AI, in which AI autonomously builds the next generation of AI. Mostafa Dehghani, a leading AI researcher at Google DeepMind, explains that this process is already underway in many labs, with new models being built using previous generations. A key challenge lies in evaluation—measuring the quality of AI-generated improvements—and in keeping models grounded in reality to avoid "model collapse," where a model loses its ability to generalize. The conversation also covers continual learning, NanoBanana2, the balance between specialized and generalized AI models, and the importance of providing AI systems with real-world data to enhance their capabilities.