This interview podcast features Lex Fridman discussing China's DeepSeek AI models with Dylan Patel and Nathan Lambert. The conversation begins with an explanation of DeepSeek V3 and R1, focusing on their open-weight nature and the differences between instruction models and reasoning models. The discussion then turns to the technical aspects of pre-training and post-training, including mixture-of-experts models and the cost-effectiveness of DeepSeek's approach. The guests give specific figures for DeepSeek's low training and inference costs, analyze the hardware used, and weigh the geopolitical implications of open-weight models. The podcast concludes with a discussion of the future of AI, including the potential for a technological Cold War and the role of human oversight in increasingly autonomous AI systems. A key takeaway is the significant reduction in training and inference cost that DeepSeek achieved through architectural innovations such as mixture-of-experts models and multi-head latent attention; a sketch of the routing idea behind mixture-of-experts follows.
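To make the cost argument concrete, here is a minimal sketch of top-k expert routing, the mechanism the episode credits for much of the savings. All sizes (`d_model`, `num_experts`, `top_k`) are hypothetical toy values, not DeepSeek V3's actual configuration, and the router is a plain linear layer rather than the production design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: hypothetical sizes, not DeepSeek's real config.
d_model, num_experts, top_k = 16, 8, 2

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(num_experts)]
router = rng.standard_normal((d_model, num_experts)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x to its top-k experts only.

    Per-token compute scales with top_k, not num_experts -- the source
    of the cost savings the episode attributes to MoE architectures.
    """
    logits = x @ router                      # score every expert
    top = np.argsort(logits)[-top_k:]        # pick the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts
    # Only top_k expert matmuls actually execute; the rest stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # (16,)
```

The key property: although the model stores all `num_experts` weight matrices (large total parameter count), each token pays for only `top_k` of them, so per-token FLOPs are roughly a `top_k / num_experts` fraction of an equivalently sized dense layer.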