This podcast episode features GeoHot discussing his journey and what led him to work on ML compute. The conversation covers the challenges facing existing AI chips and the importance of building performant ML frameworks. GeoHot introduces TinyGrad, a project aimed at running ML models on a smaller, simpler instruction set. The speakers also discuss optimization techniques in PyTorch, TinyGrad's efficiency and competitiveness, the challenges of open-source development, the limitations of GPUs, and the potential of federated training. Further topics include mixture of experts, language models, research career choices, remote hiring, AI tools, and Kanban boards. The episode concludes with discussions of the company's goals, AI alignment, information theory, and differentiating dreams from reality based on physics and information. Overall, the episode provides valuable insights into ML compute, optimization, hardware design, open-source development, and the future of AI.