This podcast episode explores the significance of long context lengths for the intelligence of AI models and the importance of understanding the success rate and economic impact of long-horizon tasks. The conversation covers the role of association and attention in intelligence and memory, as well as the potential of long-context evals for AI models. It also discusses how AI models are improved through experimentation and iteration, the importance of compute in accelerating AI research progress, and the role of chain of thought in training AI models. The speakers consider the potential landscape of AI agents, scaling in models, and the speaker's approach to problem-solving. The episode also highlights the value of serendipitous encounters and the weight of demonstrated ability and agency in the hiring process. The conversation then turns to associations in models, the behavior of circuits in language models, and the speaker's work on superposition, along with specialization in mixture models and the disentanglement of neurons. The episode concludes with a discussion of the surprising nature of intelligence and the challenge of understanding model behavior.