This episode explores the quiet release of Google's Gemini 2.5 Pro large language model, which the host, Jordan Wilson, considers the best he has ever used. Against a backdrop of major AI news, including Runway's Gen-4 video generator and OpenAI's record-breaking funding round, Wilson highlights Gemini 2.5 Pro's chart-topping benchmark results and its exceptional human-preference scores. Just as notable is the model's one-million-token context window, which lets it retain vast amounts of information within a single conversation and markedly improves its advanced coding abilities; Wilson demonstrates this by having it generate complex code and even build a simple game from a single prompt. This leap, coupled with free availability, positions Google as a leader in the emerging field of "thinking" AI models, though Wilson notes some initial bugs and the need for further testing. For the future of AI, this signals a shift toward more powerful, readily accessible models whose huge context windows can hold entire documents directly in the prompt, potentially diminishing the importance of RAG (Retrieval-Augmented Generation) in some applications.
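For listeners who want to try the long-context workflow discussed in the episode, below is a minimal sketch using Google's `google-genai` Python SDK. The `GEMINI_API_KEY` environment variable, the `report.txt` filename, and the `gemini-2.5-pro` model identifier are assumptions about your setup, not details from the episode. The idea is to pass an entire document in the prompt rather than running a retrieval step, illustrating how a million-token window can sidestep RAG for modestly sized corpora:

```python
import os

# pip install google-genai  (Google's Gen AI Python SDK)
from google import genai

# Assumption: your API key is stored in the GEMINI_API_KEY environment variable.
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Assumption: "report.txt" is any long document you want the model to reason over.
with open("report.txt", encoding="utf-8") as f:
    document = f.read()

# With a one-million-token context window, the whole document can go straight
# into the prompt: no chunking, embedding, or retrieval pipeline required.
response = client.models.generate_content(
    model="gemini-2.5-pro",  # assumption: check Google's docs for current model ids
    contents=f"Here is a document:\n\n{document}\n\nSummarize its key claims.",
)

print(response.text)
```

That said, retrieval still earns its keep when the corpus exceeds the context window, or when the cost and latency of re-sending a huge prompt on every query matter; the shift Wilson describes is about some applications no longer needing it, not all.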