
The podcast explores the current state and future trends of Large Language Models (LLMs), focusing on advances since last year and expectations for 2026. Independent LLM researcher Sebastian Raschka highlights the shift toward post-training techniques, such as reasoning and tool use, as key areas of development. The discussion covers practical applications of LLMs, including code improvement and workflow automation, with both host and guest sharing their experiences with custom tools. They address the importance of verifiable rewards in training and the potential for extending verification paradigms beyond math and code. The conversation also touches on inference-scaling techniques such as self-consistency and self-refinement, and on the challenges and opportunities of agentic uses of LLMs, including multi-agent systems.
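To make the self-consistency idea mentioned above concrete: the model is sampled several times on the same prompt and the most frequent answer wins by majority vote. This is a minimal sketch, not the episode's code; `sample_fn` and `mock_llm` are hypothetical stand-ins for a stochastic LLM call.

```python
import random
from collections import Counter

def self_consistency(sample_fn, prompt, n_samples=5):
    """Sample several candidate answers for the same prompt and
    return the most common one (majority vote)."""
    answers = [sample_fn(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical stand-in for an LLM sampled with temperature > 0:
# it usually answers correctly, but sometimes slips.
def mock_llm(prompt):
    return random.choice(["42", "42", "42", "7"])

print(self_consistency(mock_llm, "What is 6 * 7?", n_samples=11))
```

In practice the sampled answers come from decoding the same model with nonzero temperature, so the vote filters out occasional reasoning slips without any extra training.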