This podcast episode explores advances in text-to-image models, focusing on the key role of techniques like latent diffusion and the growing accessibility of computing power and data. It traces the origins and evolution of Stable Diffusion as a groundbreaking image generation model and surveys the landscape of open-source image generation models.

The episode then turns to Runway's approach to open-sourcing and video generation, highlighting its focus on improving models and addressing challenges such as temporal consistency. It explores the capabilities and architecture of RunwayML's video generation model, along with ongoing advances in AI-generated video and their impact on filmmaking.

The conversation closes with the transformative impact of this technology on storytelling and the emergence of a new art form, emphasizing the importance of education and of adapting to these advancements. Also discussed are Runway's custom model fine-tuning capabilities, the challenge of maintaining consistency in AI-generated content, the company's distinctive approach to art and technology, and Runway's impact on various industries and communities. The episode highlights the Runway AI Film Festival and the upcoming release of the next generation of generative AI models.