This episode explores the future of enterprise AI adoption and the evolving models for deploying it, focusing on the challenges and opportunities of integrating generative AI into businesses. Against the backdrop of several deployment models (consulting, product-led, and out-of-the-box solutions), the discussion points to the likely success of a hybrid approach that pairs hands-on support with customizable platforms. The conversation then turns to the crucial role of data, emphasizing the shift from RLHF data toward expert labeling and synthetic data generation as ways to enhance model capabilities. Applications of generative AI in healthcare (note-taking for doctors) and finance (research for wealth managers) are cited as examples that have achieved product-market fit. Pivoting to model architecture, the discussion highlights the limitations of the current transformer architecture and the need for models that can learn from experience. The interview concludes by underscoring the importance of vertical integration in building successful AI applications, the advantages held by companies that develop both models and applications, and potential future model milestones such as improved reasoning and learning from user interactions.