This episode explores the growing importance of compute in the rapidly evolving AI landscape. When ChatGPT first launched, compute was barely part of the public conversation; the rise of generative AI and large language models (LLMs) has since pushed GPUs and compute to the forefront. More significantly, with open-source models rapidly closing the gap with proprietary ones, access to sufficient compute has become a critical priority even for medium-sized businesses that previously never had to think about it.

The discussion details how DistributeAI addresses this challenge by aggregating spare computing power from various sources, creating a more affordable and accessible AI ecosystem. The conversation also examines the limitations of current chip technology, explaining why even major tech companies struggle to meet the ever-growing demand for compute, despite the simultaneous trend of models becoming both smaller and more powerful.

Looking to the future, the hosts and guest speculate on the potential for edge computing to relieve some of the strain on centralized systems, including the possibility of powerful LLMs running on smartphones within the next five years. The episode concludes with the observation that the race between open-source and closed AI models is fundamentally reshaping the compute landscape, and that business leaders should prioritize flexibility and adaptability in their AI strategies to navigate this rapidly changing environment.