This episode explores the Model Context Protocol (MCP), a new open protocol designed to extend the capabilities of large language models (LLMs). Against the backdrop of existing challenges in integrating LLMs with internet services, MCP offers a standardized framework that simplifies connecting AI applications to online tools and databases. More significantly, the discussion highlights MCP's potential to accelerate the development of agentic AI, enabling AI systems to interact with the web more efficiently by calling APIs directly rather than simulating human-like web browsing. For instance, using the CoinMarketCap API for real-time crypto pricing is presented as a more efficient alternative to web scraping. As the discussion pivots to the future of AI, the panelists debate the relative merits of different approaches, such as training LLMs to use computers the way humans do versus leveraging APIs via MCP. Beyond the technical aspects, the conversation also touches on the competitive landscape of the AI industry, noting the challenges Meta faces in developing its LLaMA models and the rising prominence of standalone AI apps like Grok. Emerging industry patterns reflected in the discussion include the increasing importance of open protocols, the ongoing race for consumer adoption, and the potential for AI to revolutionize e-commerce through intelligent shopping assistants.
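
To make the API-over-scraping point from the discussion concrete, here is a minimal sketch of an MCP server that exposes a single price-lookup tool wrapping CoinMarketCap's quotes endpoint, so an agent can fetch live prices instead of browsing or scraping a page. It uses the FastMCP helper from the official MCP Python SDK; the server name, tool name, and the CMC_API_KEY environment variable are illustrative assumptions, and the endpoint and response shape should be checked against CoinMarketCap's current documentation.

```python
# Minimal sketch: an MCP server exposing a crypto-price tool.
# Assumes the MCP Python SDK ("mcp" package) and "requests" are installed,
# and that a CoinMarketCap API key is available in CMC_API_KEY.
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crypto-prices")

# CoinMarketCap's latest-quotes endpoint (assumed; verify against their docs).
CMC_URL = "https://pro-api.coinmarketcap.com/v1/cryptocurrency/quotes/latest"


@mcp.tool()
def get_price(symbol: str, convert: str = "USD") -> float:
    """Return the latest price for a crypto symbol (e.g. BTC) via the CoinMarketCap API."""
    resp = requests.get(
        CMC_URL,
        headers={"X-CMC_PRO_API_KEY": os.environ["CMC_API_KEY"]},
        params={"symbol": symbol, "convert": convert},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    return data[symbol]["quote"][convert]["price"]


if __name__ == "__main__":
    # Runs over stdio by default, so an MCP-aware client can
    # discover and call the get_price tool.
    mcp.run()
```

Registered with an MCP-aware client, a script like this lets the model resolve "what is BTC trading at?" through one structured API call rather than a multi-step browsing simulation, which is the efficiency argument the panel makes for MCP.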