This episode explores the convergence of artificial intelligence (AI) and extended reality (XR), with a focus on Google's advancements in the field. Against the backdrop of a 25-year journey in augmented reality research, Shahram Izadi discusses the integration of AI assistants like Gemini with XR hardware such as glasses and headsets. The centerpiece is a live demo of the glasses' capabilities, including real-time haiku generation, object recognition, language translation (including an attempted Farsi translation), and contextual awareness: the AI assistant identifies a book title from only a brief glimpse and provides directions to a park. The demonstration then shifts to headsets, highlighting how AI enhances interaction within immersive environments, enabling tasks like trip planning and 3D exploration. In contrast to previous AI demonstrations, this integration allows seamless control through voice, eyes, and hands, creating a more natural and intuitive experience. The takeaway is a future in which AI augments human intelligence, offering personalized, contextually aware assistance through increasingly wearable XR devices.