
Google Cloud CEO Thomas Kurian details how the company maintains a competitive edge in AI infrastructure through proprietary TPU silicon and long-term capital planning. By controlling the full stack, from chip manufacturing to data center deployment, Google avoids the compute constraints faced by other frontier labs while selling both tokens and hardware to third-party competitors. The discussion highlights the shift toward agentic workflows, in which AI models autonomously manage tasks across enterprise systems, a change that requires specialized inference chips and ultra-low-latency storage. Kurian emphasizes that Google's strategy balances internal model development with broad platform services, including continuous red teaming and automated code repair to mitigate cybersecurity risks. As demand for compute grows, Google prioritizes energy efficiency and distributed data center deployment to sustain long-term growth and support the next generation of 10-trillion-parameter models.