Jensen Huang, CEO of Nvidia, discusses the company's evolution from a GPU provider into an AI factory, emphasizing disaggregated inference and the importance of how data-center space is allocated to Groq LPU-GPU combinations. Huang highlights three key computing systems: training, simulation (Omniverse), and edge computing. He also addresses concerns about the cost-effectiveness of Nvidia's inference factory, arguing that its superior throughput justifies the investment. The conversation then turns to the paradigm shift toward open-source AI agents such as OpenClaw, which Huang sees as reinventing computing through its memory system, resource management, and ability to run multiple applications. On AI regulation, Huang stresses that policymakers must understand AI's capabilities and limitations, and he advocates for the diffusion of AI technology within the United States. He also touches on Nvidia's work in healthcare and robotics, and on AI's potential to revolutionize a wide range of industries.