
In this webinar, hosted by Together AI, Dylan Patel of SemiAnalysis discusses NVIDIA's Blackwell architecture and its implications for AI workloads. Patel highlights Blackwell's performance and cost-effectiveness gains over previous generations, detailing the architectural advances and the software-optimization challenges involved in achieving full utilization. The conversation covers the Blackwell rollout and how it compares to Hopper, the strategies companies are adopting around long-term capacity commitments, and the impact of new system architectures on data centers, including power, cooling, and retrofit considerations. The discussion also explores the role of specialized inference silicon, how AI companies are balancing model training against inference, and the emergence of Neo Clouds in response to the AI revolution's changing infrastructure needs.