In this interview podcast, the host speaks with Nick Joseph, Head of Pre-Training at Anthropic, about the fundamentals of pre-training AI models, Anthropic's approach to data, alignment, and infrastructure, and how advances in AI stem directly from progress in pre-training. Nick shares his background, including his time at Vicarious and OpenAI, and discusses the evolution of pre-training, the importance of compute, and the challenges of scaling AI models. The conversation covers next-word prediction, model architecture, infrastructure, data quality, alignment, and the future of AI, including potential paradigm shifts and the role of startups in the AI ecosystem.