Jeremy Nixon, an AI entrepreneur, discusses the evolution and future of AI with Steve Hsu. Nixon argues that foundation models have already achieved AGI, disagreeing with the apocalyptic framing of AI risk prevalent in some rationalist communities. He traces the history of AI safety concerns, noting Elon Musk's early fears about DeepMind and the origins of OpenAI. Nixon critiques the "heroic ideology" driving existential risk concerns, suggesting a focus on "P-Life"—maximizing the probability of everyone living—rather than solely minimizing existential threats. He envisions a future where AI drives scientific progress, particularly in personalized medicine, but acknowledges potential societal and political obstacles to technological advancement.