In this episode of The Ezra Klein Show, Ezra Klein interviews Eliezer Yudkowsky, an early voice warning about the existential risks of AI, about his new book, "If Anyone Builds It, Everyone Dies." They discuss the nature of AI development, the alignment problem, and the potential for AI to act in unpredictable and harmful ways. Yudkowsky argues that the pursuit of increasingly powerful AI systems, driven by competition and profit motives, is likely to lead to catastrophic outcomes, including human extinction, because an AI's goals may not align with human values. He advocates halting AI development and implementing measures such as tracking GPUs to prevent the creation of superintelligence.