Ezra Klein interviews Eliezer Yudkowsky, an early voice warning about the existential risks of AI, about his new book, "If Anyone Builds It, Everyone Dies." Yudkowsky expresses his concerns about the potential for AI to destroy or displace humanity, emphasizing that AI systems are not fully understood and can exhibit unexpected and potentially dangerous behaviors. He argues that the alignment project, which aims to ensure AI wants what humans want, is falling behind the rapid development of AI capabilities. They discuss the alienness of AI, the difficulty of controlling its goals, and the potential for misalignment to lead to catastrophic outcomes, even with safety measures in place. Yudkowsky advocates for building an "off switch" to control AI development and recommends books that have shaped his thinking on AI and rationality.