This podcast episode features Dan Hendrycks, an AI safety researcher, discussing AI safety and its geopolitical implications. The conversation covers the roles of AI labs versus independent safety organizations, the distinction between AI alignment and AI safety, and the potential weaponization of AI in cyber warfare, biological weapons development, and drone technology. Hendrycks proposes a "Mutual Assured AI Malfunction" (MAIM) deterrence regime, drawing parallels to nuclear deterrence, to prevent a destabilizing AI arms race. He argues that while companies can feasibly implement some basic safety measures, broader geopolitical pressures and the inherent difficulty of controlling AI development significantly complicate efforts to mitigate risks.