This podcast episode addresses the concerns of current and former OpenAI employees about the risks and lack of transparency in the development of artificial general intelligence (AGI). It highlights the importance of interpretability research for understanding AI systems, the potential dangers of AI systems that exceed human intelligence, and the need for thorough safety considerations. The episode also explores the challenge of ensuring safety within AI labs, including the tension between pressure to release models and the need to address known problems. It discusses the lack of attention paid to safety concerns, the role of confidentiality agreements and incentive structures, and the value of anonymous processes for raising risk-related concerns. The episode concludes with principles aimed at protecting whistleblowers and the case for independent evaluation to safeguard the public.