The discussion centers on AI safety, existential risk, and the future of AI development with Jesse Hoogland, a leading researcher in the field. Hoogland details his path to safety research, driven by a growing sense of dread about AI's rapid advancement and potential misuse. He argues that AI's increasing capabilities, combined with uncertainty about its motivations, pose significant risks, including potential existential threats to humanity. The conversation explores the divide between theoretical and empirical approaches to AI safety, the role of philanthropic funding in the field, and the challenges of aligning AI with human values. Hoogland also touches on the importance of understanding AI internals and the potential for both positive and negative outcomes as AI becomes more integrated into society.