“LLM AGI may reason about its goals and discover misalignments by default” by Seth Herd | LessWrong