“ASI existential risk: Reconsidering Alignment as a Goal” by habryka | LessWrong