Reasoning Models Sometimes Output Illegible Chains of Thought | LessWrong