“Human-like metacognitive skills will reduce LLM slop and aid alignment and capabilities” by Seth Herd | LessWrong