“Surprising LLM reasoning failures make me think we still need qualitative breakthroughs for AGI” by Kaj_Sotala (LessWrong)