
Recursion at inference time offers a path to improving model reasoning performance beyond simply increasing model size. Traditional LLMs operate as one-shot feed-forward processes, which limits their ability to solve complex, multi-step problems like sorting or Sudoku. Hierarchical Reasoning Models (HRM) and Tiny Recursive Models (TRM) address this by using recursive loops and latent memory states, effectively creating a computational "tape" similar to a Turing machine's. By employing truncated backpropagation through time and deep equilibrium learning, these architectures achieve state-of-the-art results on benchmarks like the ARC Prize with significantly fewer parameters than standard transformers. Integrating these recursive, compute-efficient reasoning methods into large-scale models promises to overcome the current limitations of token-based reasoning, enabling more robust, general-purpose AI agents capable of discovering complex algorithms from first principles rather than relying solely on pre-trained human knowledge.
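The core mechanism can be illustrated with a minimal sketch: a single small network is applied repeatedly, refining a latent "scratchpad" state and a running answer on each pass, and (as in deep equilibrium learning) the iteration settles toward a fixed point. This is a toy in NumPy under our own assumptions, not the actual HRM/TRM implementation; the names `f`, `z`, `y`, and `n_steps` are illustrative, and the real models use small trained transformers rather than a random linear map.

```python
import numpy as np

# Toy sketch of a TRM-style recursive inference loop (illustrative only).
rng = np.random.default_rng(0)
d = 8                                  # latent dimension (assumed)
W = rng.normal(size=(2 * d, d)) * 0.1  # tiny stand-in "network": linear + tanh

def f(z, y):
    """One recursive step: map the pair (z, y) to a refined vector."""
    return np.tanh(np.concatenate([z, y]) @ W)

def recursive_infer(x, n_steps=16):
    """Iterate the same tiny network, reusing z as a latent memory 'tape'.

    More steps means more inference-time compute with zero extra
    parameters -- the key trade-off the article describes.
    """
    z = np.zeros(d)   # latent reasoning state
    y = x.copy()      # current answer estimate
    for _ in range(n_steps):
        z = f(z, y)   # update latent state from the current answer
        y = f(y, z)   # update the answer from the refined state
    return y

x = rng.normal(size=d)
out = recursive_infer(x)
print(out.shape)  # (8,)
```

Because the random map here is contractive, running more steps drives the output toward a fixed point, which is the equilibrium that deep equilibrium training exploits: gradients are taken only through the final step (or a truncated window) instead of unrolling the whole loop.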