DigiNews

Tech Watch Articles


Recursive Language Models

Quality: 8/10 Relevance: 9/10

Summary

Recursive Language Models (RLMs) are a framework in which a language model decomposes its input and interacts with it recursively through a REPL-like environment, handling effectively unbounded context and mitigating context rot. An RLM is defined as a wrapper around a language model that can spawn recursive LM calls inside an environment (e.g., a Python REPL) that stores the context, letting the root LM delegate sub-queries to recursive calls rather than reading the full input directly. The authors present a depth-1 instantiation and report results on benchmarks such as OOLONG and BrowseComp-Plus, where RLMs outperform single-model baselines (GPT-5 / GPT-5-mini) in long-context scenarios, at times at lower cost. They also describe observed interaction patterns (peeking, grepping, partitioning, map/summarization) and discuss limitations (speed, lack of asynchronous execution, cost control) along with future directions for scalable inference.
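The depth-1 idea can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function names (`rlm_answer`, `sub_lm`, `root_lm`) and the fixed-size chunking are assumptions, and the actual system has the root LM write code in a live REPL rather than follow a hard-coded partition-and-map loop.

```python
# Illustrative depth-1 RLM sketch (names and structure are assumptions,
# not the paper's actual implementation). The root LM never reads the
# full context; it only sees per-chunk answers from sub-LM calls,
# mirroring the partitioning and map/summarization patterns described.

from typing import Callable, List


def chunk(context: str, size: int) -> List[str]:
    """Partition a long context into fixed-size pieces."""
    return [context[i:i + size] for i in range(0, len(context), size)]


def rlm_answer(context: str, query: str,
               sub_lm: Callable[[str, str], str],
               root_lm: Callable[[str, str], str],
               chunk_size: int = 1000) -> str:
    """Map each chunk through a sub-LM call, then let the root LM
    aggregate the partial answers instead of the raw context."""
    partials = [sub_lm(c, query) for c in chunk(context, chunk_size)]
    return root_lm("\n".join(partials), query)
```

With stub models (e.g., a `sub_lm` that reports whether the query string appears in its chunk and a `root_lm` that aggregates the reports), the root call operates only on short per-chunk summaries, which is what lets the scheme scale past a single model's context window.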

🚀 Service built by Johan Denoyer