MIT's DisCIPL Enables Small LMs to Match GPT-4o in Reasoning Efficiency

MIT CSAIL's DisCIPL has a large LM plan solutions to complex reasoning tasks and delegates execution to small models, outperforming GPT-4o with less compute.

MIT CSAIL researchers developed DisCIPL, a method in which a large LLM plans solutions to complex reasoning tasks and then distributes execution to smaller language models (LMs). In work published December 12, 2025, the team shows that this lets small LMs surpass GPT-4o in accuracy while approaching o1-level precision, using far less compute and energy.
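To make the division of labor concrete, here is a minimal Python sketch of the plan-then-delegate control flow. The `planner` and `follower` functions, the comma-split decomposition, and the toy arithmetic tasks are all illustrative assumptions, not DisCIPL's actual interface; they simply stand in for one call to a large model followed by parallel calls to small ones.

```python
# Minimal, hypothetical sketch of a planner/follower split.
# NOT the DisCIPL implementation: both "models" are toy stand-ins.

from concurrent.futures import ThreadPoolExecutor


def planner(task: str) -> list[str]:
    """Stand-in for the large planner LM: decompose the task into
    independent sub-queries that small follower models can handle."""
    return [f"Evaluate: {part.strip()}" for part in task.split(",")]


def follower(step: str) -> str:
    """Stand-in for a small follower LM executing one plan step."""
    expr = step.removeprefix("Evaluate: ")
    return f"{expr} = {eval(expr)}"  # toy 'execution'; a real follower decodes text


def solve(task: str) -> list[str]:
    plan = planner(task)                        # one call to the large model
    with ThreadPoolExecutor() as pool:          # followers run in parallel,
        return list(pool.map(follower, plan))   # the source of the latency gains


print(solve("2 + 3, 7 * 6, 10 - 4"))
# -> ['2 + 3 = 5', '7 * 6 = 42', '10 - 4 = 6']
```

The design point the sketch illustrates is that the expensive model is invoked once, while the cheap per-step work fans out across small models that can run concurrently.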

Lead author Gabriel Grand notes that DisCIPL improves inference efficiency for constraint-based outputs, addressing the rising energy demands of LMs. Outside experts such as UC Berkeley's Alane Suhr praise its parallelization for lowering latency and parameter counts, and for its gains in transparency and controllability, qualities key to deploying efficient, interpretable AI.

Source: news.mit.edu

