
MIT CSAIL’s DisCIPL uses a large LLM to plan and small models to execute complex reasoning, outperforming GPT-4o with less compute.
MIT CSAIL researchers developed DisCIPL, a method in which a large LLM plans a solution to a complex reasoning task and then distributes its execution across smaller language models (LMs). Published December 12, 2025, it lets small LMs surpass GPT-4o in accuracy and approach the precision of o1, while using far less compute and energy.
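The planner/follower split described above can be pictured with a minimal sketch. Everything below is a hypothetical illustration: the function names (`planner_llm`, `follower_sm`, `solve`) and the fake task decomposition are stand-ins, not the published DisCIPL code or its API.

```python
# Minimal sketch of a planner/follower split, loosely inspired by the
# DisCIPL idea described above. All model calls are hypothetical stubs;
# this is NOT the CSAIL implementation.

from concurrent.futures import ThreadPoolExecutor


def planner_llm(task: str) -> list[str]:
    """Stand-in for a large 'planner' model that decomposes a task
    into constrained subtasks for small follower models."""
    # The real system has the planner emit an executable inference
    # program; here we fake a simple decomposition for illustration.
    return [f"{task} -- step {i}" for i in range(1, 4)]


def follower_sm(subtask: str) -> str:
    """Stand-in for a small 'follower' model that executes one
    subtask under the planner's constraints."""
    return f"result({subtask})"


def solve(task: str) -> list[str]:
    """Plan once with the large model, then run the small followers
    in parallel, mirroring the parallelization benefit cited below."""
    plan = planner_llm(task)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(follower_sm, plan))


if __name__ == "__main__":
    for line in solve("write a poem with 8 syllables per line"):
        print(line)
```

The design choice the sketch highlights is that the expensive model is called once to plan, while the cheap models run concurrently, which is where the latency and compute savings would come from.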
Lead author Gabriel Grand notes that DisCIPL improves inference efficiency for constraint-based outputs, addressing the rising energy demands of LMs. Experts such as UC Berkeley’s Alane Suhr praise its parallelization for lowering latency, reducing parameter counts, and improving transparency and controllability, qualities key to deploying efficient, interpretable AI.
Source: news.mit.edu