Training LLMs for Divide-and-Conquer Reasoning
Xiao Liang, Zhong-Zhi Li, Zhenghao Lin, Eric Hanchen Jiang, Hengyuan Zhang, Yelong Shen, Kai-Wei Chang, Ying Nian Wu, Yeyun Gong, and Weizhu Chen, in ACL, 2026.
Code | Download the full text
Abstract
Large language models (LLMs) have demonstrated strong reasoning capabilities through step-by-step chain-of-thought (CoT) reasoning. Nevertheless, at the limits of model capability, CoT often proves insufficient, and its strictly sequential nature constrains test-time scalability. A potential alternative is divide-and-conquer (DAC) reasoning, which decomposes a complex problem into subproblems to enable more effective exploration of the solution space. Although promising, our analysis reveals a fundamental misalignment between general-purpose post-training and DAC-style inference, which limits the model’s capacity to fully leverage this potential. To bridge this gap and fully unlock LLMs’ reasoning capabilities on the most challenging tasks, we propose an end-to-end reinforcement learning (RL) framework to enhance their DAC-style reasoning capacity. At each step, the policy decomposes a problem into a group of subproblems, solves them sequentially, and addresses the original one conditioned on the subproblem solutions, with both decomposition and solution integrated into RL training. Under comparable training, our DAC-style framework endows the model with a higher performance ceiling and stronger test-time scalability, surpassing CoT by 8.6% in Pass@1 and 6.3% in Pass@32 on competition-level benchmarks.
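The decompose-solve-conquer loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `llm` callable, the prompt wording, and the one-subproblem-per-line decomposition format are all assumptions made for the example.

```python
from typing import Callable, List

def dac_solve(problem: str, llm: Callable[[str], str], num_sub: int = 3) -> str:
    """Illustrative divide-and-conquer inference loop.

    `llm` is a hypothetical text-in/text-out model call; the prompts
    below are placeholders, not the prompts used in the paper.
    """
    # Step 1: ask the model to decompose the problem into subproblems
    # (assumed format: one subproblem per output line).
    decomposition = llm(
        f"Decompose into {num_sub} subproblems, one per line:\n{problem}"
    )
    subproblems: List[str] = [s for s in decomposition.splitlines() if s.strip()]

    # Step 2: solve each subproblem sequentially.
    sub_solutions = [llm(f"Solve:\n{sp}") for sp in subproblems]

    # Step 3: answer the original problem conditioned on the
    # subproblem solutions.
    context = "\n".join(
        f"Subproblem: {sp}\nSolution: {sol}"
        for sp, sol in zip(subproblems, sub_solutions)
    )
    return llm(f"Using these intermediate results:\n{context}\nAnswer:\n{problem}")
```

In the paper's RL framework, both the decomposition step and the solution steps are produced by the trained policy and optimized end-to-end, rather than handled by fixed prompts as in this sketch.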
Bib Entry
@inproceedings{jiang2026divide,
title = {Training LLMs for Divide-and-Conquer Reasoning},
author = {Liang, Xiao and Li, Zhong-Zhi and Lin, Zhenghao and Jiang, Eric Hanchen and Zhang, Hengyuan and Shen, Yelong and Chang, Kai-Wei and Wu, Ying Nian and Gong, Yeyun and Chen, Weizhu},
booktitle = {ACL},
year = {2026}
}
Related Publications
- Learning Structured Reasoning via Tractable Trajectory Control, ICML, 2026
- BRIEF-Pro: Universal Context Compression with Short-to-Long Synthesis for Fast and Accurate Multi-Hop Reasoning, ACL-Findings, 2026
- Beyond Facts: Benchmarking Distributional Reading Comprehension in Large Language Models, ACL-Findings, 2026
- MQuAKE-Remastered: Multi-Hop Knowledge Editing Can Only Be Advanced with Reliable Evaluations, ICLR, 2025
- Towards Safety Reasoning in LLMs: AI-agentic Deliberation for Policy-embedded CoT Data Creation, ACL-Findings, 2025
- QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search, ICML, 2025
- DRS: Deep Question Reformulation With Structured Output, ACL-Findings, 2025
- V-ALPHASOCIAL: Benchmark and Self-Reflective Chain-of-Thought Generation for Visual Social Commonsense Reasoning, ACL-Findings, 2025
- VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning, CVPR, 2025
- BRIEF: Bridging Retrieval and Inference for Multi-hop Reasoning via Compression, NAACL-Findings, 2025
- QUDSELECT: Selective Decoding for Questions Under Discussion Parsing, EMNLP, 2024
- Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue, EMNLP, 2024
- LLM-A*: Large Language Model Enhanced Incremental Heuristic Search on Path Planning, EMNLP-Findings, 2024
- Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs, ACL, 2024
- Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data, ACL-Findings, 2024
- Can small language models help large language models reason better?: LM-guided chain-of-thought, LREC-COLING, 2024
- IdealGPT: Iteratively Decomposing Vision and Language Reasoning via Large Language Models, EMNLP-Findings, 2023