Learning Structured Reasoning via Tractable Trajectory Control

Po-Nien Kung, Zhen Yang, Jeffrey Luo, Cheng-Fu Yang, Haikang Deng, Zi-Yi Dou, Yinfei Yang, Nanyun Peng, Zhe Gan, and Kai-Wei Chang, in ICML, 2026.

Spotlight (536/23,918, top 2.2%)

Download the full text


Abstract

Large language models can exhibit emergent reasoning behaviors, often manifested as recurring lexical patterns (e.g., "wait," indicating verification). However, complex reasoning trajectories remain sparse under unconstrained sampling, and standard RL offers no guarantee that diverse reasoning behaviors will be acquired. We propose systematically discovering and reinforcing diverse reasoning patterns through structured reasoning, a paradigm that requires targeted exploration of specific reasoning patterns during RL. To this end, we introduce Ctrl-R, a framework for learning structured reasoning via tractable trajectory control: it actively guides the rollout process, incentivizing exploration of the diverse reasoning patterns that are critical for complex problem-solving. The resulting behavior policy admits accurate importance-sampling estimation, supporting unbiased on-policy optimization. We further introduce a power-scaling factor on the importance-sampling weights, allowing the policy to selectively learn from exploratory, out-of-distribution trajectories while keeping optimization stable. Experiments show that Ctrl-R enables effective exploration and internalization of previously unattainable reasoning patterns, yielding consistent improvements across language and vision-language models on mathematical reasoning tasks.
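The power-scaling factor mentioned in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the function name, the per-token log-probability inputs, and the default exponent are all hypothetical. The idea shown is that raising the importance ratio pi_theta/pi_b to a power alpha < 1 tempers extreme weights from out-of-distribution trajectories while preserving their relative ordering.

```python
import math

def power_scaled_is_weights(logp_target, logp_behavior, alpha=0.5):
    """Hypothetical sketch of power-scaled importance-sampling weights.

    For each token, the plain IS weight is pi_theta(a|s) / pi_b(a|s);
    here we return w = (pi_theta / pi_b) ** alpha, computed in log space
    for numerical stability. With alpha = 1 this reduces to standard IS;
    alpha < 1 shrinks large ratios, stabilizing optimization on
    exploratory, out-of-distribution trajectories.
    """
    return [
        math.exp(alpha * (lt - lb))
        for lt, lb in zip(logp_target, logp_behavior)
    ]
```

For example, a token whose target policy assigns twice the behavior-policy probability gets weight 2 under plain IS but only sqrt(2) ≈ 1.41 with alpha = 0.5, so a batch dominated by a few high-ratio tokens contributes a less peaked gradient estimate.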


Bib Entry

@inproceedings{kung2026structured,
  title = {Learning Structured Reasoning via Tractable Trajectory Control},
  author = {Kung, Po-Nien and Yang, Zhen and Luo, Jeffrey and Yang, Cheng-Fu and Deng, Haikang and Dou, Zi-Yi and Yang, Yinfei and Peng, Nanyun and Gan, Zhe and Chang, Kai-Wei},
  booktitle = {ICML},
  year = {2026}
}

Related Publications

  1. Training LLMs for Divide-and-Conquer Reasoning, ACL, 2026
  2. BRIEF-Pro: Universal Context Compression with Short-to-Long Synthesis for Fast and Accurate Multi-Hop Reasoning, ACL-Findings, 2026
  3. Beyond Facts: Benchmarking Distributional Reading Comprehension in Large Language Models, ACL-Findings, 2026
  4. MQuAKE-Remastered: Multi-Hop Knowledge Editing Can Only Be Advanced with Reliable Evaluations, ICLR, 2025
  5. Towards Safety Reasoning in LLMs: AI-agentic Deliberation for Policy-embedded CoT Data Creation, ACL-Findings, 2025
  6. QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search, ICML, 2025
  7. DRS: Deep Question Reformulation With Structured Output, ACL-Findings, 2025
  8. V-ALPHASOCIAL: Benchmark and Self-Reflective Chain-of-Thought Generation for Visual Social Commonsense Reasoning, ACL-Findings, 2025
  9. VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning, CVPR, 2025
  10. BRIEF: Bridging Retrieval and Inference for Multi-hop Reasoning via Compression, NAACL-Findings, 2025
  11. QUDSELECT: Selective Decoding for Questions Under Discussion Parsing, EMNLP, 2024
  12. Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue, EMNLP, 2024
  13. LLM-A*: Large Language Model Enhanced Incremental Heuristic Search on Path Planning, EMNLP-Findings, 2024
  14. Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs, ACL, 2024
  15. Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data, ACL-Findings, 2024
  16. Can small language models help large language models reason better?: LM-guided chain-of-thought, LREC-COLING, 2024
  17. IdealGPT: Iteratively Decomposing Vision and Language Reasoning via Large Language Models, EMNLP-Findings, 2023