OpenThoughts: Data Recipes for Reasoning Models

Etash Kumar Guha, Ryan Marten, Sedrick Keh, Negin Raoof, Georgios Smyrnis, Hritik Bansal, Marianna Nezhurina, Jean Mercat, Trung Vu, Zayne Rea Sprague, Ashima Suvarna, Benjamin Feuer, Leon Liangyu Chen, Zaid Khan, Eric Frankel, and others, in ICLR, 2026.

Oral

Abstract

Reasoning models have made rapid progress on many benchmarks involving math, code, and science. Yet many open questions remain about the best training recipes for reasoning, since state-of-the-art models often rely on proprietary datasets with little to no public information available. To address this, the goal of the OpenThoughts project is to create open-source datasets for training reasoning models. After initial explorations, our OpenThoughts2-1M dataset led to OpenThinker2-32B, the first model trained on public reasoning data to match DeepSeek-R1-Distill-32B on standard reasoning benchmarks such as AIME and LiveCodeBench. We then improved the dataset further by systematically investigating each step of our data generation pipeline with 1,000+ controlled experiments, which led to OpenThoughts3. Scaling the pipeline to 1.2M examples and using QwQ-32B as the teacher yields our OpenThinker3-7B model, which achieves state-of-the-art results: 53% on AIME 2025, 51% on LiveCodeBench 06/24-01/25, and 54% on GPQA Diamond, improvements of 15.3, 17.2, and 20.5 percentage points over DeepSeek-R1-Distill-Qwen-7B.


Bib Entry

@inproceedings{guha2026openthoughts,
  title = {OpenThoughts: Data Recipes for Reasoning Models},
  author = {Guha, Etash Kumar and Marten, Ryan and Keh, Sedrick and Raoof, Negin and Smyrnis, Georgios and Bansal, Hritik and Nezhurina, Marianna and Mercat, Jean and Vu, Trung and Sprague, Zayne Rea and Suvarna, Ashima and Feuer, Benjamin and Chen, Leon Liangyu and Khan, Zaid and Frankel, Eric and others},
  booktitle = {ICLR},
  year = {2026}
}

Related Publications

  1. OpenVLThinker: Complex Vision-Language Reasoning via Iterative SFT-RL Cycles, NeurIPS, 2025
  2. When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoning, COLM, 2025
  3. MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?, ECCV, 2024
  4. MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts, ICLR, 2024