IdealGPT: Iteratively Decomposing Vision and Language Reasoning via Large Language Models

Haoxuan You, Rui Sun, Zhecan Wang, Long Chen, Gengyu Wang, Hammad Ayyubi, Kai-Wei Chang, and Shih-Fu Chang, in Findings of EMNLP, 2023.

Download the full text


Abstract

The field of vision-and-language (VL) understanding has made unprecedented progress with end-to-end large pre-trained VL models (VLMs). However, they still fall short in zero-shot reasoning tasks that require multi-step inference. To tackle such tasks, previous works resort to a divide-and-conquer pipeline. In this paper, we argue that previous efforts have several inherent shortcomings: 1) they rely on domain-specific sub-question decomposing models, and 2) they force models to predict the final answer even when the sub-questions or sub-answers provide insufficient information. We address these limitations via IdealGPT, a framework that iteratively decomposes VL reasoning using large language models (LLMs). Specifically, IdealGPT utilizes an LLM to generate sub-questions, a VLM to provide corresponding sub-answers, and another LLM to reason over the accumulated sub-questions and sub-answers to reach the final answer. These three modules perform the divide-and-conquer procedure iteratively until the model is confident about the answer to the main question. We evaluate IdealGPT on multiple challenging VL reasoning tasks under a zero-shot setting. In particular, IdealGPT outperforms the best existing GPT-4-like models by an absolute 10% on VCR and 15% on SNLI-VE.
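
To make the pipeline concrete, below is a minimal Python sketch of the iterative loop the abstract describes. It is an illustration under stated assumptions, not the paper's implementation: ask_llm and ask_vlm are hypothetical stand-ins for calls to a large language model and a vision-language model, and the actual prompts, parsing, and confidence check in IdealGPT are more elaborate.

    # Minimal sketch of IdealGPT's iterative divide-and-conquer loop.
    # ask_llm(prompt) and ask_vlm(image, question) are hypothetical helpers
    # wrapping the LLM and VLM calls; they are assumptions, not the paper's API.

    def idealgpt(image, main_question, max_rounds=4):
        evidence = []  # accumulated (sub-question, sub-answer) pairs
        verdict = "UNSURE"
        for _ in range(max_rounds):
            # 1) Questioner LLM: decompose the main question into sub-questions,
            #    conditioned on the evidence gathered so far.
            sub_questions = ask_llm(
                f"Main question: {main_question}\n"
                f"Known evidence: {evidence}\n"
                "Propose sub-questions whose answers would help answer the main question."
            )
            # 2) Answerer VLM: answer each sub-question from the image.
            for sq in sub_questions:
                evidence.append((sq, ask_vlm(image, sq)))
            # 3) Reasoner LLM: attempt a final answer, or declare the
            #    evidence insufficient so another round is triggered.
            verdict = ask_llm(
                f"Main question: {main_question}\n"
                f"Evidence: {evidence}\n"
                "Answer the main question, or reply UNSURE if the evidence is insufficient."
            )
            if "UNSURE" not in verdict:
                return verdict  # confident final answer
        return verdict  # best available answer after max_rounds

The key design point this sketch captures is the stopping rule: unlike single-pass divide-and-conquer pipelines, the reasoner may decline to answer, which sends the loop back to the questioner for further decomposition.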


Bib Entry

@inproceedings{you2023idealgpt,
  author = {You, Haoxuan and Sun, Rui and Wang, Zhecan and Chen, Long and Wang, Gengyu and Ayyubi, Hammad and Chang, Kai-Wei and Chang, Shih-Fu},
  booktitle = {Findings of EMNLP},
  title = {IdealGPT: Iteratively Decomposing Vision and Language Reasoning via Large Language Models},
  year = {2023}
}