Enhancing Large Vision Language Models with Self-Training on Image Comprehension

Yihe Deng, Pan Lu, Fan Yin, Ziniu Hu, Sheng Shen, Quanquan Gu, James Zou, Kai-Wei Chang, and Wei Wang, in NeurIPS, 2024.

Abstract

Large vision language models (LVLMs) integrate large language models (LLMs) with pre-trained vision encoders, thereby activating the model's perception capability to understand image inputs for different queries and conduct subsequent reasoning. Improving this capability requires high-quality vision-language data, which is costly and labor-intensive to acquire. Self-training approaches have been effective in single-modal settings at alleviating the need for labeled data by leveraging the model's own generations. However, effective self-training remains a challenge for the unique visual perception and reasoning capabilities of LVLMs. To address this, we introduce Self-Training on Image Comprehension (STIC), a self-training approach that specifically targets image comprehension. First, the model self-constructs a preference dataset for image descriptions from unlabeled images: preferred responses are generated through a step-by-step prompt, while dis-preferred responses are generated from either corrupted images or misleading prompts. To further self-improve reasoning on the extracted visual information, we let the model reuse a small portion of existing instruction-tuning data and append its self-generated image descriptions to the prompts. We validate the effectiveness of STIC across seven different benchmarks, demonstrating substantial performance gains of 4.0% on average while using 70% less supervised fine-tuning data than the current method. Further studies investigate various components of STIC and highlight its potential to leverage vast quantities of unlabeled images for self-training. Code and data are made publicly available.
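The first STIC stage, constructing preference pairs from unlabeled images, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt strings, the `corrupt` stand-in, and the `generate` callable are all hypothetical placeholders for the LVLM and image-corruption pipeline the paper actually uses.

```python
import random

random.seed(0)

# Hypothetical prompts, assumed for illustration only.
STEP_BY_STEP_PROMPT = (
    "Describe the image step by step: list the salient objects, "
    "their attributes and relations, then summarize the scene."
)
MISLEADING_PROMPTS = [
    "Describe the image, assuming it shows a crowded beach.",
    "Describe the image, focusing on the text it contains.",
]

def corrupt(image):
    # Stand-in for an image corruption such as heavy blur or color jitter;
    # here we only tag the identifier so the sketch stays self-contained.
    return f"corrupted({image})"

def build_preference_pair(generate, image):
    # Preferred response: a detailed description elicited step by step.
    chosen = generate(image, STEP_BY_STEP_PROMPT)
    # Dis-preferred response: either a corrupted image with a normal
    # prompt, or the original image with a misleading prompt.
    if random.random() < 0.5:
        rejected = generate(corrupt(image), "Describe the image.")
    else:
        rejected = generate(image, random.choice(MISLEADING_PROMPTS))
    return {"image": image, "chosen": chosen, "rejected": rejected}

# Stub generator standing in for the LVLM's decoding call.
def stub_generate(image, prompt):
    return f"response[{image} | {prompt[:24]}]"

pairs = [build_preference_pair(stub_generate, f"img_{i}") for i in range(4)]
```

The resulting `chosen`/`rejected` pairs would then feed a preference-optimization step (e.g. DPO-style training) before the second, description-infused fine-tuning stage described in the abstract.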


Bib Entry

@inproceedings{deng2024enhancing,
  title = {Enhancing Large Vision Language Models with Self-Training on Image Comprehension},
  author = {Deng, Yihe and Lu, Pan and Yin, Fan and Hu, Ziniu and Shen, Sheng and Gu, Quanquan and Zou, James and Chang, Kai-Wei and Wang, Wei},
  booktitle = {NeurIPS},
  year = {2024}
}

Related Publications

  1. HoneyBee: Data Recipes for Vision-Language Reasoners, CVPR, 2026
  2. MotionEdit: Benchmarking and Learning Motion-Centric Image Editing, CVPR, 2026
  3. LaViDa: A Large Diffusion Language Model for Multimodal Understanding, NeurIPS, 2025
  4. PARTONOMY: Large Multimodal Models with Part-Level Visual Understanding, NeurIPS, 2025
  5. STIV: Scalable Text and Image Conditioned Video Generation, ICCV, 2025
  6. Verbalized Representation Learning for Interpretable Few-Shot Generalization, ICCV, 2025
  7. Contrastive Visual Data Augmentation, ICML, 2025
  8. SYNTHIA: Novel Concept Design with Affordance Composition, ACL, 2025
  9. SlowFast-VGen: Slow-Fast Learning for Action-Driven Long Video Generation, ICLR, 2025
  10. Towards a holistic framework for multimodal LLM in 3D brain CT radiology report generation, Nature Communications, 2025
  11. CoBIT: A Contrastive Bi-directional Image-Text Generation Model, ICLR, 2024
  12. DesCo: Learning Object Recognition with Rich Language Descriptions, NeurIPS, 2023
  13. "What's 'up' with vision-language models? Investigating their struggle to understand spatial relations.", EMNLP, 2023
  14. Text Encoders are Performance Bottlenecks in Contrastive Vision-Language Models, EMNLP, 2023
  15. MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models, ACL (short), 2023
  16. REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge, CVPR, 2023
  17. Grounded Language-Image Pre-training, CVPR, 2022
  18. How Much Can CLIP Benefit Vision-and-Language Tasks?, ICLR, 2022