JourneyBench: A Challenging One-Stop Vision-Language Understanding Benchmark of Generated Images
Zhecan Wang, Junzhang Liu, Chia-Wei Tang, Hani Alomari, Anushka Sivakumar, Rui Sun, Wenhao Li, Md. Atabuzzaman, Hammad Ayyubi, Haoxuan You, Alvi Md Ishmam, Kai-Wei Chang, Shih-Fu Chang, and Chris Thomas, in NeurIPS (Datasets and Benchmarks Track), 2024.
Download the full text
Abstract
Existing vision-language understanding benchmarks largely consist of images of objects in their usual contexts. As a consequence, recent multimodal large language models can perform well with only a shallow visual understanding by relying on background language biases. Thus, strong performance on these benchmarks does not necessarily correlate with strong visual understanding. In this paper, we release JourneyBench, a comprehensive human-annotated benchmark of generated images designed to assess models’ fine-grained multimodal reasoning abilities across five tasks: complementary multimodal chain of thought, multi-image VQA, imaginary image captioning, VQA with hallucination triggers, and fine-grained retrieval with sample-specific distractors. Unlike existing benchmarks, JourneyBench explicitly requires fine-grained multimodal reasoning in unusual imaginary scenarios where language bias and holistic image gist are insufficient. We benchmark state-of-the-art models on JourneyBench and analyze performance along a number of fine-grained dimensions. Results across all five tasks show that JourneyBench is exceptionally challenging for even the best models, indicating that models’ visual reasoning abilities are not as strong as they first appear. We discuss the implications of our findings and propose avenues for further research.
Bib Entry
@inproceedings{wang2024journeybench,
  title = {JourneyBench: A Challenging One-Stop Vision-Language Understanding Benchmark of Generated Images},
  author = {Wang, Zhecan and Liu, Junzhang and Tang, Chia-Wei and Alomari, Hani and Sivakumar, Anushka and Sun, Rui and Li, Wenhao and Atabuzzaman, Md. and Ayyubi, Hammad and You, Haoxuan and Ishmam, Alvi Md and Chang, Kai-Wei and Chang, Shih-Fu and Thomas, Chris},
  booktitle = {NeurIPS (Datasets and Benchmarks Track)},
  year = {2024}
}
Related Publications
- Where Fact Ends and Fairness Begins: Redefining AI Bias Evaluation through Cognitive Biases, EMNLP-Findings, 2025
- The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects, ACL, 2025
- The Factuality Tax of Diversity-Intervened Text-to-Image Generation: Benchmark and Fact-Augmented Intervention, EMNLP, 2024
- MACAROON: Training Vision-Language Models To Be Your Engaged Partners, EMNLP-Findings, 2024
- Dataset Bias Mitigation in Multiple-Choice Visual Question Answering and Beyond, EMNLP-Findings, 2023
- Resolving Ambiguities in Text-to-Image Generative Models, ACL, 2023
- UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding, ACL-Findings, 2023