Dataset Bias Mitigation in Multiple-Choice Visual Question Answering and Beyond

Zhecan Wang, Long Chen, Haoxuan You, Keyang Xu, Noel C. Codella, Kai-Wei Chang, and Shih-Fu Chang, in EMNLP-Findings, 2023.



Abstract

Vision-language (VL) understanding tasks evaluate models’ comprehension of complex visual scenes through multiple-choice questions. However, we have identified two dataset biases that models can exploit as shortcuts to resolve various VL tasks correctly without proper understanding. The first type of dataset bias is Unbalanced Matching bias, where the correct answer overlaps with the question and image more than the incorrect answers do. The second type of dataset bias is Distractor Similarity bias, where incorrect answers are overly dissimilar to the correct answer but significantly similar to other incorrect answers within the same sample. To address these dataset biases, we first propose Adversarial Data Synthesis (ADS) to generate synthetic training and debiased evaluation data. We then introduce Intra-sample Counterfactual Training (ICT) to help models exploit the synthesized training data, particularly the counterfactual data, by focusing on intra-sample differentiation. Extensive experiments demonstrate the effectiveness of ADS and ICT in consistently improving model performance across different benchmarks, even in domain-shifted scenarios.
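The two biases described above are measurable per sample. As a minimal sketch (not the paper's actual metric), the following uses simple token overlap as a proxy: the Unbalanced Matching gap compares how much the correct answer overlaps the question versus how much the distractors do, and the Distractor Similarity gap compares distractor-to-distractor similarity against distractor-to-correct similarity. All function names here are illustrative.

```python
def token_overlap(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the context (e.g., the question)."""
    a = set(answer.lower().split())
    c = set(context.lower().split())
    return len(a & c) / max(len(a), 1)

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two answer strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def bias_scores(question: str, correct: str, distractors: list[str]) -> dict:
    # Unbalanced Matching bias: positive gap means the correct answer
    # overlaps the question more than the average distractor does.
    correct_overlap = token_overlap(correct, question)
    distractor_overlap = sum(token_overlap(d, question) for d in distractors) / len(distractors)

    # Distractor Similarity bias: positive gap means distractors are more
    # similar to one another than to the correct answer.
    sim_to_correct = sum(jaccard(correct, d) for d in distractors) / len(distractors)
    pairs = [(d1, d2) for i, d1 in enumerate(distractors) for d2 in distractors[i + 1:]]
    sim_among = sum(jaccard(d1, d2) for d1, d2 in pairs) / max(len(pairs), 1)

    return {
        "unbalanced_matching_gap": correct_overlap - distractor_overlap,
        "distractor_similarity_gap": sim_among - sim_to_correct,
    }
```

On a biased sample, both gaps come out positive, which is exactly the shortcut signal a model could latch onto without understanding the image; ADS-style synthesis would aim to drive both gaps toward zero.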


Bib Entry

@inproceedings{wang2023datasetbias,
  title = {Dataset Bias Mitigation in Multiple-Choice Visual Question Answering and Beyond},
  author = {Wang, Zhecan and Chen, Long and You, Haoxuan and Xu, Keyang and Codella, Noel C and Chang, Kai-Wei and Chang, Shih-Fu},
  booktitle = {EMNLP-Findings},
  year = {2023}
}

Related Publications

  1. Where Fact Ends and Fairness Begins: Redefining AI Bias Evaluation through Cognitive Biases, EMNLP-Findings, 2025
  2. The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects, ACL, 2025
  3. JourneyBench: A Challenging One-Stop Vision-Language Understanding Benchmark of Generated Images, NeurIPS (Datasets and Benchmarks Track), 2024
  4. The Factuality Tax of Diversity-Intervened Text-to-Image Generation: Benchmark and Fact-Augmented Intervention, EMNLP, 2024
  5. MACAROON: Training Vision-Language Models To Be Your Engaged Partners, EMNLP-Findings, 2024
  6. Resolving Ambiguities in Text-to-Image Generative Models, ACL, 2023
  7. UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding, ACL-Findings, 2023