V-ALPHASOCIAL: Benchmark and Self-Reflective Chain-of-Thought Generation for Visual Social Commonsense Reasoning

Zongyu Lin, Zhikun Xu, Xiaohan Song, Yixin Wan, Xingcheng Yao, Tsung-Han Lin, Selina Song, Pranav Subbaraman, Ben Zhou, Kai-Wei Chang, and Yizhou Sun, in ACL-Findings, 2025.


Abstract

Social commonsense reasoning naturally involves both the verbal and non-verbal cues of a social interaction, so it is important for Large Vision-Language Models (VLMs) to leverage both textual and visual information in tasks like social understanding and reasoning. However, while current LLMs have shown strong social reasoning capabilities in textual contexts, whether they can effectively incorporate visual information in social comprehension remains under-explored. To narrow this gap, we first construct a benchmark, V-Social, featuring well-aligned text and visual content, tailored to assess visual social commonsense in multimodal foundation models. Experimenting with V-Social, we find that even the most advanced VLM, GPT-4o, often falls short in social commonsense reasoning, which highlights the critical need to enhance the social grounding of VLMs. One major obstacle to such improvement is the lack of high-quality data with sound reasoning processes. To overcome it, we introduce V-AlphaSocial, a novel method that generates high-quality chain-of-thought reasoning paths from unlabeled data. We design a visual reasoning reward model to improve the VLM, and then iteratively refine both the VLM and the reward model. Our extensive analysis shows how our method enhances social commonsense reasoning, offering an effective approach that facilitates deeper exploration of the field.
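
The abstract outlines an alternating self-improvement loop: sample chain-of-thought paths from unlabeled data, score them with a visual reasoning reward model, and refine both models on the result. The Python sketch below is a minimal illustration of one plausible reading of that loop, not the paper's actual method; every name (VLM, RewardModel, generate_cot, score, finetune, v_alphasocial_loop) is a hypothetical placeholder, and the stub bodies exist only to make the sketch runnable.

import random

# Hypothetical stand-ins for the paper's models; the abstract does not
# specify interfaces, so every name here is illustrative only.
class VLM:
    def generate_cot(self, example):
        # In practice: prompt the vision-language model for a reasoning path.
        return f"candidate reasoning path for {example!r}"

    def finetune(self, selected_paths):
        # In practice: update weights on the selected (example, path) pairs.
        return self

class RewardModel:
    def score(self, example, path):
        # In practice: rate how well the path grounds its reasoning visually.
        return random.random()

    def finetune(self, scored_candidates):
        # In practice: refine the reward model, e.g. on contrasting paths
        # (an assumption of ours; the abstract does not say how).
        return self

def v_alphasocial_loop(vlm, reward_model, unlabeled, n_rounds=3, k=8):
    """One plausible reading of the abstract's iterative refinement loop."""
    for _ in range(n_rounds):
        # Sample k chain-of-thought candidates per unlabeled example.
        candidates = [(ex, [vlm.generate_cot(ex) for _ in range(k)])
                      for ex in unlabeled]
        # Keep the highest-reward path for each example.
        selected = [(ex, max(paths, key=lambda p: reward_model.score(ex, p)))
                    for ex, paths in candidates]
        # Alternately refine the VLM, then the reward model.
        vlm = vlm.finetune(selected)
        reward_model = reward_model.finetune(candidates)
    return vlm, reward_model

vlm, rm = v_alphasocial_loop(VLM(), RewardModel(), ["image-question pair"])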


Bib Entry

@inproceedings{lin2025valphasocial,
  title = {V-ALPHASOCIAL: Benchmark and Self-Reflective Chain-of-Thought Generation for Visual Social Commonsense Reasoning},
  author = {Lin, Zongyu and Xu, Zhikun and Song, Xiaohan and Wan, Yixin and Yao, Xingcheng and Lin, Tsung-Han and Song, Selina and Subbaraman, Pranav and Zhou, Ben and Chang, Kai-Wei and Sun, Yizhou},
  booktitle = {ACL-Findings},
  year = {2025}
}

Related Publications

  1. Learning Structured Reasoning via Tractable Trajectory Control, ICML, 2026
  2. Training LLMs for Divide-and-Conquer Reasoning, ACL, 2026
  3. BRIEF-Pro: Universal Context Compression with Short-to-Long Synthesis for Fast and Accurate Multi-Hop Reasoning, ACL-Findings, 2026
  4. Beyond Facts: Benchmarking Distributional Reading Comprehension in Large Language Models, ACL-Findings, 2026
  5. MQuAKE-Remastered: Multi-Hop Knowledge Editing Can Only Be Advanced with Reliable Evaluations, ICLR, 2025
  6. Towards Safety Reasoning in LLMs: AI-agentic Deliberation for Policy-embedded CoT Data Creation, ACL-Findings, 2025
  7. QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search, ICML, 2025
  8. DRS: Deep Question Reformulation With Structured Output, ACL-Findings, 2025
  9. VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning, CVPR, 2025
  10. BRIEF: Bridging Retrieval and Inference for Multi-hop Reasoning via Compression, NAACL-Findings, 2025
  11. QUDSELECT: Selective Decoding for Questions Under Discussion Parsing, EMNLP, 2024
  12. Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue, EMNLP, 2024
  13. LLM-A*: Large Language Model Enhanced Incremental Heuristic Search on Path Planning, EMNLP-Findings, 2024
  14. Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs, ACL, 2024
  15. Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data, ACL-Findings, 2024
  16. Can small language models help large language models reason better?: LM-guided chain-of-thought, LREC-COLING, 2024
  17. IdealGPT: Iteratively Decomposing Vision and Language Reasoning via Large Language Models, EMNLP-Findings, 2023