CASA: Causality-driven Argument Sufficiency Assessment
Xiao Liu, Yansong Feng, and Kai-Wei Chang, in NAACL, 2024.
Download the full text
Abstract
The argument sufficiency assessment task aims to determine whether the premises of a given argument support its conclusion. To tackle this task, existing works often train a classifier on data annotated by humans. However, annotating data is laborious, and annotations are often inconsistent due to subjective criteria. Motivated by the definition of probability of sufficiency (PS) in the causal literature, we propose CASA, a zero-shot causality-driven argument sufficiency assessment framework. PS measures how likely introducing the premise event would lead to the conclusion when both the premise and conclusion events are absent. To estimate this probability, we propose to use large language models (LLMs) to generate contexts that are inconsistent with the premise and conclusion, and to revise them by injecting the premise event. Experiments on two logical fallacy detection datasets demonstrate that CASA accurately identifies insufficient arguments. We further deploy CASA in a writing assistance application, and find that suggestions generated by CASA enhance the sufficiency of student-written arguments. Code and data are available at https://github.com/xxxiaol/CASA.
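For reference, the probability of sufficiency invoked in the abstract has a standard formulation in the causal inference literature (Pearl); this is the general textbook definition, not a reproduction of the paper's own notation:

```latex
% Probability of sufficiency (PS), Pearl's causality framework:
% the probability that the outcome Y would occur had X been set to 1,
% given that in fact both X and Y are absent.
\mathrm{PS} \;=\; P\big(Y_{X=1} = 1 \;\big|\; X = 0,\; Y = 0\big)
```

Here $X$ corresponds to the premise event, $Y$ to the conclusion event, and $Y_{X=1}$ denotes the counterfactual conclusion had the premise been introduced.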
Bib Entry
@inproceedings{liu2024casa,
title = {CASA: Causality-driven Argument Sufficiency Assessment},
author = {Liu, Xiao and Feng, Yansong and Chang, Kai-Wei},
booktitle = {NAACL},
year = {2024}
}
Related Publications
- MuirBench: A Comprehensive Benchmark for Robust Multi-image Understanding, ICLR, 2025
- ConTextual: Evaluating Context-Sensitive Text-Rich Visual Reasoning in Large Multimodal Models, ICML, 2024
- ParaAMR: A Large-Scale Syntactically Diverse Paraphrase Dataset by AMR Back-Translation, ACL, 2023
- PLUE: Language Understanding Evaluation Benchmark for Privacy Policies in English, ACL (short), 2023