Understanding and Mitigating Spurious Correlations in Text Classification with Neighborhood Analysis
Oscar Chew, Hsuan-Tien Lin, Kai-Wei Chang, and Kuan-Hao Huang, in EACL-Findings, 2024.
Abstract
Recent research has revealed that machine learning models tend to leverage spurious correlations that exist in the training set but may not hold true in general circumstances. For instance, a sentiment classifier may erroneously learn that the token "performances" is commonly associated with positive movie reviews. Relying on these spurious correlations degrades the classifier's performance when it is deployed on out-of-distribution data. In this paper, we examine the implications of spurious correlations through a novel perspective called neighborhood analysis. The analysis uncovers how spurious correlations lead unrelated words to erroneously cluster together in the embedding space. Driven by the analysis, we design a metric to detect spurious tokens and also propose a family of regularization methods, NFL (doN’t Forget your Language), to mitigate spurious correlations in text classification. Experiments show that NFL can effectively prevent erroneous clusters and significantly improve the robustness of classifiers without auxiliary data. The code is publicly available at https://github.com/oscarchew/doNt-Forget-your-Language.
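The core idea of neighborhood analysis can be illustrated with a minimal sketch: inspect a token's nearest neighbors in the embedding space and check whether a spuriously correlated token has drifted into a label-specific cluster. The toy embeddings and token list below are hypothetical, purely for illustration; the paper's analysis operates on the embedding space of an actual fine-tuned classifier.

```python
import numpy as np

def nearest_neighbors(embeddings, tokens, query_idx, k=3):
    """Return the k nearest tokens to the query token by cosine similarity."""
    q = embeddings[query_idx]
    # Cosine similarity between the query embedding and all token embeddings.
    sims = embeddings @ q / (np.linalg.norm(embeddings, axis=1) * np.linalg.norm(q))
    sims[query_idx] = -np.inf  # exclude the query token itself
    top = np.argsort(-sims)[:k]
    return [tokens[i] for i in top]

# Hypothetical 2-D embeddings: the neutral token "performances" sits close to
# the positive-sentiment cluster, hinting at a spurious correlation.
tokens = ["good", "great", "terrible", "performances"]
emb = np.array([
    [1.0, 0.1],   # good
    [0.9, 0.2],   # great
    [-1.0, 0.0],  # terrible
    [0.8, 0.3],   # performances
])
print(nearest_neighbors(emb, tokens, query_idx=3, k=2))  # → ['great', 'good']
```

If a sentiment-neutral token's neighborhood is dominated by words of a single sentiment class, that is the kind of erroneous clustering the paper's detection metric flags and the NFL regularizers aim to prevent.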
Bib Entry
@inproceedings{chew2024understanding,
title = {Understanding and Mitigating Spurious Correlations in Text Classification with Neighborhood Analysis},
author = {Chew, Oscar and Lin, Hsuan-Tien and Chang, Kai-Wei and Huang, Kuan-Hao},
booktitle = {EACL-Findings},
year = {2024}
}
Related Publications
- Control Large Language Models via Divide and Conquer, EMNLP, 2024
- Re-ReST: Reflection-Reinforced Self-Training for Language Agents, EMNLP, 2024
- Agent Lumos: Unified and Modular Training for Open-Source Language Agents, ACL, 2024
- Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension, ICML, 2024
- TrustLLM: Trustworthiness in Large Language Models, ICML, 2024
- The steerability of large language models toward data-driven personas, NAACL, 2024
- AI-Assisted Summarization of Radiologic Reports: Evaluating GPT3davinci, BARTcnn, LongT5booksum, LEDbooksum, LEDlegal, and LEDclinical, American Journal of Neuroradiology, 2024
- Few-Shot Representation Learning for Out-Of-Vocabulary Words, ACL, 2019
- Learning Word Embeddings for Low-resource Languages by PU Learning, NAACL, 2018
- Co-training Embeddings of Knowledge Graphs and Entity Descriptions for Cross-lingual Entity Alignment, IJCAI, 2018
- Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context, ACL RepL4NLP Workshop, 2017