Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness

Anaelia Ovalle, Arjun Subramonian, Vagrant Gautam, Gilbert Gee, and Kai-Wei Chang, in AIES, 2023.

Download the full text


Abstract

Intersectionality is a critical framework that, through inquiry and praxis, allows us to examine how social inequalities persist through domains of structure and discipline. Given that "fairness" is the raison d'être of AI fairness, we argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness. Through a critical review of how intersectionality is discussed in 30 papers from the AI fairness literature, we deductively and inductively: 1) map how intersectionality tenets operate within the AI fairness paradigm and 2) uncover gaps between the conceptualization and operationalization of intersectionality. We find that researchers overwhelmingly reduce intersectionality to optimizing for fairness metrics over demographic subgroups. They also fail to discuss their social context and, when mentioning power, mostly situate it within the AI pipeline alone. We: 3) outline and assess the implications of these gaps for critical inquiry and praxis, and 4) provide actionable recommendations for AI fairness researchers to engage with intersectionality in their work by grounding it in AI epistemology.
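For illustration, the sketch below (not from the paper; all names, labels, and data are hypothetical) shows what the critiqued reduction typically looks like in practice: intersectionality is collapsed into a fairness metric, here a demographic parity gap, computed over intersectional (race, gender) subgroups, with no engagement with social context or structural power.

from itertools import product

import numpy as np


def subgroup_positive_rates(y_pred, race, gender):
    """Positive prediction rate for each intersectional (race, gender) subgroup."""
    rates = {}
    for r, g in product(np.unique(race), np.unique(gender)):
        mask = (race == r) & (gender == g)
        if mask.any():  # skip empty subgroups
            rates[(r, g)] = y_pred[mask].mean()
    return rates


def worst_case_parity_gap(rates):
    """Largest pairwise difference in positive rates across subgroups."""
    vals = list(rates.values())
    return max(vals) - min(vals)


# Hypothetical usage with toy predictions and demographic labels:
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
race = np.array(["A", "A", "B", "B", "A", "B", "A", "B"])
gender = np.array(["F", "M", "F", "M", "F", "M", "F", "M"])
gap = worst_case_parity_gap(subgroup_positive_rates(y_pred, race, gender))

Optimizing such a gap is the dominant operationalization the review documents; the paper's argument is that this alone does not constitute an intersectional analysis.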


Bib Entry

@inproceedings{ovalle2023factoring,
  title = {Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness},
  author = {Ovalle, Anaelia and Subramonian, Arjun and Gautam, Vagrant and Gee, Gilbert and Chang, Kai-Wei},
  year = {2023},
  booktitle = {AIES}
}

Related Publications

  1. A Meta-Evaluation of Measuring LLM Misgendering, COLM, 2025
  2. White Men Lead, Black Women Help? Benchmarking Language Agency Social Biases in LLMs, ACL, 2025
  3. Controllable Generation via Locally Constrained Resampling, ICLR, 2025
  4. On Localizing and Deleting Toxic Memories in Large Language Models, NAACL-Findings, 2025
  5. Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification, EMNLP-Findings, 2024
  6. Mitigating Bias for Question Answering Models by Tracking Bias Influence, NAACL, 2024
  7. Are you talking to ['xem'] or ['x', 'em']? On Tokenization and Addressing Misgendering in LLMs with Pronoun Tokenization Parity, NAACL-Findings, 2024
  8. Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems, EMNLP-Findings, 2023
  9. Kelly is a Warm Person, Joseph is a Role Model: Gender Biases in LLM-Generated Reference Letters, EMNLP-Findings, 2023
  10. The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks, ACL (short), 2023
  11. How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions?, EMNLP (short), 2022
  12. On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations, ACL (short), 2022
  13. Societal Biases in Language Generation: Progress and Challenges, ACL, 2021
  14. "Nice Try, Kiddo": Investigating Ad Hominems in Dialogue Responses, NAACL, 2021
  15. BOLD: Dataset and metrics for measuring biases in open-ended language generation, FAccT, 2021
  16. Towards Controllable Biases in Language Generation, EMNLP-Findings, 2020
  17. The Woman Worked as a Babysitter: On Biases in Language Generation, EMNLP (short), 2019