Are you talking to ['xem'] or ['x', 'em']? On Tokenization and Addressing Misgendering in LLMs with Pronoun Tokenization Parity

Anaelia Ovalle, Ninareh Mehrabi, Palash Goyal, Jwala Dhamala, Kai-Wei Chang, Richard Zemel, Aram Galstyan, Yuval Pinter, and Rahul Gupta, in NAACL-Findings, 2024.

Download the full text


Abstract

Gender-inclusive NLP research has documented the harmful limitations of gender binary-centric large language models (LLMs), such as the inability to correctly use gender-diverse English neopronouns (e.g., xe, zir, fae). While data scarcity is a known culprit, the precise mechanisms through which scarcity affects this behavior remain underexplored. We discover LLM misgendering is significantly influenced by Byte-Pair Encoding (BPE) tokenization, the tokenization algorithm powering many popular LLMs. Unlike binary pronouns, BPE overfragments neopronouns, a direct consequence of data scarcity during tokenizer training. This disparate tokenization mirrors tokenizer limitations observed in multilingual and low-resource NLP, unlocking new misgendering mitigation strategies. We propose two techniques: (1) pronoun tokenization parity, a method to enforce consistent tokenization across gendered pronouns, and (2) utilizing pre-existing LLM pronoun knowledge to improve neopronoun proficiency. Our proposed methods outperform finetuning with standard BPE, improving neopronoun accuracy from 14.1% to 58.4%. Our paper is the first to link LLM misgendering to tokenization and deficient neopronoun grammar, indicating that LLMs unable to correctly treat neopronouns as pronouns are more prone to misgendering.
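The overfragmentation described above can be illustrated with a toy greedy longest-match segmenter over a hypothetical subword vocabulary (this is a simplified sketch, not the paper's method or any real LLM's actual BPE merge table): frequent binary pronouns earn whole-word entries during tokenizer training, while a scarce neopronoun like "xem" falls back to smaller pieces.

```python
# Hypothetical subword vocabulary: frequent words get whole-word
# entries; rare neopronouns do not. Illustrative only.
VOCAB = {"the", "them", "they", "he", "she", "her", "him",
         "x", "em", "ir", "z", "e"}

def segment(word: str) -> list[str]:
    """Greedily split `word` into the longest vocabulary pieces,
    mimicking how a subword tokenizer fragments out-of-vocabulary words."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try longest match first
            if word[i:j] in VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # single-character fallback
            i += 1
    return pieces

print(segment("them"))  # ['them'] — one token for the binary pronoun
print(segment("xem"))   # ['x', 'em'] — the neopronoun is overfragmented
```

Pronoun tokenization parity, as named in the paper, addresses exactly this asymmetry by enforcing that neopronouns tokenize as single units, just like their binary counterparts.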


Bib Entry

@inproceedings{ovalle2024are,
  title = {Are you talking to ['xem'] or ['x', 'em']? On Tokenization and Addressing Misgendering in LLMs with Pronoun Tokenization Parity},
  author = {Ovalle, Anaelia and Mehrabi, Ninareh and Goyal, Palash and Dhamala, Jwala and Chang, Kai-Wei and Zemel, Richard and Galstyan, Aram and Pinter, Yuval and Gupta, Rahul},
  booktitle = {NAACL-Findings},
  year = {2024}
}

Related Publications

  1. InsideOut: Measuring and Mitigating Insider-Outsider Bias in Interview Script Generation, ACL, 2026
  2. White Men Lead, Black Women Help? Benchmarking Language Agency Social Biases in LLMs, ACL, 2025
  3. A Meta-Evaluation of Measuring LLM Misgendering, COLM, 2025
  4. Controllable Generation via Locally Constrained Resampling, ICLR, 2025
  5. On Localizing and Deleting Toxic Memories in Large Language Models, NAACL-Findings, 2025
  6. Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification, EMNLP-Findings, 2024
  7. Mitigating Bias for Question Answering Models by Tracking Bias Influence, NAACL, 2024
  8. The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks, ACL (short), 2023
  9. Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems, EMNLP-Findings, 2023
  10. Kelly is a Warm Person, Joseph is a Role Model: Gender Biases in LLM-Generated Reference Letters, EMNLP-Findings, 2023
  11. Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness, AIES, 2023
  12. How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions?, EMNLP (Short), 2022
  13. On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations, ACL (short), 2022
  14. "Nice Try, Kiddo": Investigating Ad Hominems in Dialogue Responses, NAACL, 2021
  15. Societal Biases in Language Generation: Progress and Challenges, ACL, 2021
  16. BOLD: Dataset and metrics for measuring biases in open-ended language generation, FAccT, 2021
  17. Towards Controllable Biases in Language Generation, EMNLP-Findings, 2020
  18. The Woman Worked as a Babysitter: On Biases in Language Generation, EMNLP (short), 2019