Kelly is a Warm Person, Joseph is a Role Model: Gender Biases in LLM-Generated Reference Letters

Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, and Nanyun Peng, in EMNLP-Findings, 2023.


Abstract

As generative language models advance, users have started to utilize Large Language Models (LLMs) to assist in writing various types of content, including professional documents such as recommendation letters. Despite their convenience, these applications introduce unprecedented fairness concerns. Since generated reference letters might be directly used in professional or academic scenarios, they have the potential to cause direct harm, such as lowering success rates for female applicants. It is therefore urgent and necessary to comprehensively study fairness issues and the associated harms in such real-world use cases to enable future mitigation and monitoring. In this paper, we critically examine gender bias in LLM-generated reference letters. Inspired by findings in social science, we design evaluation methods that surface gender biases in LLM-generated letters along two dimensions: biases in language style and biases in lexical content. Furthermore, we investigate the extent of bias propagation by separately analyzing bias amplification in model-hallucinated content, which we define as the hallucination bias of model-generated documents. Through a benchmarking evaluation of four popular LLMs (ChatGPT, Alpaca, Vicuna, and StableLM), our study reveals significant gender biases in LLM-generated recommendation letters. Our findings further underscore the importance and urgency of recognizing bias in LLM-generated professional documents.


Bib Entry

@inproceedings{wan2023kelly,
  title = {Kelly is a Warm Person, Joseph is a Role Model: Gender Biases in LLM-Generated Reference Letters},
  author = {Wan, Yixin and Pu, George and Sun, Jiao and Garimella, Aparna and Chang, Kai-Wei and Peng, Nanyun},
  booktitle = {EMNLP-Findings},
  year = {2023}
}

Related Publications

  1. A Meta-Evaluation of Measuring LLM Misgendering, COLM, 2025
  2. White Men Lead, Black Women Help? Benchmarking Language Agency Social Biases in LLMs, ACL, 2025
  3. Controllable Generation via Locally Constrained Resampling, ICLR, 2025
  4. On Localizing and Deleting Toxic Memories in Large Language Models, NAACL-Findings, 2025
  5. Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification, EMNLP-Findings, 2024
  6. Mitigating Bias for Question Answering Models by Tracking Bias Influence, NAACL, 2024
  7. Are you talking to ['xem'] or ['x', 'em']? On Tokenization and Addressing Misgendering in LLMs with Pronoun Tokenization Parity, NAACL-Findings, 2024
  8. Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems, EMNLP-Findings, 2023
  9. The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks, ACL (short), 2023
  10. Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness, AIES, 2023
  11. How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions?, EMNLP (short), 2022
  12. On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations, ACL (short), 2022
  13. Societal Biases in Language Generation: Progress and Challenges, ACL, 2021
  14. "Nice Try, Kiddo": Investigating Ad Hominems in Dialogue Responses, NAACL, 2021
  15. BOLD: Dataset and metrics for measuring biases in open-ended language generation, FAccT, 2021
  16. Towards Controllable Biases in Language Generation, EMNLP-Findings, 2020
  17. The Woman Worked as a Babysitter: On Biases in Language Generation, EMNLP (short), 2019