InsideOut: Measuring and Mitigating Insider-Outsider Bias in Interview Script Generation

Yixin Wan, Xingrun Chen, and Kai-Wei Chang, in ACL, 2026.

Abstract

Advancements in large language models (LLMs) have enabled a variety of downstream applications such as story and interview script generation. However, recent research has raised concerns about culture-related fairness issues in LLM-generated content. In this work, we identify and systematically investigate LLMs' insider-outsider bias, a phenomenon where models position themselves as 'insiders' of mainstream cultures during generation while externalizing less dominant cultures. We propose the InsideOut benchmark, comprising 4,000 generation prompts and three evaluation metrics, to quantify this bias through a culturally situated interview script generation task in which an LLM is positioned as a reporter interviewing local people across 10 diverse cultures. Empirical evaluation of five state-of-the-art LLMs reveals that while models adopt insider tones in over 88% of US-contexted scripts on average, they disproportionately default to 'outsider' stances for non-Western cultures. To mitigate these biases, we propose two inference-time methods: a baseline prompt-based Fairness Intervention Pillars (FIP) method, and a structured Mitigation via Fairness Agents (MFA) framework comprising a Single-Agent (MFA-SA), a Hierarchical-Agent (MFA-HA), and an autonomous Agentic Planning (MFA-Plan) pipeline. Empirical results demonstrate that the agent-based MFA methods achieve strong and robust performance in mitigating insider-outsider bias: for instance, on the Cultural Alignment Gap (CAG) metric, MFA-SA reduces bias in the Llama model by 89.70% and MFA-HA mitigates bias in Qwen by 82.54%. These findings showcase agent-based methods as a promising direction for mitigating biases in generative LLMs.
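
For illustration, below is a minimal Python sketch of how a Cultural Alignment Gap-style score could be computed. It assumes CAG is the gap between the insider-stance rate on US-contexted scripts and the mean insider-stance rate across the other cultures; this reading, the stance labels, and all function names are assumptions for illustration, not the paper's actual definition or implementation.

  # Hypothetical sketch: one plausible reading of a Cultural Alignment
  # Gap (CAG)-style score. Assumes CAG is the insider-stance rate on
  # US-contexted scripts minus the mean insider-stance rate across the
  # remaining cultures; the paper's exact definition may differ.

  def insider_rate(stances):
      """Fraction of generated scripts labeled with an 'insider' stance."""
      return sum(s == "insider" for s in stances) / len(stances)

  def cultural_alignment_gap(stances_by_culture, reference="US"):
      """Reference culture's insider rate minus the mean insider rate
      of all remaining cultures."""
      ref = insider_rate(stances_by_culture[reference])
      others = [insider_rate(v)
                for k, v in stances_by_culture.items() if k != reference]
      return ref - sum(others) / len(others)

  # Toy usage with stance labels per culture (e.g., from a judge model):
  stances = {
      "US":       ["insider"] * 9 + ["outsider"] * 1,
      "cultureA": ["insider"] * 3 + ["outsider"] * 7,
      "cultureB": ["insider"] * 2 + ["outsider"] * 8,
  }
  print(cultural_alignment_gap(stances))  # 0.9 - 0.25 = 0.65

Under this reading, a mitigation method's percentage bias reduction (e.g., the 89.70% reported for MFA-SA on Llama) would correspond to the relative drop in this gap after intervention.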


Bib Entry

@inproceedings{wan2026insideout,
  title = {InsideOut: Measuring and Mitigating Insider-Outsider Bias in Interview Script Generation},
  author = {Wan, Yixin and Chen, Xingrun and Chang, Kai-Wei},
  booktitle = {ACL},
  year = {2026}
}
