
Controllable Generation via Locally Constrained Resampling

Kareem Ahmed, Kai-Wei Chang, and Guy Van den Broeck, in ICLR, 2025.

Download the full text


Abstract

Autoregressive models have demonstrated an unprecedented ability to model the intricacies of natural language. However, they continue to struggle with generating complex outputs that adhere to logical constraints. Sampling from a fully independent distribution subject to a constraint is hard. Sampling from an autoregressive distribution subject to a constraint is doubly hard: we have to contend not only with the hardness of the constraint but also with the distribution's lack of structure. We propose a tractable probabilistic approach that performs Bayesian conditioning to draw samples subject to a constraint. Our approach considers the entire sequence, leading to a more globally optimal constrained generation than current greedy methods. Starting from a model sample, we induce a local, factorized distribution which we can tractably condition on the constraint. To generate samples that satisfy the constraint, we sample from the conditional distribution, correct for biases in the samples, and resample. The resulting samples closely approximate the target distribution and are guaranteed to satisfy the constraints. We evaluate our approach on several tasks, including LLM detoxification and solving Sudoku puzzles. We show that, by disallowing a list of toxic expressions, our approach is able to steer the model's outputs away from toxic generations, outperforming similar approaches to detoxification. We conclude by showing that our approach achieves perfect accuracy on Sudoku, compared to <50% for GPT-4o and Gemini 1.5.
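
The procedure described in the abstract (draw a model sample, induce a local factorized proposal, condition it on the constraint, importance-weight candidates, and resample) can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the paper's implementation: the toy model toy_logits, the 4-token vocabulary, the forbidden-token constraint, and all names are stand-ins introduced only for this example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a 4-token vocabulary, length-5 sequences, a toy
# "autoregressive" model, and a constraint forbidding token 3 anywhere.
VOCAB, LENGTH = 4, 5

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def toy_logits(prefix):
    # Toy model: mildly prefers repeating the previous token.
    logits = np.zeros(VOCAB)
    if prefix:
        logits[prefix[-1]] += 1.0
    return logits

def ar_logprob(seq):
    # Log-probability of a full sequence under the autoregressive model.
    lp = 0.0
    for t, tok in enumerate(seq):
        lp += np.log(softmax(toy_logits(seq[:t]))[tok])
    return lp

def satisfies(seq):
    return all(tok != 3 for tok in seq)  # constraint: never emit token 3

# 1) Draw an unconstrained model sample, recording the local per-position
#    next-token distributions it induces: this is the factorized proposal q.
seq, local = [], []
for t in range(LENGTH):
    probs = softmax(toy_logits(seq))
    local.append(probs)
    seq.append(int(rng.choice(VOCAB, p=probs)))
q = np.stack(local)

# 2) Condition the factorized proposal on the constraint. Because q factorizes
#    over positions, this is tractable: zero out forbidden tokens, renormalize.
q_c = q.copy()
q_c[:, 3] = 0.0
q_c /= q_c.sum(axis=1, keepdims=True)

# 3) Sample candidates from the constrained proposal and importance-weight
#    them by p(x) / q_c(x) to correct for the bias of the approximation.
candidates, log_w = [], []
for _ in range(64):
    x = [int(rng.choice(VOCAB, p=q_c[t])) for t in range(LENGTH)]
    log_q = sum(np.log(q_c[t, tok]) for t, tok in enumerate(x))
    candidates.append(x)
    log_w.append(ar_logprob(x) - log_q)
w = softmax(np.array(log_w))

# 4) Resample in proportion to the weights; every candidate already satisfies
#    the constraint by construction.
final = candidates[int(rng.choice(len(candidates), p=w))]
assert satisfies(final)
print("constrained sample:", final)

In this toy setting the resampled sequences approximate the model distribution conditioned on the constraint while never containing the forbidden token; the paper applies the same idea to full LLM vocabularies and logical constraints such as banned toxic phrases or Sudoku validity.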


Bib Entry

@inproceedings{ahmed2025controllable,
  title = {Controllable Generation via Locally Constrained Resampling},
  author = {Ahmed, Kareem and Chang, Kai-Wei and Van den Broeck, Guy},
  booktitle = {ICLR},
  year = {2025}
}

Related Publications

  1. A Meta-Evaluation of Measuring LLM Misgendering, COLM, 2025
  2. White Men Lead, Black Women Help? Benchmarking Language Agency Social Biases in LLMs, ACL, 2025
  3. On Localizing and Deleting Toxic Memories in Large Language Models, NAACL-Findings, 2025
  4. Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification, EMNLP-Findings, 2024
  5. Mitigating Bias for Question Answering Models by Tracking Bias Influence, NAACL, 2024
  6. Are you talking to ['xem'] or ['x', 'em']? On Tokenization and Addressing Misgendering in LLMs with Pronoun Tokenization Parity, NAACL-Findings, 2024
  7. Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems, EMNLP-Findings, 2023
  8. Kelly is a Warm Person, Joseph is a Role Model: Gender Biases in LLM-Generated Reference Letters, EMNLP-Findings, 2023
  9. The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks, ACL (short), 2023
  10. Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness, AIES, 2023
  11. How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions?, EMNLP (short), 2022
  12. On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations, ACL (short), 2022
  13. Societal Biases in Language Generation: Progress and Challenges, ACL, 2021
  14. "Nice Try, Kiddo": Investigating Ad Hominems in Dialogue Responses, NAACL, 2021
  15. BOLD: Dataset and metrics for measuring biases in open-ended language generation, FAccT, 2021
  16. Towards Controllable Biases in Language Generation, EMNLP-Findings, 2020
  17. The Woman Worked as a Babysitter: On Biases in Language Generation, EMNLP (short), 2019