LUME: LLM Unlearning with Multitask Evaluations

Anil Ramakrishna, Yixin Wan, Xiaomeng Jin, Kai-Wei Chang, Zhiqi Bu, Bhanukiran Vinzamuri, Volkan Cevher, Mingyi Hong, and Rahul Gupta, in Findings of EMNLP, 2025.

Code

Download the full text


Abstract

Unlearning aims to remove copyrighted, sensitive, or private content from large language models without full retraining. This paper introduces LUME, a multitask unlearning benchmark comprising three tasks: unlearning synthetically generated creative short novels, unlearning synthetic biographies containing sensitive information, and unlearning a collection of public biographies. The authors release two fine-tuned language models (1B and 7B parameters) as target models and conduct detailed evaluations of several unlearning algorithms, presenting results on carefully crafted metrics to understand their behavior and limitations.
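To make the setting concrete, the sketch below shows one gradient-ascent unlearning step, a common baseline evaluated in LLM unlearning benchmarks of this kind. It is not the paper's exact setup: the model name and the forget-set sample are placeholders, and the released 1B/7B target models and LUME data are not loaded here.

```python
# Minimal sketch of a gradient-ascent unlearning step (a common baseline).
# Model name and forget-set text are placeholders, not the paper's artifacts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; LUME releases 1B and 7B target models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

forget_texts = ["<document from the forget set>"]  # placeholder sample

model.train()
for text in forget_texts:
    batch = tokenizer(text, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    # Gradient ascent: push the model away from the forget set by
    # maximizing the language-modeling loss (step on its negative).
    loss = -outputs.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice such baselines are paired with a retain-set term or regularizer so that utility on non-forget data is preserved, which is exactly what the benchmark's metrics are designed to probe.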


Bib Entry

@inproceedings{ramakrishna2025lume,
  title = {LUME: LLM Unlearning with Multitask Evaluations},
  author = {Ramakrishna, Anil and Wan, Yixin and Jin, Xiaomeng and Chang, Kai-Wei and Bu, Zhiqi and Vinzamuri, Bhanukiran and Cevher, Volkan and Hong, Mingyi and Gupta, Rahul},
  booktitle = {Findings of EMNLP},
  year = {2025}
}

Related Publications