Unlearning as Multi-task Optimization: A Normalized Gradient Difference Approach with an Adaptive Learning Rate

Xiaomeng Jin, Zhiqi Bu, Bhanukiran Vinzamuri, Anil Ramakrishna, Kai-Wei Chang, Volkan Cevher, and Mingyi Hong, in NAACL, 2025.


Abstract


Bib Entry

@inproceedings{jin2025unlearning,
  title = {Unlearning as Multi-task Optimization: A Normalized Gradient Difference Approach with an Adaptive Learning Rate},
  author = {Jin, Xiaomeng and Bu, Zhiqi and Vinzamuri, Bhanukiran and Ramakrishna, Anil and Chang, Kai-Wei and Cevher, Volkan and Hong, Mingyi},
  booktitle = {NAACL},
  year = {2025}
}

Related Publications

  1. BLUR: A Bi-Level Optimization Approach for LLM Unlearning, EACL, 2026
  2. Not Every Token Needs Forgetting: Selective Unlearning to Limit Change in Utility in Large Language Model Unlearning, Findings of EMNLP, 2025
  3. LUME: LLM Unlearning with Multitask Evaluations, Findings of EMNLP, 2025