Control Large Language Models via Divide and Conquer

Bingxuan Li, Yiwei Wang, Tao Meng, Kai-Wei Chang, and Nanyun Peng, in EMNLP, 2024.

Download the full text


Abstract

This paper investigates the capability of LLMs in controllable generation with prompt-based controlling, focusing on Lexically Constrained Generation (LCG). We systematically evaluate the performance of LLMs on satisfying lexical constraints with prompt-based controlling, as well as their efficacy in downstream applications. We identified three key reasons that highlight the limitations of LLMs in LCG, including (1) position bias, where LLMs tend to satisfy constraints that appear in specific positions within the input; (2) low responsiveness to control decoding parameters, which minimally impact the performance of LLMs; and (3) struggles with handling the inherent complexity of certain constraints (e.g., compound words). We conclude that black-box LLMs face significant challenges in consistently satisfying lexical constraints with prompt-based controlling. To address this bottleneck, we introduce the Divide and Conquer Generation strategy, effective for both white-box and black-box LLMs, to enhance LLM performance in LCG tasks, which demonstrates over a 90% improvement in success rate on the most challenging LCG task. Our analysis aims to provide valuable insights into the performance of LLMs in LCG with prompt-based controlling, and our proposed strategy offers a pathway to more sophisticated and customized text generation applications.


Bib Entry

@inproceedings{li2024llms,
  title = {Control Large Language Models via Divide and Conquer},
  author = {Li, Bingxuan and Wang, Yiwei and Meng, Tao and Chang, Kai-Wei and Peng, Nanyun},
  booktitle = {EMNLP},
  abstract = {This paper investigates the capability of LLMs in controllable generation with prompt-based controlling, focusing on Lexically Constrained Generation (LCG). We systematically evaluate the performance of LLMs on satisfying lexical constraints with prompt-based controlling, as well as their efficacy in downstream applications. We identified three key reasons that highlight the limitations of LLMs in LCG, including (1) position bias, where LLMs tend to satisfy constraints that appear in specific positions within the input; (2) low responsiveness to control decoding parameters, which minimally impact the performance of LLMs; and (3) struggles with handling the inherent complexity of certain constraints (e.g., compound words). We conclude that black-box LLMs face significant challenges in consistently satisfying lexical constraints with prompt-based controlling. To address this bottleneck, we introduce the Divide and Conquer Generation strategy, effective for both white-box and black-box LLMs, to enhance LLM performance in LCG tasks, which demonstrates over a 90% improvement in success rate on the most challenging LCG task. Our analysis aims to provide valuable insights into the performance of LLMs in LCG with prompt-based controlling, and our proposed strategy offers a pathway to more sophisticated and customized text generation applications.},
  year = {2024}
}

Related Publications