Agent Lumos: Unified and Modular Training for Open-Source Language Agents

Da Yin, Faeze Brahman, Abhilasha Ravichander, Khyathi Chandu, Kai-Wei Chang, Yejin Choi, and Bill Yuchen Lin, in ACL, 2024.

Abstract

Closed-source agents suffer from several issues, such as a lack of affordability, transparency, and reproducibility, particularly on complex interactive tasks. This motivates the development of open-source alternatives. We introduce LUMOS, one of the first frameworks for training open-source LLM-based agents. LUMOS features a learnable, unified, and modular architecture with a planning module that learns high-level subgoal generation, and a grounding module trained to translate these subgoals into actions using various tools in the execution module. The design allows for modular upgrades and wider applicability to diverse interactive tasks. To foster generalizable agent learning, we collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales across various complex interactive tasks. On 9 datasets, LUMOS exhibits several key advantages: (1) LUMOS outperforms multiple larger open-source agents on held-out datasets (unused for training) for each task type, and even surpasses GPT agents on QA and web tasks; (2) LUMOS outperforms open-source agents produced by chain-of-thought and unmodularized integrated training; and (3) LUMOS effectively generalizes to unseen tasks, outperforming 33B-scale agents and domain-specific agents.

Bib Entry

@inproceedings{yin2024agent,
  title = {Agent Lumos: Unified and Modular Training for Open-Source Language Agents},
  author = {Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
  booktitle = {ACL},
  abstract = {Closed-source agents suffer from several issues, such as a lack of affordability, transparency, and reproducibility, particularly on complex interactive tasks. This motivates the development of open-source alternatives. We introduce LUMOS, one of the first frameworks for training open-source LLM-based agents. LUMOS features a learnable, unified, and modular architecture with a planning module that learns high-level subgoal generation, and a grounding module trained to translate these subgoals into actions using various tools in the execution module. The design allows for modular upgrades and wider applicability to diverse interactive tasks. To foster generalizable agent learning, we collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales across various complex interactive tasks. On 9 datasets, LUMOS exhibits several key advantages: (1) LUMOS outperforms multiple larger open-source agents on held-out datasets (unused for training) for each task type, and even surpasses GPT agents on QA and web tasks; (2) LUMOS outperforms open-source agents produced by chain-of-thought and unmodularized integrated training; and (3) LUMOS effectively generalizes to unseen tasks, outperforming 33B-scale agents and domain-specific agents.},
  year = {2024}
}
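
To make the architecture concrete, below is a minimal Python sketch of the planning -> grounding -> execution loop described in the abstract. The class names, prompt formats, the "DONE" stop signal, and the "tool_name[argument]" action syntax are all illustrative assumptions, not the paper's released implementation; the planner and grounder callables stand in for the paper's fine-tuned planning and grounding modules.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Any text-in/text-out model call (e.g., a fine-tuned open-source LLM).
LLM = Callable[[str], str]

@dataclass
class ExecutionModule:
    """Runs grounded actions against a registry of external tools."""
    tools: Dict[str, Callable[[str], str]]

    def run(self, action: str) -> str:
        # Assumed action syntax (hypothetical): "tool_name[argument]".
        name, _, arg = action.partition("[")
        return self.tools[name](arg.rstrip("]"))

@dataclass
class LumosStyleAgent:
    planner: LLM    # proposes high-level subgoals
    grounder: LLM   # translates subgoals into executable actions
    executor: ExecutionModule
    history: List[str] = field(default_factory=list)

    def solve(self, task: str, max_steps: int = 5) -> str:
        result = ""
        for _ in range(max_steps):
            context = f"Task: {task}\nHistory: {self.history}"
            # Planning module: generate the next high-level subgoal.
            subgoal = self.planner(context + "\nNext subgoal:").strip()
            if subgoal == "DONE":  # assumed completion signal
                break
            # Grounding module: map the subgoal to a concrete tool call.
            action = self.grounder(context + f"\nSubgoal: {subgoal}\nAction:").strip()
            # Execution module: run the tool and record the observation.
            result = self.executor.run(action)
            self.history.append(f"{subgoal} => {action} => {result}")
        return result

Because the modules communicate only through text, the planner or grounder can be retrained or swapped independently of the tools, which is the modular-upgrade property the abstract emphasizes.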

Related Publications

  1. Control Large Language Models via Divide and Conquer, EMNLP, 2024
  2. Re-ReST: Reflection-Reinforced Self-Training for Language Agents, EMNLP, 2024
  3. Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension, ICML, 2024
  4. TrustLLM: Trustworthiness in Large Language Models, ICML, 2024
  5. The steerability of large language models toward data-driven personas, NAACL, 2024
  6. AI-Assisted Summarization of Radiologic Reports: Evaluating GPT3davinci, BARTcnn, LongT5booksum, LEDbooksum, LEDlegal, and LEDclinical, American Journal of Neuroradiology, 2024
  7. Understanding and Mitigating Spurious Correlations in Text Classification with Neighborhood Analysis, EACL-Findings, 2024
  8. Few-Shot Representation Learning for Out-Of-Vocabulary Words, ACL, 2019
  9. Learning Word Embeddings for Low-resource Languages by PU Learning, NAACL, 2018
  10. Co-training Embeddings of Knowledge Graphs and Entity Descriptions for Cross-lingual Entity Alignment, IJCAI, 2018
  11. Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context, ACL RepL4NLP Workshop, 2017