Data Advisor: Data Curation with Foresight for Safety Alignment of Large Language Models
Fei Wang, Ninareh Mehrabi, Palash Goyal, Rahul Gupta, Kai-Wei Chang, and Aram Galstyan, in EMNLP, 2024.
Abstract
Data is a crucial element in large language model (LLM) alignment. Recent studies have explored using LLMs for efficient data collection. However, LLM-generated data often suffers from quality issues, with underrepresented or absent aspects and low-quality datapoints. To address these problems, we propose Data Advisor, an enhanced LLM-based method for generating data that takes into account the characteristics of the desired dataset. Starting from a set of pre-defined principles, Data Advisor monitors the status of the generated data, identifies weaknesses in the current dataset, and advises the next iteration of data generation accordingly. Data Advisor can be easily integrated into existing data generation methods to enhance data quality and coverage. Experiments on safety alignment of three representative LLMs (i.e., Mistral, Llama2, and Falcon) demonstrate the effectiveness of Data Advisor in enhancing model safety against various fine-grained safety issues without sacrificing model utility.
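The abstract describes an iterative monitor–identify–advise loop. A minimal sketch of that loop, with stub functions standing in for the advisor and generator LLM calls and an illustrative (hypothetical) principle list:

```python
# Hypothetical sketch of the Data Advisor loop described in the abstract:
# the advisor monitors the growing dataset, flags the aspect least covered
# relative to pre-defined principles, and steers the next generation round.
# The principle names and the stub "LLM" calls below are illustrative only.

PRINCIPLES = ["hate speech", "self-harm", "privacy", "misinformation"]

def advise(dataset, principles):
    """Advisor step: identify the weakest-covered safety aspect so far."""
    counts = {p: sum(1 for d in dataset if d["aspect"] == p) for p in principles}
    return min(counts, key=counts.get)

def generate(aspect, n=2):
    """Stand-in for an LLM generator conditioned on the advisor's advice."""
    return [{"aspect": aspect, "prompt": f"unsafe query about {aspect} #{i}"}
            for i in range(n)]

dataset = [{"aspect": "hate speech", "prompt": "..."}]
for _ in range(3):                      # iterative generation rounds
    weak = advise(dataset, PRINCIPLES)  # monitor + identify a coverage gap
    dataset += generate(weak)           # generator fills the gap next round
```

The key design point the paper emphasizes is that generation is conditioned on the dataset's current state rather than run open-loop, which is what improves coverage of fine-grained safety issues.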
Bib Entry
@inproceedings{wang2024data,
title = {Data Advisor: Data Curation with Foresight for Safety Alignment of Large Language Models},
author = {Wang, Fei and Mehrabi, Ninareh and Goyal, Palash and Gupta, Rahul and Chang, Kai-Wei and Galstyan, Aram},
booktitle = {EMNLP},
year = {2024}
}
Related Publications
- Open-Domain Safety Policy Construction, EACL-Findings, 2026
- Customize Multi-modal RAI Guardrails with Precedent-Based Predictions, COLM, 2025
- X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents, COLM, 2025
- Vulnerability of LLMs to Vertically Aligned Text Manipulations, ACL, 2025
- Exploring Visual Vulnerabilities via Multi-Loss Adversarial Search for Jailbreaking Vision-Language Models, CVPR, 2025
- Vulnerability of Large Language Models to Output Prefix Jailbreaks: Impact of Positions on Safety, NAACL-Findings, 2025
- SafeWorld: Geo-Diverse Safety Alignment, NeurIPS, 2024
- FLIRT: Feedback Loop In-context Red Teaming, EMNLP, 2024
- Prompt-Driven LLM Safeguarding via Directed Representation Optimization, ICML, 2024