Prompt-Driven LLM Safeguarding via Directed Representation Optimization

Chujie Zheng, Fan Yin, Hao Zhou, Fandong Meng, Jie Zhou, Kai-Wei Chang, Minlie Huang, and Nanyun Peng, in ICML, 2024.

Abstract

Prepending model inputs with safety prompts is a common practice for safeguarding large language models (LLMs) against queries with harmful intents. However, the underlying working mechanisms of safety prompts have not been unraveled yet, restricting the possibility of automatically optimizing them to improve LLM safety. In this work, we investigate how LLMs’ behavior (i.e., complying with or refusing user queries) is affected by safety prompts from the perspective of model representation. We find that in the representation space, the input queries are typically moved by safety prompts in a "higher-refusal" direction, in which models become more prone to refusing to provide assistance, even when the queries are harmless. On the other hand, LLMs are naturally capable of distinguishing harmful and harmless queries without safety prompts. Inspired by these findings, we propose a method for safety prompt optimization, namely DRO (Directed Representation Optimization). Treating a safety prompt as continuous, trainable embeddings, DRO learns to move the queries’ representations along or opposite the refusal direction, depending on their harmfulness. Experiments with eight LLMs on out-of-domain and jailbreak benchmarks demonstrate that DRO remarkably improves the safeguarding performance of human-crafted safety prompts, without compromising the models’ general performance.
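
The core mechanism described above, treating the safety prompt as continuous trainable embeddings and moving query representations along or against a refusal direction depending on harmfulness, can be sketched roughly as follows. This is an illustrative simplification, not the authors' implementation: the module name `SafetyPromptDRO`, the loss form, and all tensor shapes are assumptions, and in the actual method the query representations would come from a frozen LLM with the trainable prompt prepended, so that gradients flow back into the prompt embeddings.

```python
# Hypothetical sketch of the directed representation optimization idea
# (not the paper's code). Assumes a precomputed unit "refusal direction"
# vector and per-query harmfulness labels.
import torch
import torch.nn.functional as F


class SafetyPromptDRO(torch.nn.Module):
    def __init__(self, num_prompt_tokens: int, hidden_size: int,
                 refusal_direction: torch.Tensor):
        super().__init__()
        # Safety prompt as continuous, trainable embeddings (the only trained parameters).
        self.prompt_embeds = torch.nn.Parameter(
            torch.randn(num_prompt_tokens, hidden_size) * 0.02)
        # Fixed unit vector along which query representations are moved.
        self.register_buffer("refusal_dir", F.normalize(refusal_direction, dim=-1))

    def directed_loss(self, query_repr: torch.Tensor,
                      is_harmful: torch.Tensor) -> torch.Tensor:
        """Push harmful queries along the refusal direction, harmless ones against it.

        query_repr: (batch, hidden) representations of [safety prompt; query].
        is_harmful: (batch,) 1.0 for harmful queries, 0.0 for harmless ones.
        """
        # Signed projection of each representation onto the refusal direction.
        projection = query_repr @ self.refusal_dir   # (batch,)
        sign = 2.0 * is_harmful - 1.0                # +1 harmful, -1 harmless
        # Increase the projection for harmful queries, decrease it for harmless ones.
        return -(sign * projection).mean()


# Usage sketch; in practice query_repr would be the frozen LLM's hidden state
# with the trainable prompt prepended. Here a random stand-in is offset by the
# prompt embeddings only so that gradients reach them in this toy example.
hidden_size = 4096
dro = SafetyPromptDRO(num_prompt_tokens=20, hidden_size=hidden_size,
                      refusal_direction=torch.randn(hidden_size))
optimizer = torch.optim.Adam([dro.prompt_embeds], lr=1e-3)

query_repr = torch.randn(8, hidden_size) + dro.prompt_embeds.mean(dim=0)
is_harmful = torch.tensor([1., 0., 1., 0., 1., 0., 0., 1.])
loss = dro.directed_loss(query_repr, is_harmful)
loss.backward()
optimizer.step()
```
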


Bib Entry

@inproceedings{zheng2024prompt,
  title = {Prompt-Driven LLM Safeguarding via Directed Representation Optimization},
  author = {Zheng, Chujie and Yin, Fan and Zhou, Hao and Meng, Fandong and Zhou, Jie and Chang, Kai-Wei and Huang, Minlie and Peng, Nanyun},
  year = {2024},
  booktitle = {ICML}
}

Related Publications

  1. MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Knowledge Poisoning Attacks, ACL, 2026
  2. SWAN: Semantic Watermarking with Abstract Meaning Representation, ACL, 2026
  3. Mitigating Over-Refusal in Aligned Large Language Models via Inference-Time Activation Energy, ACL, 2026
  4. ARES: Adaptive Red-Teaming and End-to-End Repair of Policy-Reward System, ACL, 2026
  5. Open-Domain Safety Policy Construction, EACL-Findings, 2026
  6. Customize Multi-Modal RAI Guardrails with Precedent-Based Predictions, COLM, 2025
  7. X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents, COLM, 2025
  8. Vulnerability of LLMs to Vertically Aligned Text Manipulations, ACL, 2025
  9. Exploring Visual Vulnerabilities via Multi-Loss Adversarial Search for Jailbreaking Vision-Language Models, CVPR, 2025
  10. Vulnerability of Large Language Models to Output Prefix Jailbreaks: Impact of Positions on Safety, NAACL-Findings, 2025
  11. SafeWorld: Geo-Diverse Safety Alignment, NeurIPS, 2024
  12. FLIRT: Feedback Loop In-context Red Teaming, EMNLP, 2024
  13. Data Advisor: Data Curation with Foresight for Safety Alignment of Large Language Models, EMNLP, 2024