Vulnerability of Large Language Models to Output Prefix Jailbreaks: Impact of Positions on Safety

Yiwei Wang, Muhao Chen, Nanyun Peng, and Kai-Wei Chang, in NAACL-Findings, 2025.


Bib Entry

@inproceedings{wang2025vulnerability,
  title = {Vulnerability of Large Language Models to Output Prefix Jailbreaks: Impact of Positions on Safety},
  author = {Wang, Yiwei and Chen, Muhao and Peng, Nanyun and Chang, Kai-Wei},
  booktitle = {NAACL-Findings},
  year = {2025}
}

Related Publications

  1. MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Knowledge Poisoning Attacks, ACL, 2026
  2. SWAN: Semantic Watermarking with Abstract Meaning Representation, ACL, 2026
  3. Mitigating Over-Refusal in Aligned Large Language Models via Inference-Time Activation Energy, ACL, 2026
  4. ARES: Adaptive Red-Teaming and End-to-End Repair of Policy-Reward System, ACL, 2026
  5. Open-Domain Safety Policy Construction, EACL-Findings, 2026
  6. Customize Multi-modal RAI Guardrails with Precedent-based Predictions, COLM, 2025
  7. X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents, COLM, 2025
  8. Vulnerability of LLMs to Vertically Aligned Text Manipulations, ACL, 2025
  9. Exploring Visual Vulnerabilities via Multi-Loss Adversarial Search for Jailbreaking Vision-Language Models, CVPR, 2025
  10. SafeWorld: Geo-Diverse Safety Alignment, NeurIPS, 2024
  11. FLIRT: Feedback Loop In-context Red Teaming, EMNLP, 2024
  12. Data Advisor: Data Curation with Foresight for Safety Alignment of Large Language Models, EMNLP, 2024
  13. Prompt-Driven LLM Safeguarding via Directed Representation Optimization, ICML, 2024