Customize Multi-modal RAI Guardrails with Precedent-based predictions

Cheng-Fu Yang, Thanh Tran, Christos Christodoulopoulos, Weitong Ruan, Rahul Gupta, and Kai-Wei Chang, in COLM, 2025.


Abstract

A multi-modal guardrail must effectively filter image content based on user-defined policies, identifying material that may be hateful, reinforce harmful stereotypes, contain explicit material, or spread misinformation. Deploying such guardrails in real-world applications, however, poses significant challenges. Users often require varied and highly customizable policies and typically cannot provide abundant examples for each custom policy. Consequently, an ideal guardrail should be scalable to multiple policies and adaptable to evolving user standards with minimal retraining. Existing fine-tuning methods typically condition predictions on pre-defined policies, restricting their generalizability to new policies or necessitating extensive retraining to adapt. Conversely, training-free methods struggle with limited context lengths, making it difficult to incorporate all the policies comprehensively. To overcome these limitations, we propose to condition the model’s judgment on "precedents", which are the reasoning processes of prior data points similar to the given input. By leveraging precedents instead of fixed policies, our approach greatly enhances the flexibility and adaptability of the guardrail. In this paper, we introduce a critique-revise mechanism for collecting high-quality precedents and two strategies that utilize precedents for robust prediction. Experimental results demonstrate that our approach outperforms previous methods across both few-shot and full-dataset scenarios and exhibits superior generalization to novel policies.
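
The abstract sketches an algorithmic recipe: retrieve prior cases similar to the input, and condition the judgment on their stored reasoning rather than on a fixed policy text. The Python sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation: the Precedent store, the embeddings, and the majority-vote stand-in for the multi-modal LLM call are all assumptions, and the paper's two prediction strategies and critique-revise collection loop are not reproduced here.

from collections import Counter
from dataclasses import dataclass

import numpy as np


@dataclass
class Precedent:
    embedding: np.ndarray  # embedding of a prior data point (hypothetical)
    reasoning: str         # stored reasoning process for that point
    label: str             # e.g. "violates" or "compliant"


def retrieve_precedents(store, query_emb, k=3):
    """Return the k precedents most similar to the query (cosine similarity)."""
    def cos(a, b):
        return float(a @ b) / (float(np.linalg.norm(a) * np.linalg.norm(b)) + 1e-8)
    return sorted(store, key=lambda p: cos(p.embedding, query_emb), reverse=True)[:k]


def predict(query_emb, store, k=3):
    """Condition the guardrail's judgment on retrieved precedents
    rather than on a fixed, fully enumerated policy text."""
    precedents = retrieve_precedents(store, query_emb, k)
    # In a real system, this prompt plus the image would be sent to a
    # multi-modal LLM; it is built here only to show the conditioning.
    prompt = "\n\n".join(
        f"Precedent {i + 1} ({p.label}): {p.reasoning}"
        for i, p in enumerate(precedents)
    ) + "\n\nJudge whether the new image violates the user's policy."
    # Stand-in for the model call: majority vote over precedent labels.
    return Counter(p.label for p in precedents).most_common(1)[0][0], prompt


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    store = [
        Precedent(rng.normal(size=8), f"reasoning for case {i}",
                  "violates" if i % 2 else "compliant")
        for i in range(6)
    ]
    label, _ = predict(rng.normal(size=8), store)
    print(label)

Because the judgment is conditioned on whatever precedents are retrieved, supporting a new or revised policy amounts to adding precedents to the store: no retraining and no exhaustive enumeration of policies in the prompt, which is the flexibility and scalability the abstract emphasizes.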


Bib Entry

@inproceedings{yang2025customize,
  title = {Customize Multi-modal RAI Guardrails with Precedent-based predictions},
  author = {Yang, Cheng-Fu and Tran, Thanh and Christodoulopoulos, Christos and Ruan, Weitong and Gupta, Rahul and Chang, Kai-Wei},
  booktitle = {COLM 2025},
  year = {2025}
}

Related Publications

  1. MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Knowledge Poisoning Attacks, ACL, 2026
  2. SWAN: Semantic Watermarking with Abstract Meaning Representation, ACL, 2026
  3. Mitigating Over-Refusal in Aligned Large Language Models via Inference-Time Activation Energy, ACL, 2026
  4. ARES: Adaptive Red-Teaming and End-to-End Repair of Policy-Reward System, ACL, 2026
  5. Open-Domain Safety Policy Construction, EACL-Findings, 2026
  6. X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents, COLM, 2025
  7. Vulnerability of LLMs to Vertically Aligned Text Manipulations, ACL, 2025
  8. Exploring Visual Vulnerabilities via Multi-Loss Adversarial Search for Jailbreaking Vision-Language Models, CVPR, 2025
  9. Vulnerability of Large Language Models to Output Prefix Jailbreaks: Impact of Positions on Safety, NAACL-Findings, 2025
  10. SafeWorld: Geo-Diverse Safety Alignment, NeurIPS, 2024
  11. FLIRT: Feedback Loop In-context Red Teaming, EMNLP, 2024
  12. Data Advisor: Data Curation with Foresight for Safety Alignment of Large Language Models, EMNLP, 2024
  13. Prompt-Driven LLM Safeguarding via Directed Representation Optimization, ICML, 2024