MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Knowledge Poisoning Attacks
Hyeonjeong Ha, Qiusi Zhan, Jeonghwan Kim, Dimitrios Bralios, Saikrishna Sanniboina, Nanyun Peng, Kai-Wei Chang, Daniel Kang, and Heng Ji, in ACL, 2026.
Bib Entry
@inproceedings{ha2026mmpoisonrag,
  title     = {MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Knowledge Poisoning Attacks},
  author    = {Ha, Hyeonjeong and Zhan, Qiusi and Kim, Jeonghwan and Bralios, Dimitrios and Sanniboina, Saikrishna and Peng, Nanyun and Chang, Kai-Wei and Kang, Daniel and Ji, Heng},
  booktitle = {ACL},
  year      = {2026}
}
Related Publications
- SWAN: Semantic Watermarking with Abstract Meaning Representation, ACL, 2026
- Mitigating Over-Refusal in Aligned Large Language Models via Inference-Time Activation Energy, ACL, 2026
- ARES: Adaptive Red-Teaming and End-to-End Repair of Policy-Reward System, ACL, 2026
- Open-Domain Safety Policy Construction, EACL-Findings, 2026
- Customize Multi-modal RAI Guardrails with Precedent-Based Predictions, COLM, 2025
- X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents, COLM, 2025
- Vulnerability of LLMs to Vertically Aligned Text Manipulations, ACL, 2025
- Exploring Visual Vulnerabilities via Multi-Loss Adversarial Search for Jailbreaking Vision-Language Models, CVPR, 2025
- Vulnerability of Large Language Models to Output Prefix Jailbreaks: Impact of Positions on Safety, NAACL-Findings, 2025
- SafeWorld: Geo-Diverse Safety Alignment, NeurIPS, 2024
- FLIRT: Feedback Loop In-context Red Teaming, EMNLP, 2024
- Data Advisor: Data Curation with Foresight for Safety Alignment of Large Language Models, EMNLP, 2024
- Prompt-Driven LLM Safeguarding via Directed Representation Optimization, ICML, 2024