Vulnerability of LLMs to Vertically Aligned Text Manipulations

Zhecheng Li, Yiwei Wang, Bryan Hooi, Yujun Cai, Zhen Xiong, Nanyun Peng, and Kai-Wei Chang, in ACL, 2025.

Download the full text


Abstract

Text classification involves categorizing a given text, such as determining its sentiment or identifying harmful content. With the advancement of large language models (LLMs), these models have become highly effective at performing text classification tasks. However, they still show vulnerabilities to variations in text formatting. Recent research demonstrates that modifying input formats, such as vertically aligning words for encoder-based models, can substantially lower accuracy in text classification tasks. While easily understood by humans, these inputs can significantly mislead models, posing a potential risk of bypassing detection in real-world scenarios involving harmful or sensitive information. With the expanding application of LLMs, a crucial question arises: Do decoder-based LLMs exhibit similar vulnerabilities to vertically formatted text input? In this paper, we investigate the impact of vertical text input on the performance of various LLMs across multiple text classification datasets and analyze the underlying causes. Our findings are as follows: (i) Vertical text input significantly degrades the accuracy of LLMs in text classification tasks. (ii) Chain of Thought (CoT) reasoning does not help LLMs recognize vertical input or mitigate its vulnerability, but few-shot learning with careful analysis does. (iii) We explore the underlying cause of the vulnerability by analyzing the inherent issues in tokenization and attention matrices.
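To illustrate the kind of input manipulation the abstract describes, here is a minimal sketch of one plausible vertical-text transformation: each word becomes a column of characters read top to bottom. This is a hypothetical reconstruction for illustration, not the exact format used in the paper; the function name `verticalize` and the column layout are assumptions.

```python
def verticalize(text: str) -> str:
    """Rewrite a sentence so each word becomes a vertical column of
    characters, read top-to-bottom. Words are placed side by side,
    padded with spaces when they differ in length."""
    words = text.split()
    height = max(len(w) for w in words)
    rows = []
    for i in range(height):
        # Take the i-th character of each word, or a space if the word is shorter.
        row = " ".join(w[i] if i < len(w) else " " for w in words)
        rows.append(row.rstrip())
    return "\n".join(rows)

print(verticalize("this movie was great"))
```

A human can still read the original sentence column by column, but a tokenizer sees unfamiliar character sequences, which is the mismatch the paper investigates.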


Bib Entry

@inproceedings{li2025vulnerability,
  title = {Vulnerability of LLMs to Vertically Aligned Text Manipulations},
  author = {Li, Zhecheng and Wang, Yiwei and Hooi, Bryan and Cai, Yujun and Xiong, Zhen and Peng, Nanyun and Chang, Kai-Wei},
  booktitle = {ACL},
  year = {2025}
}

Related Publications

  1. MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Knowledge Poisoning Attacks, ACL, 2026
  2. SWAN: Semantic Watermarking with Abstract Meaning Representation, ACL, 2026
  3. Mitigating Over-Refusal in Aligned Large Language Models via Inference-Time Activation Energy, ACL, 2026
  4. ARES: Adaptive Red-Teaming and End-to-End Repair of Policy-Reward System, ACL, 2026
  5. Open-Domain Safety Policy Construction, EACL-Findings, 2026
  6. Customize Multi-modal RAI Guardrails with Precedent-Based Predictions, COLM, 2025
  7. X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents, COLM, 2025
  8. Exploring Visual Vulnerabilities via Multi-Loss Adversarial Search for Jailbreaking Vision-Language Models, CVPR, 2025
  9. Vulnerability of Large Language Models to Output Prefix Jailbreaks: Impact of Positions on Safety, NAACL-Findings, 2025
  10. SafeWorld: Geo-Diverse Safety Alignment, NeurIPS, 2024
  11. FLIRT: Feedback Loop In-context Red Teaming, EMNLP, 2024
  12. Data Advisor: Data Curation with Foresight for Safety Alignment of Large Language Models, EMNLP, 2024
  13. Prompt-Driven LLM Safeguarding via Directed Representation Optimization, ICML, 2024