PARTONOMY: Large Multimodal Models with Part-Level Visual Understanding

Ansel Blume, Jeonghwan Kim, Hyeonjeong Ha, Elen Chatikyan, Xiaomeng Jin, Khanh Duy Nguyen, Nanyun Peng, Kai-Wei Chang, Derek Hoiem, and Heng Ji, in NeurIPS, 2025.

Spotlight (top 5% papers)

Code

Download the full text


Abstract

Real-world objects are composed of distinct, object-specific parts that support fine-grained reasoning, yet large multimodal models (LMMs) struggle to identify parts and reason about part-whole relationships. This paper introduces PARTONOMY, an LMM benchmark designed for pixel-level part grounding. The benchmark combines existing part datasets with a newly annotated set comprising 862 part labels and 534 object labels. Experiments reveal that state-of-the-art segmenting LMMs perform poorly on part-level tasks (e.g., a strong model attains only 5.9% global IoU), highlighting a major capability gap. The authors identify architectural shortcomings in current segmenting LMMs, such as reliance on special [SEG] tokens and the discarding of previously predicted segmentations, and train several part-centric LMMs to address these issues. They propose PLUM, a novel segmenting LMM that uses span tagging in place of [SEG] tokens and conditions on its prior predictions in a feedback loop. Trained on PARTONOMY, PLUM achieves stronger performance on reasoning-based segmentation, VQA, and visual hallucination benchmarks, opening avenues for more grounded visual understanding in LMMs.
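For reference, the 5.9% global IoU figure above is an aggregate segmentation score. The minimal sketch below computes the two IoU aggregates commonly reported for segmenting LMMs: gIoU (per-example IoU averaged over the dataset) and cIoU (cumulative intersection over cumulative union). The function name and toy masks are illustrative assumptions, not taken from the paper's evaluation code, and the paper's exact metric definition may differ.

import numpy as np

def iou_metrics(pred_masks, gt_masks):
    """Compute two common IoU aggregates over binary segmentation masks.

    gIoU: per-example IoU, averaged over the dataset.
    cIoU: cumulative intersection over cumulative union across all examples.
    """
    per_example, inter_sum, union_sum = [], 0, 0
    for pred, gt in zip(pred_masks, gt_masks):
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        # Convention: a prediction that correctly outputs an empty mask scores 1.0.
        per_example.append(inter / union if union > 0 else 1.0)
        inter_sum += inter
        union_sum += union
    giou = float(np.mean(per_example)) if per_example else 0.0
    ciou = float(inter_sum / union_sum) if union_sum > 0 else 0.0
    return giou, ciou

# Toy example: one perfectly predicted part mask and one complete miss.
gt   = [np.ones((4, 4), np.uint8), np.ones((4, 4), np.uint8)]
pred = [np.ones((4, 4), np.uint8), np.zeros((4, 4), np.uint8)]
giou, ciou = iou_metrics(pred, gt)
print(f"gIoU={giou:.3f}  cIoU={ciou:.3f}")  # gIoU=0.500  cIoU=0.500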


Bib Entry

@inproceedings{blume2025partonomy,
  title = {PARTONOMY: Large Multimodal Models with Part-Level Visual Understanding},
  author = {Blume, Ansel and Kim, Jeonghwan and Ha, Hyeonjeong and Chatikyan, Elen and Jin, Xiaomeng and Nguyen, Khanh Duy and Peng, Nanyun and Chang, Kai-Wei and Hoiem, Derek and Ji, Heng},
  booktitle = {NeurIPS},
  year = {2025}
}

Related Publications

  1. HoneyBee: Data Recipes for Vision-Language Reasoners, CVPR, 2026
  2. MotionEdit: Benchmarking and Learning Motion-Centric Image Editing, CVPR, 2026
  3. LaViDa: A Large Diffusion Language Model for Multimodal Understanding, NeurIPS, 2025
  4. STIV: Scalable Text and Image Conditioned Video Generation, ICCV, 2025
  5. Verbalized Representation Learning for Interpretable Few-Shot Generalization, ICCV, 2025
  6. Contrastive Visual Data Augmentation, ICML, 2025
  7. SYNTHIA: Novel Concept Design with Affordance Composition, ACL, 2025
  8. SlowFast-VGen: Slow-Fast Learning for Action-Driven Long Video Generation, ICLR, 2025
  9. Towards a holistic framework for multimodal LLM in 3D brain CT radiology report generation, Nature Communications, 2025
  10. Enhancing Large Vision Language Models with Self-Training on Image Comprehension, NeurIPS, 2024
  11. CoBIT: A Contrastive Bi-directional Image-Text Generation Model, ICLR, 2024
  12. DesCo: Learning Object Recognition with Rich Language Descriptions, NeurIPS, 2023
  13. "What's 'up' with vision-language models? Investigating their struggle to understand spatial relations.", EMNLP, 2023
  14. Text Encoders are Performance Bottlenecks in Contrastive Vision-Language Models, EMNLP, 2023
  15. MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models, ACL (short), 2023
  16. REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge, CVPR, 2023
  17. Grounded Language-Image Pre-training, CVPR, 2022
  18. How Much Can CLIP Benefit Vision-and-Language Tasks?, ICLR, 2022