SYNTHIA: Novel Concept Design with Affordance Composition

Hyeonjeong Ha, Xiaomeng Jin, Jeonghwan Kim, Jiateng Liu, Zhenhailong Wang, Khanh Duy Nguyen, Ansel Blume, Nanyun Peng, Kai-Wei Chang, and Heng Ji, in ACL, 2025.

Code

Download the full text


Abstract

Text-to-image (T2I) models enable rapid concept design, making them widely used in AI-driven design. While recent studies focus on generating semantic and stylistic variations of given design concepts, functional coherence, the integration of multiple affordances into a single coherent concept, remains largely overlooked. In this paper, we introduce SYNTHIA, a framework for generating novel, functionally coherent designs based on desired affordances. Our approach leverages a hierarchical concept ontology that decomposes concepts into parts and affordances, serving as a crucial building block for functionally coherent design. We also develop a curriculum learning scheme based on our ontology that contrastively fine-tunes T2I models to progressively learn affordance composition while maintaining visual novelty. To elaborate, we (i) gradually increase affordance distance, guiding models from basic concept-affordance association to complex affordance compositions that integrate parts of distinct affordances into a single, coherent form, and (ii) enforce visual novelty by employing contrastive objectives to push learned representations away from existing concepts. Experimental results show that SYNTHIA outperforms state-of-the-art T2I models, demonstrating absolute gains of 25.1% and 14.7% for novelty and functional coherence in human evaluation, respectively.
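
The abstract describes two training-time ideas: a curriculum that orders examples by affordance distance, and a contrastive objective that pulls a generated concept toward its target affordance composition while pushing it away from existing concepts. The paper's actual implementation is not reproduced on this page; below is a minimal, hypothetical PyTorch sketch of those two ideas. All names here (contrastive_novelty_loss, curriculum_stages, the affordance_distance function, and the temperature tau) are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

def contrastive_novelty_loss(gen_emb, target_emb, existing_embs, tau=0.07):
    # Hypothetical InfoNCE-style objective: treat the embedding of the target
    # affordance composition as the positive and embeddings of existing
    # concepts as negatives, so the model composes the desired affordances
    # while staying visually distinct from known concepts.
    gen = F.normalize(gen_emb, dim=-1)          # (d,)
    pos = F.normalize(target_emb, dim=-1)       # (d,)
    neg = F.normalize(existing_embs, dim=-1)    # (k, d)
    logits = torch.cat([(gen * pos).sum().unsqueeze(0), neg @ gen]) / tau
    return F.cross_entropy(logits.unsqueeze(0),
                           torch.zeros(1, dtype=torch.long))

def curriculum_stages(pairs, affordance_distance, n_stages=3):
    # Hypothetical curriculum: rank training pairs by affordance distance so
    # the model first sees simple concept-affordance associations and only
    # later sees compositions of distant affordances.
    ranked = sorted(pairs, key=affordance_distance)
    step = max(1, len(ranked) // n_stages)
    return [ranked[: (s + 1) * step] for s in range(n_stages)]

In this sketch each stage cumulatively includes the easier pairs from earlier stages; whether SYNTHIA's schedule is cumulative or strictly staged is not stated in the abstract.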


Bib Entry

@inproceedings{ha2025synthia,
  title = {SYNTHIA: Novel Concept Design with Affordance Composition},
  author = {Ha, Hyeonjeong and Jin, Xiaomeng and Kim, Jeonghwan and Liu, Jiateng and Wang, Zhenhailong and Nguyen, Khanh Duy and Blume, Ansel and Peng, Nanyun and Chang, Kai-Wei and Ji, Heng},
  booktitle = {ACL},
  year = {2025}
}

Related Publications

  1. HoneyBee: Data Recipes for Vision-Language Reasoners, CVPR, 2026
  2. MotionEdit: Benchmarking and Learning Motion-Centric Image Editing, CVPR, 2026
  3. LaViDa: A Large Diffusion Language Model for Multimodal Understanding, NeurIPS, 2025
  4. PARTONOMY: Large Multimodal Models with Part-Level Visual Understanding, NeurIPS, 2025
  5. STIV: Scalable Text and Image Conditioned Video Generation, ICCV, 2025
  6. Verbalized Representation Learning for Interpretable Few-Shot Generalization, ICCV, 2025
  7. Contrastive Visual Data Augmentation, ICML, 2025
  8. SlowFast-VGen: Slow-Fast Learning for Action-Driven Long Video Generation, ICLR, 2025
  9. Towards a holistic framework for multimodal LLM in 3D brain CT radiology report generation, Nature Communications, 2025
  10. Enhancing Large Vision Language Models with Self-Training on Image Comprehension, NeurIPS, 2024
  11. CoBIT: A Contrastive Bi-directional Image-Text Generation Model, ICLR, 2024
  12. DesCo: Learning Object Recognition with Rich Language Descriptions, NeurIPS, 2023
  13. "What's 'up' with vision-language models? Investigating their struggle to understand spatial relations.", EMNLP, 2023
  14. Text Encoders are Performance Bottlenecks in Contrastive Vision-Language Models, EMNLP, 2023
  15. MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models, ACL (short), 2023
  16. REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge, CVPR, 2023
  17. Grounded Language-Image Pre-training, CVPR, 2022
  18. How Much Can CLIP Benefit Vision-and-Language Tasks?, ICLR, 2022