STIV: Scalable Text and Image Conditioned Video Generation

Zongyu Lin, Wei Liu, Chen Chen, Jiasen Lu, Wenze Hu, Tsu-Jui Fu, Jesse Allardice, Zhengfeng Lai, Liangchen Song, Bowen Zhang, Cha Chen, Yiran Fei, Lezhi Li, Yizhou Sun, Kai-Wei Chang, and Yinfei Yang, in ICCV, 2025.

Download the full text


Abstract

We present a simple and scalable text- and image-conditioned video generation method. Our approach, named STIV, integrates a variable number of image conditions into a Diffusion Transformer (DiT) through frame replacement. This design enables STIV to perform both text-to-video (T2V) and text-image-to-video (TI2V) tasks simultaneously, as well as long video generation through autoregressive rollouts. STIV can also be easily extended to applications such as video prediction, frame interpolation, and multi-view generation. Through comprehensive ablation studies on T2I, T2V, TI2V, and long video generation, STIV demonstrates strong performance despite its simple design. An 8.7B model at 512² resolution achieves 83.1 on VBench T2V, surpassing leading open- and closed-source models such as CogVideoX-5B, Pika, Kling, and Gen-3. The same-sized model also achieves a state-of-the-art 90.1 on the VBench I2V task at 512² resolution. Combining all of these components, we scale the model up to 540p with over 200 frames. By providing a transparent recipe for building cutting-edge video generation models, we aim to empower future research and accelerate progress in video generation.
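The frame-replacement conditioning described above can be sketched in a few lines of PyTorch. This is a minimal illustration of the idea rather than the authors' implementation: the names `dit`, `image_latents`, and `cond_mask` are hypothetical, and the actual STIV model adds further ingredients (e.g., the training recipe for joint T2V/TI2V operation) not shown here.

import torch

def frame_replacement_step(dit, noisy_latents, image_latents, cond_mask, text_emb, t):
    """One denoising step with image conditioning via frame replacement.

    Hypothetical sketch: frames that carry an image condition are
    overwritten with their clean latents before the DiT forward pass,
    so a variable number of conditioning frames is supported.

    noisy_latents: [B, F, C, H, W] noisy video latents at timestep t
    image_latents: [B, F, C, H, W] clean latents (valid where cond_mask is 1)
    cond_mask:     [B, F] with 1 for conditioned frames, 0 otherwise
    """
    m = cond_mask[:, :, None, None, None].float()
    # Replace conditioned frames with their clean (noise-free) latents.
    x = m * image_latents + (1.0 - m) * noisy_latents
    # The DiT denoises the mixed sequence under text conditioning.
    return dit(x, t, text_emb)

if __name__ == "__main__":
    B, F, C, H, W = 1, 8, 4, 32, 32
    dummy_dit = lambda x, t, txt: x  # stand-in for the real transformer
    z = torch.randn(B, F, C, H, W)
    img = torch.zeros(B, F, C, H, W)
    img[:, 0] = torch.randn(B, C, H, W)  # first-frame condition (TI2V)
    mask = torch.zeros(B, F)
    mask[:, 0] = 1.0
    out = frame_replacement_step(dummy_dit, z, img, mask, text_emb=None, t=torch.tensor([500]))
    print(out.shape)  # torch.Size([1, 8, 4, 32, 32])

Under this reading, an all-zero `cond_mask` recovers pure T2V, a first-frame condition gives TI2V, and long video generation follows by autoregressive rollout: the final frames of a generated clip become the image conditions for the next one.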


Bib Entry

@inproceedings{lin2025stiv,
  title = {STIV: Scalable Text and Image Conditioned Video Generation},
  author = {Lin, Zongyu and Liu, Wei and Chen, Chen and Lu, Jiasen and Hu, Wenze and Fu, Tsu-Jui and Allardice, Jesse and Lai, Zhengfeng and Song, Liangchen and Zhang, Bowen and Chen, Cha and Fei, Yiran and Li, Lezhi and Sun, Yizhou and Chang, Kai-Wei and Yang, Yinfei},
  booktitle = {ICCV},
  year = {2025}
}
