SlowFast-VGen: Slow-Fast Learning for Action-Driven Long Video Generation
Yining Hong, Beide Liu, Maxine Wu, Yuanhao Zhai, Kai-Wei Chang, Linjie Li, Kevin Lin, Chung-Ching Lin, Jianfeng Wang, Zhengyuan Yang, Ying Nian Wu, and Lijuan Wang, in ICLR, 2025.
Bib Entry
@inproceedings{hong2025slowfast,
  title     = {SlowFast-VGen: Slow-Fast Learning for Action-Driven Long Video Generation},
  author    = {Hong, Yining and Liu, Beide and Wu, Maxine and Zhai, Yuanhao and Chang, Kai-Wei and Li, Linjie and Lin, Kevin and Lin, Chung-Ching and Wang, Jianfeng and Yang, Zhengyuan and Wu, Ying Nian and Wang, Lijuan},
  booktitle = {ICLR},
  year      = {2025}
}
Related Publications
- HoneyBee: Data Recipes for Vision-Language Reasoners, CVPR, 2026
- MotionEdit: Benchmarking and Learning Motion-Centric Image Editing, CVPR, 2026
- LaViDa: A Large Diffusion Language Model for Multimodal Understanding, NeurIPS, 2025
- PARTONOMY: Large Multimodal Models with Part-Level Visual Understanding, NeurIPS, 2025
- STIV: Scalable Text and Image Conditioned Video Generation, ICCV, 2025
- Verbalized Representation Learning for Interpretable Few-Shot Generalization, ICCV, 2025
- Contrastive Visual Data Augmentation, ICML, 2025
- SYNTHIA: Novel Concept Design with Affordance Composition, ACL, 2025
- Towards a holistic framework for multimodal LLM in 3D brain CT radiology report generation, Nature Communications, 2025
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension, NeurIPS, 2024
- CoBIT: A Contrastive Bi-directional Image-Text Generation Model, ICLR, 2024
- DesCo: Learning Object Recognition with Rich Language Descriptions, NeurIPS, 2023
- What's 'up' with vision-language models? Investigating their struggle to understand spatial relations, EMNLP, 2023
- Text Encoders are Performance Bottlenecks in Contrastive Vision-Language Models, EMNLP, 2023
- MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models, ACL (short), 2023
- REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge, CVPR, 2023
- Grounded Language-Image Pre-training, CVPR, 2022
- How Much Can CLIP Benefit Vision-and-Language Tasks?, ICLR, 2022