STAR

Spatial-Temporal Augmentation with Text-to-Video Models for Real-World Video Super-Resolution

Rui Xie1*,   Yinhong Liu1*,   Penghao Zhou2,   Chen Zhao1,   Jun Zhou3,   Kai Zhang1,   Zhenyu Zhang1,   Jian Yang1,   Zhenheng Yang2,   Ying Tai1†
1Nanjing University, 2ByteDance, 3Southwest University
*Equal contribution, †Corresponding author

Real-World Videos (upscale ×4)

AIGC Videos (upscale ×4)


STAR is a Spatio-Temporal quality Augmentation framework for Real-world video super-resolution (VSR),
and the first to integrate diverse, powerful text-to-video diffusion priors into this task.

Abstract

Image diffusion models have been adapted for real-world video super-resolution to tackle over-smoothing issues in GAN-based methods. However, these models struggle to maintain temporal consistency, as they are trained on static images, limiting their ability to capture temporal dynamics effectively. Integrating text-to-video (T2V) models into video super-resolution for improved temporal modeling is straightforward. However, two key challenges remain: artifacts introduced by complex degradations in real-world scenarios, and compromised fidelity due to the strong generative capacity of powerful T2V models (e.g., CogVideoX-5B). To enhance the spatio-temporal quality of restored videos, we introduce STAR (Spatial-Temporal Augmentation with T2V models for Real-world video super-resolution), a novel approach that leverages T2V priors to achieve realistic spatial details and robust temporal consistency. Specifically, we introduce a Local Information Enhancement Module (LIEM) before the global attention block to enrich local details and mitigate degradation artifacts. Moreover, we propose a Dynamic Frequency (DF) Loss to reinforce fidelity, guiding the model to focus on different frequency components across diffusion steps.

Method

STAR comprises four components: a VAE, a text encoder, a ControlNet, and a T2V model equipped with a Local Information Enhancement Module (LIEM). Placed before each global attention block, LIEM enriches local details and alleviates artifacts caused by complex real-world degradations. A Dynamic Frequency (DF) Loss adaptively adjusts the constraint on low- and high-frequency components across diffusion steps. With the proposed LIEM and DF loss, STAR achieves high spatio-temporal quality with reduced artifacts and enhanced fidelity; minimal sketches of both components follow.
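To make the first component concrete, here is a minimal PyTorch sketch of a LIEM-style block, assuming a CBAM-like local spatial gate: channel-wise average and max maps are pooled, a small convolution produces a local attention gate, and the features are re-weighted before entering the global attention block. The class names, kernel size, and pooling choices are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class LIEM(nn.Module):
    # Hypothetical sketch of a Local Information Enhancement Module:
    # a local spatial gate computed from channel-wise average/max maps.
    def __init__(self, kernel_size=7):
        super().__init__()
        # Two input channels: the pooled average and max maps.
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # x: (B, C, H, W) latent features of one frame.
        avg_map = x.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)   # (B, 1, H, W)
        gate = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * gate                          # locally re-weighted features

class GlobalBlockWithLIEM(nn.Module):
    # Prepends the local gate to an existing global attention block,
    # so global tokens aggregate features already refined by local context.
    def __init__(self, global_block):
        super().__init__()
        self.liem = LIEM()
        self.block = global_block

    def forward(self, x):
        return self.block(self.liem(x))

The DF loss can be sketched in the same spirit: split prediction and target into low- and high-frequency parts with an FFT low-pass mask, then blend the two errors with a step-dependent weight, so early (noisy) steps emphasize low-frequency structure and late steps emphasize high-frequency detail. The linear schedule, mask radius, and MSE terms below are assumptions for illustration; the paper's exact weighting function may differ.

import torch

def dynamic_frequency_loss(pred, target, t, T, radius=0.25):
    # pred/target: (B, C, H, W); t: (B,) diffusion timesteps; T: total steps.
    B, C, H, W = pred.shape
    # Centred low-pass mask in the 2-D frequency domain.
    fy = torch.fft.fftshift(torch.fft.fftfreq(H, device=pred.device))
    fx = torch.fft.fftshift(torch.fft.fftfreq(W, device=pred.device))
    yy, xx = torch.meshgrid(fy, fx, indexing="ij")
    lp = ((yy**2 + xx**2).sqrt() <= radius * 0.5).to(pred.dtype)

    def split(x):
        # Low-frequency component via masked FFT; high = residual.
        spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
        low = torch.fft.ifft2(torch.fft.ifftshift(spec * lp, dim=(-2, -1))).real
        return low, x - low

    pred_lo, pred_hi = split(pred)
    tgt_lo, tgt_hi = split(target)
    # Linear schedule (an assumption): weight low frequencies at noisy
    # early steps (large t), high frequencies at clean late steps.
    w = (t.float() / T).clamp(0, 1)                       # (B,)
    loss_lo = ((pred_lo - tgt_lo) ** 2).mean(dim=(1, 2, 3))
    loss_hi = ((pred_hi - tgt_hi) ** 2).mean(dim=(1, 2, 3))
    return (w * loss_lo + (1 - w) * loss_hi).mean()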

STAR Demo

Comparison with SOTA

Quantitative Comparison

Qualitative Comparison

Visual comparisons across four slides: Upscale-A-Video, MGLD-VSR, RealViformer, and Ours.

BibTeX

@misc{xie2025starspatialtemporalaugmentationtexttovideo,
  title={STAR: Spatial-Temporal Augmentation with Text-to-Video Models for Real-World Video Super-Resolution},
  author={Rui Xie and Yinhong Liu and Penghao Zhou and Chen Zhao and Jun Zhou and Kai Zhang and Zhenyu Zhang and Jian Yang and Zhenheng Yang and Ying Tai},
  year={2025},
  eprint={2501.02976},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2501.02976},
}