BVSR-EvD: Blurry Video Space-Time Super-Resolution With Events via Diffusion Models

WM Weng and YY Zhang and WH Xu and ZY Xiao and ZW Xiong, IEEE TRANSACTIONS ON IMAGE PROCESSING, 34, 8390-8405 (2025).

DOI: 10.1109/TIP.2025.3642622

Video restoration from low-resolution, low-frame-rate blurry sources remains challenging due to insufficient data priors. In this paper, we propose BVSR-EvD, which leverages event cameras and diffusion models to boost blurry video space-time super-resolution. Specifically, we identify three distinct data priors from the event-video dual modalities: a motion prior from events, a content prior from videos, and a physical prior from their integration, contributing to temporal stability, content preservation, and detail enhancement, respectively. To exploit these data priors effectively, BVSR-EvD introduces the Trident Diffusion Model (Trident-DM), which decomposes each denoising step into a trident decoupling stage and an adaptive self-composition stage. The former employs single-modal and dual-modal meta-networks to extract the three data priors, while the latter dynamically integrates them through learned prior-aware weight maps. BVSR-EvD achieves up to 8x spatial and 64x temporal super-resolution from blurry videos, surpassing existing methods on public video datasets.
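The adaptive self-composition stage described above can be sketched in miniature: three prior feature maps are fused pixel-wise through weight maps that are normalized to sum to one at each location. This is a minimal NumPy illustration of that fusion idea only; the function name, tensor shapes, and the use of a softmax over the prior axis are assumptions for illustration, not the paper's actual implementation (where the weight logits would be predicted by a learned network at each denoising step).

```python
import numpy as np

def softmax(x, axis=0):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_self_composition(priors, weight_logits):
    """Fuse per-prior feature maps with pixel-wise weight maps (illustrative sketch).

    priors:        (3, H, W) array -- motion, content, and physical prior features
    weight_logits: (3, H, W) array -- hypothetical learned logits, one map per prior
    """
    w = softmax(weight_logits, axis=0)   # prior-aware weight maps; sum to 1 at each pixel
    return (w * priors).sum(axis=0)      # convex combination of the three priors per pixel

# toy usage with random features standing in for extracted priors
rng = np.random.default_rng(0)
priors = rng.standard_normal((3, 4, 4))
logits = rng.standard_normal((3, 4, 4))
fused = adaptive_self_composition(priors, logits)  # shape (4, 4)
```

Because the softmax weights form a convex combination at every pixel, the fused feature always lies between the per-pixel minimum and maximum of the three priors, so no single prior can push the output outside the range the others support.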
