AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion

Abstract: The task of video generation requires synthesizing visually realistic and temporally coherent video frames. Existing methods primarily use asynchronous auto-regressive models or synchronous diffusion models to address this challenge. However, asynchronous auto-regressive models often suffer from inconsistencies between training and inference, leading to issues such as error accumulation, while synchronous diffusion models are limited by their reliance on a rigid sequence length. To address these issues, we introduce Auto-Regressive Diffusion (AR-Diffusion), a novel model that combines the strengths of auto-regressive and diffusion models for flexible, asynchronous video generation. Specifically, our approach leverages diffusion to gradually corrupt video frames in both training and inference, reducing the discrepancy between these phases. Inspired by auto-regressive generation, we incorporate a non-decreasing constraint on the corruption timesteps of individual frames, ensuring that earlier frames remain clearer than subsequent ones. This setup, together with temporal causal attention, enables flexible generation of videos with varying lengths while preserving temporal coherence. In addition, we design two specialized timestep schedulers: the FoPP scheduler for balanced timestep sampling during training, and the AD scheduler for flexible timestep differences during inference, supporting both synchronous and asynchronous generation. Extensive experiments demonstrate the superiority of our proposed method, which achieves competitive and state-of-the-art results across four challenging benchmarks.
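To make the two scheduling ideas concrete, the following sketch illustrates (a) drawing per-frame training timesteps under the non-decreasing constraint, so earlier frames carry less noise than later ones, and (b) an asynchronous inference schedule in which each frame trails its predecessor by a fixed timestep gap. All names and the uniform sampling strategy are hypothetical simplifications for illustration; they are not the paper's actual FoPP or AD scheduler implementations.

```python
import random

T_MAX = 1000  # assumed total number of diffusion timesteps

def sample_training_timesteps(num_frames: int, seed=None) -> list:
    """Draw one corruption timestep per frame, then sort so the
    sequence is non-decreasing along the frame axis: earlier frames
    are corrupted less and therefore stay clearer than later ones.
    (Uniform sampling here is a stand-in for the balanced FoPP
    sampling described in the abstract.)"""
    rng = random.Random(seed)
    return sorted(rng.randint(0, T_MAX) for _ in range(num_frames))

def async_denoise_schedule(num_frames: int, gap: int) -> list:
    """Build an asynchronous inference schedule. At denoising step s,
    frame i sits at timestep clamp(T_MAX - s + i * gap, 0, T_MAX),
    so each frame lags the previous one by `gap` timesteps.
    gap = 0 recovers synchronous diffusion (all frames denoised in
    lockstep); gap = T_MAX recovers fully auto-regressive,
    frame-by-frame generation."""
    schedule = []
    total_steps = T_MAX + gap * (num_frames - 1)
    for s in range(total_steps + 1):
        row = [min(T_MAX, max(0, T_MAX - s + i * gap))
               for i in range(num_frames)]
        schedule.append(row)
        if all(t == 0 for t in row):  # every frame fully denoised
            break
    return schedule
```

Note how the per-step rows are always non-decreasing across frames, matching the training-time constraint, so the model sees the same ordering of noise levels in both phases.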

Videos presented on this page are encoded in H.264 format and can be played in Google Chrome.

128-frame Video Generation on FaceForensics

Diffusion Forcing

AR-Diffusion

16-frame Video Generation on FaceForensics

Diffusion Forcing

AR-Diffusion

16-frame Video Generation on Sky-Timelapse

Latte

Diffusion Forcing

AR-Diffusion

16-frame Video Generation on Taichi-HD

Latte

Diffusion Forcing

AR-Diffusion

16-frame Video Generation on UCF-101

Latte

Diffusion Forcing

AR-Diffusion