Input-Aware Sparse Attention for Real-Time Co-Speech Video Generation

Carnegie Mellon University, PAII Inc.
SIGGRAPH Asia 2025
Visual summary

We introduce a conditional video distillation method for real-time co-speech video generation that leverages human pose conditioning for both an input-aware sparse attention and an input-aware distillation loss. Our student model runs at 25.3 FPS, a 13.1× speedup over its teacher model, while preserving visual quality. Our method significantly improves motion coherence and lip synchronization over a leading few-step causal student model, while reducing common visual degradation in the speaker's face and hands (see yellow box).

Abstract

Diffusion models can synthesize realistic co-speech video from audio for various applications, such as video creation and virtual agents. However, existing diffusion-based methods are slow due to numerous denoising steps and costly attention mechanisms, preventing real-time deployment. In this work, we distill a many-step diffusion video model into a few-step student model. Unfortunately, directly applying recent diffusion distillation methods degrades video quality and falls short of real-time performance.

To address these issues, our new video distillation method leverages input human pose conditioning for both attention and loss functions. We first propose using accurate correspondences between input human pose keypoints across frames to guide attention toward relevant regions, such as the speaker's face, hands, and upper body. This input-aware sparse attention reduces redundant computations and strengthens temporal correspondences of body parts, improving inference efficiency and motion coherence. To further enhance visual quality, we introduce an input-aware distillation loss that improves lip synchronization and hand motion realism. By integrating our input-aware sparse attention and distillation loss, our method achieves real-time performance with improved visual quality compared to recent audio-driven and pose-driven methods. We also conduct extensive experiments showing the effectiveness of our algorithmic design choices.
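To make the loss idea more concrete, below is a minimal, hypothetical sketch of one way a pose-weighted distillation loss could look: face and hand keypoints define a spatial weight map that upweights the student-teacher discrepancy in those regions. The function names (region_weight_map, input_aware_distill_loss), the distance threshold, and the plain L2 objective are illustrative assumptions, not our exact formulation.

# Hypothetical sketch of a pose-weighted distillation loss (illustrative only):
# face/hand keypoints define a spatial weight map that upweights the
# student-teacher discrepancy in those salient regions.
import torch

def region_weight_map(keypoints, h, w, radius=0.08, boost=4.0):
    # Build an (H, W) map equal to `boost` near keypoints and 1 elsewhere.
    # keypoints: (N, 2) tensor of normalized (x, y) positions in [0, 1].
    ys = torch.linspace(0, 1, h).view(h, 1).expand(h, w)
    xs = torch.linspace(0, 1, w).view(1, w).expand(h, w)
    grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)           # (H*W, 2)
    dist = torch.cdist(grid, keypoints).min(dim=1).values.reshape(h, w)
    return torch.where(dist < radius, torch.full_like(dist, boost), torch.ones_like(dist))

def input_aware_distill_loss(student_frames, teacher_frames, keypoints_per_frame):
    # Weighted L2 between student and teacher frames, each of shape (T, C, H, W).
    t, c, h, w = student_frames.shape
    losses = []
    for i in range(t):
        wmap = region_weight_map(keypoints_per_frame[i], h, w)    # (H, W)
        err = (student_frames[i] - teacher_frames[i]) ** 2        # (C, H, W)
        losses.append((wmap * err).mean())
    return torch.stack(losses).mean()

# Toy usage with random tensors and fake keypoints.
student = torch.randn(4, 3, 64, 64)
teacher = torch.randn(4, 3, 64, 64)
keypoints = [torch.rand(10, 2) for _ in range(4)]
print(input_aware_distill_loss(student, teacher, keypoints))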



Method Overview


Our attention mechanism selectively focuses on tokens within salient body regions and their corresponding areas in temporally relevant frames. (a) We first apply global masking, which restricts attention to the K most similar past frames based on pose similarity. (b) We then apply local masking, which limits inter-frame attention to matched body-part regions (e.g., face, hands) to enhance temporal coherence. (c) Our input-aware attention masking integrates both global and local masks to form an efficient, structured sparse attention pattern.
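As a concrete illustration, the sketch below builds a simplified version of this combined mask: the global mask keeps, for each frame, only the K pose-most-similar past frames, and the local mask limits inter-frame attention to tokens carrying the same body-part label (face-to-face, hand-to-hand, and so on) while leaving intra-frame attention unrestricted. The per-frame pose descriptors, per-token body-part labels, and function names are assumptions for illustration; the dense boolean mask is for clarity only, whereas in practice a sparse attention kernel would skip the masked entries to realize the speedup.

# Simplified sketch of an input-aware attention mask (illustrative only):
# a global frame-level mask combined with a local body-part-level mask.
import torch

def global_frame_mask(pose_feats, k):
    # pose_feats: (T, D) per-frame pose descriptors. Returns a (T, T) bool mask
    # where entry (i, j) is True if frame i may attend to frame j.
    t = pose_feats.shape[0]
    sim = -torch.cdist(pose_feats, pose_feats)                  # higher = more similar poses
    causal_past = torch.ones(t, t).tril(diagonal=-1).bool()     # strictly past frames
    sim = sim.masked_fill(~causal_past, float("-inf"))
    keep = torch.topk(sim, k=min(k, t), dim=1).indices          # K most similar past frames
    mask = torch.zeros(t, t, dtype=torch.bool)
    mask[torch.arange(t)[:, None], keep] = True
    mask &= causal_past                                         # drop placeholder picks
    mask |= torch.eye(t, dtype=torch.bool)                      # always attend within the current frame
    return mask

def local_token_mask(part_labels):
    # part_labels: (T, N) integer body-part label per token (derived from pose
    # keypoints). Inter-frame attention is limited to matched body parts;
    # intra-frame attention is left unrestricted.
    t, n = part_labels.shape
    flat_parts = part_labels.reshape(-1)                        # (T*N,)
    frame_idx = torch.arange(t).repeat_interleave(n)            # (T*N,)
    same_frame = frame_idx[:, None] == frame_idx[None, :]
    same_part = flat_parts[:, None] == flat_parts[None, :]
    return same_frame | same_part

def input_aware_attention_mask(pose_feats, part_labels, k=3):
    # Combine the global and local masks into one sparse attention mask.
    t, n = part_labels.shape
    g = global_frame_mask(pose_feats, k)                              # (T, T)
    g = g.repeat_interleave(n, dim=0).repeat_interleave(n, dim=1)     # (T*N, T*N)
    return g & local_token_mask(part_labels)

# Toy usage: 6 frames, 16 tokens per frame, 4 body-part labels.
pose = torch.randn(6, 32)
labels = torch.randint(0, 4, (6, 16))
mask = input_aware_attention_mask(pose, labels, k=3)
print(mask.shape, mask.float().mean())   # mean reflects the resulting sparsity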



Comparison to Baselines

To comprehensively evaluate the effectiveness of our model, we compare it with state-of-the-art open-source methods in both audio-driven and pose-driven video generation settings.


Audio-driven comparison

Our method demonstrates clear improvements over existing audio-driven methods in lip-audio synchronization, expressive hand gestures, and overall visual quality. Lip movements align more naturally with the speech content, and the generated videos are noticeably sharper and more realistic than the often blurry baseline results.


Pose-driven comparison

Our method generates more natural lip and hand animations than pose-driven baselines. Existing pose-driven methods often produce stiff or unnatural movements in these critical regions. In contrast, our model maintains high fidelity and realism, with lifelike facial and hand animations, while achieving significantly faster inference.



Gallery

Given only a single static reference image and an input audio clip, our model synthesizes realistic and expressive videos. These results demonstrate its ability to produce natural facial expressions, fluid body movements, and accurate lip synchronization in real time.



Acknowledgements

We would like to thank Kangle Deng, Muyang Li, Nupur Kumari, Sheng-Yu Wang, Maxwell Jones, and Gaurav Parmar for their insightful feedback and input that contributed to this work. The project is partly supported by Ping An Research.

BibTeX

@inproceedings{lu2025iasa,
  title     = {Input-Aware Sparse Attention for Real-Time Co-Speech Video Generation},
  author    = {Lu, Beijia and Chen, Ziyi and Xiao, Jing and Zhu, Jun-Yan},
  booktitle = {ACM SIGGRAPH Asia},
  year      = {2025}
}