Yangguang Li
CUHK
Date
October 19, 2025
Time
1:00 PM – 5:00 PM (HST)
Location
Honolulu, Hawai'i
Diffusion models and autoregressive models have achieved groundbreaking progress in image and video generation, enabling the synthesis of high-fidelity content from text or image conditions. However, extending these successes to 3D generation still faces significant challenges. Early optimization-based methods (e.g., DreamFusion) and reconstruction-based methods (e.g., LRM) laid the groundwork for this field, but they often suffer from limited scalability, insufficient generation quality, or poor generalization. Inspired by the breakthroughs in image and video generation, diffusion- and autoregressive-based methods for 3D generation have gradually emerged, opening new directions for 3D asset creation.
This tutorial provides a systematic introduction to diffusion- and autoregressive-based 3D content generation, with a focus on data processing, algorithmic design, model training, and application prospects. Specifically, we will cover:
The goal of this tutorial is to bridge fundamental research in 3D generative modeling with practical applications in 3D asset creation. It is designed for a diverse audience:
By consolidating current achievements and exploring future directions, this tutorial aims to give participants a comprehensive understanding of the core techniques and open challenges in 3D asset generation, and to provide inspiration for further academic research and industrial applications.