DiffPoseTalk: Speech-driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models
Description
DiffPoseTalk introduces a diffusion-based system for generating speech-driven facial animations and head poses, featuring example-based style control through contrastive learning. It overcomes the scarcity of 3D talking-face data by using 3DMM parameters reconstructed from a newly developed audio-visual dataset, enabling the generation of diverse and stylistic motions.
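To make the description concrete, the sketch below shows the kind of pipeline it implies: a diffusion denoiser that predicts per-frame 3DMM expression and head-pose parameters, conditioned on audio features and a style embedding (which the paper obtains from a contrastively trained style encoder). This is not the authors' implementation; the module names, dimensions, and the DDIM-style sampling update are assumptions made purely for illustration.

# Minimal sketch (not the authors' code): a diffusion sampler for speech-driven
# motion. The denoiser predicts clean 3DMM expression + head-pose parameters
# from noisy motion, conditioned on audio features and a style embedding.
# All names, dimensions, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class MotionDenoiser(nn.Module):
    """Hypothetical denoiser: noisy motion + audio + style + timestep -> clean motion."""

    def __init__(self, motion_dim=56, audio_dim=128, style_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(motion_dim + audio_dim + style_dim + 1, hidden),
            nn.GELU(),
            nn.Linear(hidden, motion_dim),
        )

    def forward(self, noisy_motion, audio_feat, style_emb, t):
        # Broadcast the per-sequence style embedding and scalar timestep across frames.
        T = noisy_motion.shape[1]
        style = style_emb.unsqueeze(1).expand(-1, T, -1)
        t_feat = t.float().view(-1, 1, 1).expand(-1, T, 1)
        x = torch.cat([noisy_motion, audio_feat, style, t_feat], dim=-1)
        return self.net(x)  # predicts the clean motion x0


@torch.no_grad()
def sample_motion(denoiser, audio_feat, style_emb, num_steps=50):
    """Toy sampling loop: start from Gaussian noise and iteratively denoise."""
    B, T, _ = audio_feat.shape
    motion = torch.randn(B, T, 56)  # expression + head-pose parameters (assumed split)
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    for t in reversed(range(num_steps)):
        x0_pred = denoiser(motion, audio_feat, style_emb, torch.full((B,), t))
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        # DDIM-style deterministic step toward the predicted clean motion.
        eps = (motion - a_t.sqrt() * x0_pred) / (1.0 - a_t).sqrt()
        motion = a_prev.sqrt() * x0_pred + (1.0 - a_prev).sqrt() * eps
    return motion  # per-frame 3DMM expression and head-pose parameters


# Usage with random placeholder inputs:
denoiser = MotionDenoiser()
audio = torch.randn(1, 100, 128)  # e.g., per-frame audio features from a speech encoder
style = torch.randn(1, 128)       # style embedding from a contrastively trained encoder
motion = sample_motion(denoiser, audio, style)
print(motion.shape)  # torch.Size([1, 100, 56])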
Event Type
Technical Paper
Time
Monday, 29 July 2024, 11:15am - 11:25am MDT
Location
Mile High 1
ACM Digital Library
Journal Papers' PDFs
Conference Papers' PDFs
Research & Education
Livestreamed
Recorded
Animation
Geometry
Modeling
Full Conference
Full Conference Supporter
Virtual Access
Exhibitor Full Conference
Monday