
MoConVQ: Unified Physics-based Motion Control via Scalable Discrete Representations
Description
We present MoConVQ, a unified framework that enables simulated avatars to acquire diverse skills from large, unstructured motion datasets. Leveraging a rich and scalable discrete skill representation, MoConVQ supports a broad range of applications, including pose estimation, interactive control, text-to-motion generation, and, more interestingly, integrating motion generation with Large Language Models (LLMs).
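The "VQ" in the name and the phrase "scalable discrete skill representation" suggest a vector-quantization-style encoding of motion. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation: continuous motion latents are snapped to the nearest entries of a learned codebook, yielding discrete skill tokens. All names, shapes, and sizes here are assumptions made for illustration only.

```python
# Illustrative vector-quantization step for a discrete skill representation.
# Codebook size, latent dimension, and function names are assumed, not from MoConVQ.
import numpy as np

rng = np.random.default_rng(0)

codebook_size, latent_dim = 512, 64                      # assumed sizes
codebook = rng.normal(size=(codebook_size, latent_dim))  # learned during training in practice

def quantize(z: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Map continuous motion latents z of shape (T, latent_dim) to the nearest
    codebook entries, returning discrete skill tokens and their embeddings."""
    # Squared Euclidean distance between every latent and every code vector.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    tokens = dists.argmin(axis=1)      # one discrete index per latent frame/segment
    return tokens, codebook[tokens]    # indices and their quantized embeddings

# Example: a short clip encoded into 8 latents becomes 8 discrete tokens.
z = rng.normal(size=(8, latent_dim))
tokens, z_q = quantize(z)
print(tokens)
```

Discrete tokens of this kind are a natural interface for sequence models, which is presumably what makes it straightforward to couple such a motion representation with LLM-style generation, as the description indicates.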
Event Type
Technical Paper
Time
Thursday, 1 August 2024, 2:20pm - 2:30pm MDT
Location
Mile High 4
Session Time & Location
Sunday, 28 July 2024, 6:00pm - 8:45pm MDT, Bluebird Ballroom
Thursday, 1 August 2024, 2:00pm - 3:30pm MDT, Mile High 4
Interest Areas
Research & Education
Recordings
Livestreamed
Recorded
Keywords
Animation
Registration Categories
Full Conference
Full Conference Supporter
Virtual Access
Exhibitor Full Conference