BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250522T212948Z
LOCATION:Mile High 1
DTSTART;TZID=America/Denver:20240729T104500
DTEND;TZID=America/Denver:20240729T121500
UID:siggraph_SIGGRAPH 2024_sess116@linklings.com
SUMMARY:3D Face Generator and Animation
DESCRIPTION:S3: Speech, Script, and Scene Driven Head and Eye Animation\n\
 nOur method generates 3D head and eye animations for characters in convers
 ation using audio, scripts, and scene inputs. It incorporates animator goa
 ls and psycho-linguistic insights, producing realistic animations that ali
 gn with human behavior and compare favorably with existing approaches, as
  conf...\n\n\nYifang Pan (University of Toronto; Jali Research, Canada); R
 ishabh Agrawal (Jali Research, Canada); and Karan Singh (University of Tor
 onto; Jali Research, Canada)\n---------------------\nMedia2Face: Co-speech
  Facial Animation Generation With Multi-modality Guidance\n\nMedia2Face, t
 rained on the largest-ever co-speech 3D facial animation dataset, can gene
 rate highly realistic and expressive 3D facial animations from diverse mul
 timedia inputs — audio, text, and images. With Media2Face, avatars can now
  embody complex inner emotions with unprecedented fid...\n\n\nQingcheng Zh
 ao, Pengyu Long, and Qixuan Zhang (ShanghaiTech University, Deemos Technol
 ogy); Dafei Qin (University of Hong Kong, Deemos Technology); Han Liang (S
 hanghaiTech University); Longwen Zhang (ShanghaiTech University, Deemos Te
 chnology); Yingliang Zhang (DGene Digital Technology Co., Ltd.); and Jingy
 i Yu and Lan Xu (ShanghaiTech University)\n---------------------\n3D Face 
 Generator and Animation - Interactive Discussion\n\nAfter the summary pres
 entations, attendees will participate in an interactive discussion. Distri
 buted around the room will be a series of poster boards for authors to gat
 her around with the audience. Authors are invited to bring any material re
 lated to their paper that could instigate further conver...\n\n-----------
 ----------\nDiffPoseTalk: Speech-driven Stylistic 3D Facial Animation and 
 Head Pose Generation via Diffusion Models\n\nDiffPoseTalk introduces a nov
 el diffusion-based system for generating speech-driven facial animations a
 nd head poses, featuring example-based style control through contrastive l
 earning. It overcomes the scarcity of 3D talking face data by utilizing re
 constructed 3DMM parameters from a newly develope...\n\n\nZhiyao Sun, Tian
  Lv, Sheng Ye, Matthieu Lin, and Jenny Sheng (Tsinghua University); Yu-Hui
  Wen (Beijing Jiaotong University); Minjing Yu (Tianjin University); and Y
 ong-Jin Liu (Tsinghua University)\n---------------------\nPortrait3D: Text
 -guided High-quality 3D Portrait Generation Using Pyramid Representation a
 nd GANs Prior\n\nWe present Portrait3D, a novel text-to-3D-portrait genera
 tion framework that produces high-quality, view-consistent, realistic, and
  canonical 3D portraits that are in alignment with the input text prompts.
 \n\n\nYiqian Wu, Hao Xu, and Xiangjun Tang (Zhejiang University; State Key
  Lab of CAD&CG, Zhejiang University); Xien Chen (Yale University); Siyu Ta
 ng (ETH Zürich); Zhebin Zhang and Chen Li (OPPO US Research Center); and X
 iaogang Jin (Zhejiang University; State Key Lab of CAD&CG, Zhejiang Univer
 sity)\n---------------------\nHeadArtist: Text-conditioned 3D Head Generat
 ion With Self Score Distillation\n\nWe present HeadArtist for 3D head gene
 ration and editing following human-language descriptions. With the propose
 d self-score distillation (SSD), we come up with an efficient pipeline that o
 ptimizes a parameterized 3D head model under the supervision of a landmark
 -guided ControlNet itself. Experimental ...\n\n\nHongyu Liu (HKUST), Xuan 
 Wang (Ant Group), Ziyu Wan (City University of Hong Kong), Yujun Shen (Ant
  Group), Yibing Song (Alibaba DAMO Academy), Jing Liao (City University of
  Hong Kong), and Qifeng Chen (HKUST)\n---------------------\nToonify3D: St
 yleGAN-based 3D Stylized Face Generator\n\nWe present a StyleGAN-based 3D 
 stylized face generation framework, Toonify3D, which automatically creates
  full-head 3D stylized avatar mesh models and supports GAN-based 3D facial
  expression editing.\n\n\nWonjong Jang, Yucheol Jung, Hyomin Kim, Gwangjin
  Ju, Chaewon Son, Jooeun Son, and Seungyong Lee (POSTECH)\n\nInterest Area
 : Research & Education\n\nKeyword: Animation, Geometry, Modeling\n\nRegist
 ration Category: Full Conference, Full Conference Supporter, Virtual Acces
 s, Exhibitor Full Conference, Monday\n\nSession Chair: Nanxuan Zhao (Adobe
  Research)
END:VEVENT
END:VCALENDAR
