BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Linklings LLC//NONSGML Linklings//EN
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250522T212948Z
LOCATION:Mile High 2A
DTSTART;TZID=America/Denver:20240729T090000
DTEND;TZID=America/Denver:20240729T103000
UID:siggraph_SIGGRAPH 2024_sess136@linklings.com
SUMMARY:Material Texture Generation and Painting
DESCRIPTION:EASI-Tex: Edge-Aware Mesh Texturing from Single Image\n\nWe in
 troduce a novel approach for single-image mesh texturing, which employs a 
 pre-trained image diffusion model with judicious conditioning to seamlessl
 y transfer texture from a single image to a given 3D mesh object, without 
 any optimization or training.\n\n\nSai Raj Kishore Perla, Yizhi Wang, and 
 Ali Mahdavi-Amiri (Simon Fraser University) and Hao Zhang (Simon Fraser Un
 iversity, Amazon)\n---------------------\nDreamMat: High-quality PBR Mater
 ial Generation With Geometry- and Light-aware Diffusion Models\n\nThis pap
 er introduces DreamMat, a novel approach for creating high-quality appeara
 nces on an untextured mesh by generating PBR materials from a geometry- an
 d light-aware diffusion model. The resulting PBR materials are free from b
 aked-in lighting and shadows, suitable for applications in gaming or f...\
 n\n\nYuqing Zhang (Zhejiang University; State Key Lab CAD&CG, Zhejiang Uni
 versity, ZJU-Tencent Game and Intelligent Graphics Innovation Technology J
 oint Lab); Yuan Liu (Tencent Technology); Zhiyu Xie (Zhejiang University; 
 State Key Lab CAD&CG, Zhejiang University, ZJU-Tencent Game and Intelligen
 t Graphics Innovation Technology Joint Lab); Lei Yang (Tencent Technology,
  Bournemouth University); Zhongyuan Liu, Mengzhou Yang, Runze Zhang, Qilon
 g Kou, and Cheng Lin (Tencent Technology); Wenping Wang (Texas A&M Univers
 ity); and Xiaogang Jin (Zhejiang University; State Key Lab of CAD&CG, Zhej
 iang University)\n---------------------\nMaterial Texture Generation and P
 ainting - Interactive Discussion\n\nAfter the summary presentations, atten
 dees will participate in an interactive discussion. Distributed around the
  room will be a series of poster boards for authors to gather around with 
 the audience. Authors are invited to bring any material related to their p
 aper that could instigate further conver...\n\n---------------------\nTexP
 ainter: Generative Mesh Texturing With Multi-view Consistency\n\nWe propos
 e a novel texture generation method with multi-view consistency using a pr
 e-trained 2D diffusion model. Drawing on the principle of the DDIM scheme
  and its adept prediction of noisy latents, our method focuses on explici
 tly controlling the consistency of texture while preserving the denoisin
 g proc...\n
 \n\nHongkun Zhang (Southeast University); Zherong Pan (LightSpeed Studios)
 ; Congyi Zhang (University of British Columbia, TransGP); Lifeng Zhu (Sout
 heast University); and Xifeng Gao (LightSpeed Studios)\n------------------
 ---\nDiffusion Texture Painting\n\nWe present a technique that leverages 2
 D generative diffusion models for interactive texture painting on the surf
 ace of 3D meshes. Unlike existing texture painting systems, our method all
 ows artists to paint with any complex image texture. Our brush generates s
 eamless strokes in real-time and inpain...\n\n\nAnita Hu (NVIDIA); Nishkri
 t Desai (University of Toronto, Vector Institute); Hassan Abu Alhaija (NVI
 DIA); Seung Wook Kim (NVIDIA, University of Toronto); and Maria Shugrina (
 NVIDIA)\n---------------------\nMaPa: Text-driven Photorealistic Material 
 Painting for 3D Shapes\n\nThis paper aims to generate materials for 3D mes
 hes from text descriptions. We propose to generate segment-wise procedural
  material graphs as the appearance representation, which supports high-qua
 lity rendering and provides substantial flexibility in editing. Extensive 
 experiments demonstrate superi...\n\n\nShangzhan Zhang (State Key Lab of C
 AD and CG, Zhejiang University; Ant Group); Sida Peng, Tao Xu, Yuanbo Yang
 , and Tianrun Chen (Zhejiang University); Nan Xue and Yujun Shen (Ant Grou
 p); Hujun Bao (State Key Lab of CAD and CG, Zhejiang University); Ruizhen 
 Hu (Shenzhen University); and Xiaowei Zhou (State Key Lab of CAD and CG, Z
 hejiang University)\n---------------------\nTexSliders: Diffusion-based Te
 xture Editing in CLIP Space\n\nWe propose a novel, diffusion-based approac
 h for texture editing. We define editing directions using simple text prom
 pts, map these to CLIP image-embedding space, and project the directions t
 o a CLIP subspace that minimizes identity variations. Our editing pipeline
  facilitates the creation of arbitr...\n\n\nJulia Guerrero-Viu (Universida
 d de Zaragoza), Milos Hasan and Arthur Roullier (Adobe Research), Midhun H
 arikumar (Adobe), Yiwei Hu and Paul Guerrero (Adobe Research), Diego Gutié
 rrez and Belen Masia (Universidad de Zaragoza), and Valentin Deschaintre (
 Adobe Research)\n\nInterest Area: Research & Education\n\nKeyword: AI, Ren
 dering\n\nRegistration Category: Full Conference, Full Conference Supporte
 r, Virtual Access, Exhibitor Full Conference, Monday\n\nSession Chair: Tam
 y Boubekeur (Adobe)
END:VEVENT
END:VCALENDAR
