BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250522T212948Z
LOCATION:Mile High 1
DTSTART;TZID=America/Denver:20240730T104500
DTEND;TZID=America/Denver:20240730T121500
UID:siggraph_SIGGRAPH 2024_sess105@linklings.com
SUMMARY:NeRFs and Lighting
DESCRIPTION:EyeIR: Single Eye Image Inverse Rendering in the Wild\n\nWe pr
 opose a method to decompose a single eye image into albedo, shading, specu
 lar, normal, and illumination components. We create a diverse synthetic eye
  dataset and design a synthetic-to-real adaptation framework for self-supe
 rvised learning on real images. We also design a method which specificall.
 ..\n\n\nShijun Liang (Beihang University); Haofei Wang (Peng Cheng Laborat
 ory); and Feng Lu (Beihang University, Peng Cheng Laboratory)\n-----------
 ----------\nLightFormer: Light-oriented Global Neural Rendering in Dynamic
  Scene\n\nThe generation of global illumination in real-time has been a lo
 ng-standing challenge in the graphics community. This work presents a neur
 al rendering approach inspired by many-lights methods and the Transformer
 model, dubbed LightFormer, that can generate realistic global illumination for
  fully dynamic...\n\n\nHaocheng Ren (State Key Laboratory of CAD & CG, Zhe
 jiang University); Yuchi Huo (State Key Laboratory of CAD & CG, Zhejiang U
 niversity; Zhejiang Lab); Yifan Peng (University of Hong Kong); and Hongta
 o Sheng, Weidong Xue, Hongxiang Huang, Jingzhen Lan, Rui Wang, and Hujun B
 ao (State Key Laboratory of CAD & CG, Zhejiang University)\n--------------
 -------\nNeLT: Object-oriented Neural Light Transfer\n\nOur work presents 
 object-oriented neural light transfer (NeLT), a novel modular neural repre
 sentation of the dynamic light transport between an object and the en
 vironment. It enables interactive rendering with global illumination for d
 ynamic scenes and achieves comparable quality to the recent ...\n\n\nChuan
 kun Zheng, Yuchi Huo, Shaohua Mo, Zhihua Zhong, and Zhizhen Wu (Zhejiang U
 niversity, State Key Lab of CAD&CG); Wei Hua (Zhejiang Lab); Rui Wang and 
 Hujun Bao (Zhejiang University, State Key Lab of CAD&CG)\n----------------
 -----\nLite2Relight: 3D-aware Single Image Portrait Re
 lighting\n\nLite2Relight presents a novel approach for achieving 3D consis
 tent viewpoint editing and relighting in portraits at interactive speeds. 
 Lite2Relight expands upon the generative capabilities of EG3D with an effi
 cient lighting manipulation in the latent manifold, leveraging a lightstag
 e dataset to mod...\n\n\nPramod Rao (Max Planck Institute for Informatics;
  Saarbrücken Research Center for Visual Computing, Interaction and Artific
 ial Intelligence (VIA)); Gereon Fox (Max Planck Institute for Informatics)
 ; Abhimitra Meka (Google AR/VR); Mallikarjun B R and Fangneng Zhan (Max Pl
 anck Institute for Informatics); Tim Weyrich (Friedrich-Alexander-Universi
 tät Erlangen-Nürnberg (FAU)); Bernd Bickel (ETH Zürich, Institute of Scien
 ce and Technology Austria (ISTA)); Hanspeter Pfister (Harvard University);
  Wojciech Matusik (Massachusetts Institute of Technology (MIT)); Mohamed E
 lgharib (Max Planck Institute for Informatics); and Christian Theobalt (Ma
 x Planck Institute for Informatics; Saarbrücken Research Center for Visual
  Computing, Interaction and Artificial Intelligence (VIA))\n--------------
 -------\nNeRFs and Lighting - Interactive Discussion\n\nAfter the summary 
 presentations, attendees will participate in an interactive discussion. Di
 stributed around the room will be a series of poster boards for authors to
  gather around with the audience. Authors are invited to bring any materia
 l related to their paper that could instigate further conver...\n\n-------
 --------------\nNeRF as a Non-distant Environment Emitter in Physics-based
  Inverse Rendering\n\nWe propose utilizing NeRF as a non-distant environme
 nt lighting model in an inverse rendering pipeline. We demonstrate that ou
 r NeRF-based emitter more precisely models scene lighting than the convent
 ional environment map, consequently enhancing the accuracy of inverse rend
 ering.\n\n\nJingwang Ling, Ruihan Yu, and Feng Xu (Tsinghua University); C
 hun Du (Tibet University); and Shuang Zhao (University of California, Irvin
 e)\n---------------------\n3D Gaussian Splatting With Deferred Reflection\
 n\nWe present a deferred shading method to effectively render specular ref
 lection with Gaussian splatting, achieving high-quality rendering superior
  to state-of-the-art methods for both synthetic and real-world scenes. It 
 runs at real-time frame rates almost identical to vanilla Gaussian splatti
 ng and ...\n\n\nKeyang Ye, Qiming Hou, and Kun Zhou (Zhejiang University)\
 n\nInterest Area: Research & Education\n\nKeyword: Geometry, Modeling, Ren
 dering\n\nRegistration Category: Full Conference, Full Conference Supporte
 r, Virtual Access, Exhibitor Full Conference, Tuesday\n\nSession Chair: Ma
 rkus Steinberger (Graz University of Technology, Huawei)
END:VEVENT
END:VCALENDAR
