BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Denver
X-LIC-LOCATION:America/Denver
BEGIN:DAYLIGHT
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250522T212947Z
LOCATION:Exhibit Hall F
DTSTART;TZID=America/Denver:20240801T110000
DTEND;TZID=America/Denver:20240801T153000
UID:siggraph_SIGGRAPH 2024_sess219@linklings.com
SUMMARY:Emerging Technologies Installations
DESCRIPTION:KineSway: Simulating Terrain Shaking Sensation by Alternating 
 Tendon Vibration to Ankles\n\nWe propose a cost-effective method to simula
 te ground sway and tilt in virtual environments using kinesthetic illusion
 s. By applying vibratory stimuli to ankle tendons, our technique creates a
  perceptual sensation of movement without physical space requirements or a
 ctual motion, offering a safe and ...\n\n\nEifu Narita, Keigo Ushiyama, Iz
 umi Mizoguchi, and Hiroyuki Kajimoto (University of Electro-Communications
 )\n---------------------\nControlling the Color Appearance of Objects by O
 ptimizing the Illumination Spectrum\n\nWe have developed an innovative lig
 hting system that changes specific target colors while maintaining a 
 natural white appearance, by precisely controlling the spectral power 
 distribution of illumination and exploiting metamerism. It was 
 successfully showcased at the Paris Fashion...\n\n\nMariko Yamaguchi, Masa
 ru Tsuchida, Takahiro Matsumoto, Tetsuro Tokunaga, and Takayoshi Mochiduki
  (NTT Corporation)\n---------------------\nDemonstrating Real-time Slow-mo
 tion Experience Through Parallel Video Presentation\n\nReal-time viewing 
 and slow-motion are incompatible in recording and playback technologies. 
 We propose a method that lets the two partially coexist through parallel 
 video presentation, building on the integrative capacity and temporal 
 nature of cognition. Users can experience the surroundings in slow-motion 
 mod...\n\n\nGoki Muramoto and Hiroto Saito (University of Tokyo), Sohei 
 Wakisaka (Keio University), and Masahiko Inami (University of 
 Tokyo)\n--------------
 -------\nTangible Data: Immersive Data Exploration With Touch\n\nWe demons
 trate an interactive 3D data visualization tool called TangibleData, 
 which combines hand gestures with ultrasound mid-air haptic feedback to 
 support 3D data exploration. We showc
 ase different types of 3D visualization with different data encoding ...\n
 \n\nAyush Bhardwaj and Jin Ryong Kim (University of Texas at Dallas)\n----
 -----------------\nNukabot: Design of Human-Microbe-Computer Entanglement\
 n\nNukabot, a human-computer interaction system, connects fermented food m
 icrobiomes with their human carers through voice communication. The porcel
 ain nukadoko uses chemical sensors, voice recognition, and vocal synthesis
  to monitor fermentation and allow humans to interact and ask questions ab
 out the...\n\n\nDominique Chen (Waseda University)\n---------------------\
 nMirror-Transcending Aerial Imaging System for Multiple Users\n\nWe presen
 t a system that enables multiple users to share the viewing experience of 
 mirror-transcending aerial imaging, in which virtual objects continuously 
 move between mirrored and physical spaces. The prototype looks like a thre
 e-sided mirror, allowing four to five users to participate in the vie...\n
 \n\nMotohiro Makiguchi, Ayami Hoshi, Ayaka Sano, Takahiro Kusabuka, Hirosh
 i Chigira, and Takayoshi Mochizuki (NTT Corporation)\n--------------------
 -\nRobotSketch: An Interactive Showcase of Superfast Design of Legged Robo
 ts\n\nWe showcase an interactive system that lets robot designers direc
 tly 3D sketch and experience robots in real scale from an immersive VR wor
 kspace. The concept robots learn to walk through reinforcement learning in
  a physics simulation, allowing the designer to control them in real time 
 using VR c...\n\n\nJoon Hyub Lee, Hyunsik Oh, Taegyu Jin, Seung-Jun Lee, J
 unwoo Yoon, Jemin Hwangbo, and Seok-Hyung Bae (Korea Advanced Institute of
  Science and Technology (KAIST))\n---------------------\nHolographic Paral
 lax: 3D Holographic Near-eye Displays with Parallax Cues\n\nRecent advance
 ments in holographic displays enable the creation of 4D light field hologr
 ams featuring view-dependent visual effects. Our holographic display proto
 type delivers an experience of state-of-the-art 4D light field holograms a
 nd highlights the significance of parallax cues in enhancing per...\n\n\nS
 eung-Woo Nam and Dongyeon Kim (Seoul National University); Suyeon Choi (St
 anford University); Juhyun Lee and Siwoo Lee (Seoul National University); 
 Manu Gopakumar, Brian Chao, and Gordon Wetzstein (Stanford University); an
 d Yoonchan Jeong (Seoul National University)\n---------------------\nBreak
 ing the Low Sample Rate Equals Low Power Paradigm Using an Event Based Vis
 ion Approach for Hand Tracking in AR\n\nThis demo from Ultraleap presents 
 the first hand tracking pipeline using event cameras on an AR headset. 
 Event cameras' ability to break the frame imaging paradigm, in which low 
 power equates to a low sample rate, moves us closer to low-power spatial 
 computing in a glasses form factor.\n\n\nRyan Page, Paolo Baesso, Rory Cla
 rk, and Greg Baker (Ultraleap)\n---------------------\nA Live Demo of Sing
 le-photon Imaging and Applications\n\nSingle-photon sensors are a rapidly 
 evolving imaging technology featuring the unique ability of sensing light 
 at its finest possible granularity, viz., at the photon level. Our demo wi
 ll exhibit several exciting capabilities of single-photon sensors, which 
 are made possible by an interplay of novel a...\n\n\nSacha Jungerman and 
 Var
 un Sundar (University of Wisconsin-Madison) and Mohit Gupta (University of
  Wisconsin-Madison)\n---------------------\nDemonstrating Magnus: A Magnet
 ic Hand Exoskeleton for Fast and Dexterous Finger Actuation\n\nThis submis
 sion presents "Magnus," a novel hand exoskeleton powered by electromagnets
  that can provide high-speed, dexterous, and yet flexible finger actuation
 . Our demos feature assisted finger drumming that goes beyond original 
 body capabilities and a first-person shooter gaming scenario where two 
 pla...\n\n
 \nJun Nishida (University of Maryland College Park, Sony Computer Science 
 Laboratories); Daisuke Tajima (Sony Computer Science Laboratories); Yasuko
  Namikawa (Sony Computer Science Laboratories, University of Tsukuba); and
  Shunichi Kasahara (Sony Computer Science Laboratories, Okinawa Institute 
 of Science and Technology Graduate University)\n---------------------\nFEE
 LTECH Wear: Enhancing Mixed Reality Experience With Wrist to Finger Haptic
  Attribution\n\nFEELTECH Wear is a system that supports hand interactions 
 with both virtual and real objects, allowing for seamless execution of eac
 h interaction. The hand-mounted device is equipped with four channels of r
 otational skin-stretch tactors at the wrist and vibration tactors at the t
 humb and index finge...\n\n\nRodan Umehara and Harunobu Taguchi (Keio Univ
 ersity Graduate School of Media Design); Arata Horie (Keio University Grad
 uate School of Media Design; commissure, Inc.); Yusuke Kamiyama and Shin S
 akamoto (SPLINE DESIGN HUB Corp.); Hironori Ishikawa (NTT DOCOMO, INC.); a
 nd Kouta Minamizawa (Keio University Graduate School of Media Design)\n---
 ------------------\nHaptoRoom: Using Vibrotactile Floor Interfaces to Enab
 le Reconfigurable Haptic Interaction Onto Any Furniture Surfaces\n\nHaptoR
 oom explores an innovative approach to reconfigurable haptic interaction t
 hrough the use of an array of vibrotactile floor interfaces. This 
 technolog
 y allows for the reconfiguration of haptic feedback to be applied onto any
  furniture surfaces, enhancing user interaction and experience in physica.
 ..\n\n\nKiryu Tsujita and Takatoshi Yoshida (Keio University Graduate Scho
 ol of Media Design); Kohei Kobayashi (Musashino Art University Department 
 of Imaging Arts and Sciences); Arata Horie (Keio University Graduate Schoo
 l of Media Design); Nobuhisa Hanamitsu (Keio University Graduate School of
  Media Design, Enhance Experience Inc.); and Kouta Minamizawa (Keio Univer
 sity Graduate School of Media Design)\n---------------------\nProject Star
 line: A High-fidelity Telepresence System\n\nExperience Project Starline, 
 the first photorealistic telepresence system that achieves a sense of co-p
 resence between remote participants that demonstrably outperforms today's 
 2D videoconferencing systems as measured by participant ratings, meeting r
 ecall, and nonverbal behaviors.\n\n\nJason Lawrence, Ryan Overbeck, Todd P
 rives, Tommy Fortes, Nikki Roth, and Brett Newman (Google)\n--------------
 -------\nEmBelt: A Haptic Device for Extended Reality Experience to Better
  Understand Body Image Concerns-Related Eating Disorders\n\nEmBelt is a ha
 ptic device integrated with VR storytelling that creates an XR experience 
 designed to enhance the understanding of eating disorders related to body 
 image concerns among individuals without such anxieties. The compression l
 evels respond to user interactions, providing a portrayal linked...\n\n\nH
 suan-Hui Yi and Yi-Chun Ko (National Tsing Hua University)\n--------------
 -------\nFIRE: Mid-air Thermo-tactile Display\n\nThis demonstration 
 presents a mid-air thermo-tactile feedback system based on an ultrasound 
 haptic display. 
 Our method involves directing heated airflow toward the focused pressure p
 oint produced by the ultrasound display to concurrently provide thermal an
 d tactile cues in mid-air. We present this system wit...\n\n\nYatharth Sin
 ghal, Haokun Wang, and Jin Ryong Kim (University of Texas at Dallas)\n----
 -----------------\nThe Malleable-Self Experience: Transforming Body Image 
 by Integrating Visual and Whole-body Haptic Stimuli\n\nThe Malleable-Self 
 Experience seamlessly integrates VR visuals and Synesthesia X1 whole-body 
 haptic sensations to induce a malleable perception of one's body image. We
  use integrated visuo-haptic compositions in a particular sequence of step
 s to establish and maintain body ownership of a virtual bod...\n\n\nTanner
  Person (Keio University Graduate School of Media Design); Nobuhisa Hanami
 tsu (Enhance Experience Inc., Keio University Graduate School of Media Des
 ign); Danny Hynds and Sohei Wakisaka (Keio University Graduate School of M
 edia Design); Kota Isobe and Leonard Mochizuki (Enhance Experience Inc.); 
 Tetsuya Mizuguchi (Enhance Experience Inc., Keio University Graduate Schoo
 l of Media Design); and Kouta Minamizawa (Keio University Graduate School 
 of Media Design)\n---------------------\nDynamic Acousto-Caustics in Dual-
 optimized Holographic Fields\n\nWe introduce “Dynamic Acousto-Caustics,” a 
 method that merges acoustofluidics with optics to dynamically manipulate l
 ight caustics by controlling the shape of liquid surfaces. Using computer-
 controlled acoustic fields to deform the surface of a liquid medium, we ge
 nerate dynamic light beh...\n\n\nKoki Nagakura, Tatsuki Fushimi, Ayaka Tsu
 tsui, and Yoichi Ochiai (University of Tsukuba, Research and Development C
 enter for Digital Nature)\n---------------------\nMiruoto: Sports Event At
 mosphere Visual Rendering Through Real-time Image and Sound Processing Sys
 tem\n\nHave you ever imagined what it would be like to watch a sporting 
 match without sound? We developed a system that displays onomatopoeia in 
 real time, representing the sounds of a sports event for both 
 hearing-impaired and hearing audiences. In this way, they can enjoy a 
 manga-like onomatopoeia overlay o
 n the match...\n\n\nGuillaume Gourmelen (Waseda Research Institute for Sci
 ence and Engineering, IWATA Lab); Shutaro Toriya and Eiko Miya (Waseda Res
 earch Institute for Science and Engineering); Naohisa Shioura (AISIN); and
  Hiroyasu Iwata (Waseda Research Institute for Science and Engineering)\n-
 --------------------\nVolumetric Display With Dual-path Holographic Laser 
 Rendering\n\nWe developed a volumetric display that renders palm-sized 
 volumetric content in real space. A single volume was formed by t
 he cooperative dual-path laser rendering based on the optimal design of th
 e optical system and scanning path. Graphics and animations with well-fill
 ed voxels were de...\n\n\nKota Kumagai (Utsunomiya University, Center for 
 Optical Research and Education); Hisashi Oka, Kazuki Horikiri, and Tetsuji
  Suzuki (JVCKENWOOD Corporation, Future Creation Research Laboratory); and
  Yoshio Hayasaki (Utsunomiya University, Center for Optical Research and E
 ducation)\n---------------------\nSkillPicker: Tweezers for Recording and 
 Training Dexterous Operations\n\nWe propose a tweezers-like device named S
 killPicker to help unskilled experimenters learn precise control of 
 tweezers by presenting the visualized pinch force and a magnified image 
 of the tweezers' tip.\n\n\nNoriyasu Obushi (University of Tokyo), Shohei
  Oshiro (Nara Institute of Science and Technology), Yusei Imai (Tokyo Univ
 ersity of Science), Chikayoshi Matsudaira (University of Tokyo), Saku Kiji
 ma (National Institute of Advanced Industrial Science and Technology), Tak
 ehiko Kanazawa (National Institute for Basic Biology), and Hirokazu Tsukay
 a and Masahiko Inami (University of Tokyo)\n\nRegistration Category: Full 
 Conference, Full Conference Supporter, Experience, Exhibitor Full Conferen
 ce, Exhibitor Experience
END:VEVENT
END:VCALENDAR
