UVR Lab. was formed in Feb. 2001 at GIST to study and develop “Virtual Reality in Smart computing environments” that process multimodal input, perceive users’ intentions and emotions, and respond to users’ requests through Augmented Reality. Since 2012, UVR Lab has been based at KAIST GSCT, restarting with the theme of “FUN in Ubiquitous VR.”
Hybrid 3D Hand Articulations Tracking Guided by Classification and Search Space Adaptation

  • Keywords
    Egocentric view, Volumetric selection, Distant manipulation, Gesture interaction, 3D user interface, Augmented Reality, Mixed Reality
     
  • Abstract
    We propose a novel method for model-based 3D tracking of hand articulations that remains robust to fast-moving hand postures in depth images. A large body of augmented reality (AR) and virtual reality (VR) research has used model-based approaches to estimate hand postures and track movements. However, these approaches have limitations when the hand moves quickly or leaves the camera’s field of view. To overcome these problems, researchers have tried a hybrid strategy that uses multiple model initializations for 3D tracking of articulations, but this strategy still has limitations. For example, in genetic optimization, hypotheses generated from the previous solution may search in the wrong region of the search space when the hand posture changes quickly. The same problem occurs when the search space chosen from the output of a trained model does not cover the true solution, even if the hand moves slowly. Our proposed method estimates the hand pose through model-based tracking guided by classification and search space adaptation. Using the classification output of a convolutional neural network (CNN), a data-driven prior is added to the objective function and additional hypotheses are generated in particle swarm optimization (PSO). In addition, the search spaces of the two hypothesis sets are adaptively updated using the distribution of each set. We demonstrate the usefulness of the proposed method by applying it to an American Sign Language (ASL) dataset consisting of fast-moving hand postures. Experimental results show that the proposed algorithm tracks more accurately than other state-of-the-art tracking algorithms.
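The idea of adding a data-driven prior to the optimizer's objective and seeding extra hypotheses near a classifier's prediction can be sketched with a generic PSO. This is an illustrative toy, not the paper's implementation: the objective, the prior weight, and all swarm parameters are assumptions.

```python
import random

def pso_with_prior(objective, prior, dim, n_particles=30, iters=200,
                   w=0.7, c1=1.5, c2=1.5, prior_weight=0.5, seed=0):
    """Minimal PSO whose objective is augmented with a data-driven prior
    term, loosely mirroring the guided-tracking idea: one hypothesis set
    is seeded near the previous solution (here: the origin), the other
    near the classifier's prediction (`prior`). All values illustrative."""
    rng = random.Random(seed)

    def score(x):
        # model-fit error plus a penalty for straying from the prior
        return objective(x) + prior_weight * sum(
            (a - b) ** 2 for a, b in zip(x, prior))

    half = n_particles // 2
    particles = ([[rng.gauss(0.0, 1.0) for _ in range(dim)]
                  for _ in range(half)] +
                 [[p + rng.gauss(0.0, 0.2) for p in prior]
                  for _ in range(n_particles - half)])
    vels = [[0.0] * dim for _ in particles]
    pbest = [list(p) for p in particles]
    pbest_val = [score(p) for p in particles]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]

    for _ in range(iters):
        for i, p in enumerate(particles):
            for d in range(dim):
                vels[i][d] = (w * vels[i][d]
                              + c1 * rng.random() * (pbest[i][d] - p[d])
                              + c2 * rng.random() * (gbest[d] - p[d]))
                p[d] += vels[i][d]
            v = score(p)
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = list(p), v
                if v < gbest_val:
                    gbest, gbest_val = list(p), v
    return gbest
```

With a toy quadratic error surface and a prior near the true pose, the second hypothesis set lets the swarm recover even when the previous-solution set starts far away.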
     
 
Smartwatch-assisted robust freehand virtual object manipulation in HMD-based augmented reality

  • Keywords
    Augmented reality, virtual object manipulation, 3D user interfaces, sensor fusion
     
  • Abstract
    We introduce a smartwatch-assisted sensor fusion approach that robustly tracks 6-DOF hand movement in a head-mounted display (HMD)-based augmented reality (AR) environment, enabling robust 3D object manipulation. Our method combines a wrist-worn smartwatch with an HMD-mounted depth sensor to track the 3D position and orientation of the user’s hand. We present an HMD-based AR platform incorporating the smartwatch, along with a method to accurately calibrate the orientation between the smartwatch and the HMD. We also implement a natural 3D object manipulation system using the 6-DOF hand tracker with hand-grasping detection. The proposed system is easy to use and does not require any handheld devices.
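One plausible way to calibrate a fixed rotational offset between two orientation sensors, sketched here under the assumption `watch = hmd * offset` with (w, x, y, z) quaternions, is to average the relative quaternion over paired samples. The paper's actual calibration procedure may differ.

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def qnorm(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def calibrate_offset(hmd_qs, watch_qs):
    """Estimate the fixed offset between frames: each sample pair yields
    offset ~= conj(hmd) * watch, and we average over all samples."""
    acc = [0.0, 0.0, 0.0, 0.0]
    ref = None
    for h, s in zip(hmd_qs, watch_qs):
        rel = qmul(qconj(h), s)
        if ref is None:
            ref = rel
        # q and -q encode the same rotation; align signs before averaging
        if sum(r * f for r, f in zip(rel, ref)) < 0:
            rel = tuple(-c for c in rel)
        acc = [a + c for a, c in zip(acc, rel)]
    return qnorm(acc)
```

Averaging over many paired readings suppresses per-sample IMU noise; the sign alignment step is needed because a quaternion and its negation represent the same rotation.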
     
 
Deep Estimation of Natural Illumination from a Single RGB-D Image

  • Keywords
    Light estimation, Deep learning, Computer graphics, Photo-realistic rendering, Augmented Reality
     
  • Abstract
    We propose a deep learning-based method to directly infer natural illumination as a high dynamic range (HDR) environment map from a single low dynamic range (LDR) image captured by a consumer RGB-D camera in real time. To provide a more immersive experience in augmented reality (AR), it is important to render virtual objects consistently with the real environment, and photo-realistic rendering in AR requires an estimate of the natural illumination. Previous methods, however, required additional cameras or considerable computation time to recover the lighting condition. In this work, we design an end-to-end network that estimates an HDR image of the distant illumination from a single LDR image with a limited field of view (FoV).
     
 
Object Identification and Localization based on Monocular RGB Images for AR/VR Application

  • Keywords
    Object recognition and localization in AR/VR environments, Hand and Object tracking using CV, Physical Computing
     
  • Abstract
    We present an object detection and pose estimation framework integrated into a simultaneous localization and mapping (SLAM) system using an RGB camera. Visual SLAM is one of the key technologies for aligning the virtual and real worlds in Augmented Reality applications, and visual SLAM approaches have shown their robustness and accuracy in recent years. However, several challenges remain in performing object detection or pose estimation simultaneously with visual SLAM. We use a novel method for detecting 3D model instances and estimating their 6D poses from RGB data in an initial frame, in order to generate a local coordinate frame for each object. To this end, we extend the popular semantic segmentation paradigm to cover the full 6D pose space with the proposed descriptors and train on synthetic model data only. Our approach competes with or surpasses current state-of-the-art methods that leverage RGB-D data and SSD-based approaches on multiple challenging datasets. Furthermore, our method produces these results at around 45 Hz, which is many times faster than related methods.
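Once SLAM provides the camera pose and the detector provides an object's 6D pose in the camera frame, anchoring virtual content in the object's local coordinate frame reduces to composing the two rigid transforms. A minimal sketch (the 4x4 row-major convention and the translation-only example poses are assumptions for illustration):

```python
def matmul4(A, B):
    """Compose two 4x4 row-major transforms: returns A * B."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply_transform(T, p):
    """Apply a 4x4 rigid transform to a 3D point (homogeneous coord 1)."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

def translation(x, y, z):
    """Pure-translation transform, used only to build the toy example."""
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

# Object frame in world space: T_world_obj = T_world_cam * T_cam_obj
T_world_cam = translation(1.0, 0.0, 0.0)  # camera pose from SLAM (example)
T_cam_obj = translation(0.0, 2.0, 0.0)    # object pose from detector (example)
T_world_obj = matmul4(T_world_cam, T_cam_obj)
```

Any virtual annotation expressed in the object's local frame can then be mapped into the SLAM world frame via `apply_transform(T_world_obj, point)`.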
     
 
Context-aware Risk Management for Architectural Heritage using VR and AR

  • Keywords
    HBIM, Cultural Heritage Management, Risk Management, Virtual Reality, Augmented Reality
     
  • Abstract
    To address the problems of scattered data, a shortage of professionals in risk management work, and the limited portability of HBIM (Historic Building Information Modeling) systems, this research aims to create a risk management system for on-site and remote risk management using augmented reality (AR) and virtual reality (VR). We focus on the design of a metadata and database structure, built around point of interest (PoI), anchor, and content metadata, for advanced context-aware information retrieval in AR and VR environments. We also propose a system architecture for interoperating the HBIM system with AR and VR applications.
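To illustrate how PoI, anchor, and content metadata could be related, here is a minimal sketch. All field names, types, and example values are assumptions made for illustration, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ContentMetadata:
    content_id: str
    media_type: str           # e.g. "image", "3d-scan", "inspection-report"
    uri: str
    risk_category: str = ""   # e.g. "crack", "moisture" (illustrative)

@dataclass
class Anchor:
    anchor_id: str
    content_id: str           # which content item this anchor places
    position: Tuple[float, float, float]          # in the HBIM local frame
    rotation: Tuple[float, float, float, float]   # quaternion (w, x, y, z)

@dataclass
class PointOfInterest:
    poi_id: str
    name: str
    latitude: float
    longitude: float
    anchors: List[Anchor] = field(default_factory=list)
    contents: List[ContentMetadata] = field(default_factory=list)

# A PoI aggregates the anchors and content needed for AR/VR retrieval;
# the coordinates and names below are fabricated example data.
poi = PointOfInterest(
    poi_id="poi-001", name="East pagoda", latitude=35.84, longitude=127.12,
    contents=[ContentMetadata("c-1", "image",
                              "https://example.org/c1.jpg", "crack")],
    anchors=[Anchor("a-1", "c-1", (0.0, 1.2, 0.0), (1.0, 0.0, 0.0, 0.0))],
)
```

Structuring retrieval around the PoI lets an AR client fetch everything anchored at the user's location in one query, while a VR client can walk the same records remotely.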
     
 
Authoring Personal Interpretation in a 3D Virtual Heritage Site to Enhance Visitor Engagement

  • Keywords
    Virtual heritage, Personal interpretation, 3D virtual heritage site, Engagement, Mixed Reality
     
  • Abstract
    Conventional approaches to a virtual heritage site that provide interpretations through curated content allow visitors only the lowest level of engagement: paying conscious, intentional attention to the content. In this paper, we propose a trajectory for the visitor experience at a virtual heritage site that facilitates a higher level of engagement by letting visitors make their own interpretations. We developed a mobile virtual reality (VR) application that delivers this trajectory, allowing visitors to author mixed reality (MR) content that represents their personal interpretations. The application provides three types of virtual assets for composing MR content: historical assets, emotional assets, and personal assets. We describe how visitors generally followed our trajectory and used the virtual assets, engaging with virtual heritage and making interpretations. We relate our findings to a discussion of how to support personal engagement and rich interpretation in a virtual heritage site.
     
 
Design Guidelines for a Location-based Digital Heritage Storytelling Tool to Support Author Intent

  • Keywords
    Digital heritage; Mixed Reality; Digital storytelling; Location-based MR storytelling; Authoring tool; Geotagged content
     
  • Abstract
    This paper proposes guidelines for the design of a Mixed Reality (MR) storytelling tool for cultural heritage sites that utilizes geotagged content and detailed location-specific narrative principles while prioritizing the intent of the author. Continuous efforts have been made to apply storytelling techniques in producing location-based digital heritage content over MR platforms. However, consideration for fine-tuned input parameters required at the authoring level has been consistently lacking. This has resulted in content that fails to identify, understand, and reflect the goals and needs of the author in maximizing the benefits of the narrative form for MR heritage experiences. To address this problem, we combine an analysis of existing location-based digital authoring tools with qualitative user studies conducted in a digital storytelling workshop. With the implications derived from our findings, we establish detailed design guidelines to provide a systematic narrative structure in the arrangement of geotagged content over various Points of Interest within the MR heritage space. Our study identifies two major authoring motivations for location-based MR storytelling: space-driven and story-driven. We thereby assert the need to bifurcate the design of the tool to support both these purposes and propose guidelines that differentiate the functions and task flow of each authoring mode.
     
 
Effect of Applying Film-induced Tourism to Virtual Reality Tour of Cultural Heritage Sites

  • Keywords
    Cultural heritage site, Virtual reality, Tourism storytelling, Film-induced tourism
     
  • Abstract
    Despite various efforts to provide visually immersive experiences and rich information about virtual cultural heritage sites, few studies have investigated how to effectively present media related to cultural heritage sites in virtual reality, or how to measure the effectiveness of the virtual tourism experience in improving affect toward, and intention to visit, the actual tourist destination. In this study, we examined the effects of a video-based virtual tour, designed around film-induced tourism, on post-VR attitude and behavioral visit intention toward a cultural heritage site, compared with a basic virtual tour. We also aimed to determine how to effectively present broadcast content as a storytelling medium in a virtual reality tour, considering the characteristics of a cultural heritage site. Through the experiment, we showed that the video-based virtual experience is more effective in improving visitors’ positive attitude and visit intention than the basic virtual tour (which was nevertheless also effective), and we derived seven design implications for the use of location-based broadcast video clips.