UVR Lab was formed in Feb. 2001 at GIST to study and develop "Virtual Reality in Smart computing environments" that process multimodal input, perceive the user's intention and emotion, and respond to the user's requests through Augmented Reality. In 2012, UVR Lab moved to KAIST GSCT and restarted with the theme of "FUN in Ubiquitous VR."
User self-localization and camera tracking for on-site augmented reality

  • Keywords
    Camera pose estimation, camera tracking, outdoor augmented reality, on-site augmented reality
     
  • Abstract
    In this research, we propose an all-in-one framework for mobile augmented reality. The framework is designed to incorporate computer vision-based technology and 3D content visualization technology. Through this framework, we explain how to create 3D visual data for camera pose estimation and how to connect AR content with an outdoor building. In addition, we suggest a multi-threaded camera tracking and pose estimation model for mobile AR applications; a minimal sketch of this threading scheme follows the abstract. Finally, we confirm the efficiency and reliability of our framework. Through this vision-based AR framework, seamless applications for outdoor AR can be built.
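    The abstract does not give implementation details, so the following is only a minimal, hypothetical Python sketch of a two-thread arrangement in the spirit described above: a fast frame-to-frame tracking loop and a slower pose-estimation (relocalization) loop exchanging data through queues. All helper names and stubs here are illustrative assumptions, not the authors' code.

    # Sketch only: multi-threaded camera tracking + pose estimation for mobile AR.
    import queue
    import threading
    import time

    frame_q = queue.Queue(maxsize=1)   # newest frame handed to the pose thread
    pose_q = queue.Queue(maxsize=1)    # newest global pose handed back

    def estimate_pose(frame, model_3d):
        # Placeholder for 2D-3D matching against the building's visual data
        # (e.g. feature matching followed by PnP + RANSAC).
        return {"R": None, "t": None}

    def track_incrementally(frame, last_pose):
        # Placeholder for cheap frame-to-frame tracking (e.g. optical flow).
        return last_pose

    def pose_thread(model_3d):
        while True:
            frame = frame_q.get()                  # blocks until a frame arrives
            pose = estimate_pose(frame, model_3d)  # slow, runs off the render loop
            if not pose_q.full():
                pose_q.put(pose)

    def tracking_loop(read_frame, model_3d):
        threading.Thread(target=pose_thread, args=(model_3d,), daemon=True).start()
        pose = None
        while True:
            frame = read_frame()
            if not frame_q.full():
                frame_q.put(frame)                 # feed the slower thread
            try:
                pose = pose_q.get_nowait()         # adopt a refined global pose when ready
            except queue.Empty:
                pass
            pose = track_incrementally(frame, pose)
            time.sleep(1 / 30)                     # stand-in for per-frame rendering work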
     
 
Egocentric selection and manipulation of a distant subspace for augmented space

  • Keywords
    Egocentric view, volumetric selection, distant manipulation, gesture interaction, 3D user interface, augmented reality, mixed reality
     
  • Abstract
    We propose a novel volumetric selection technique that enables natural acquisition of a subspace in an augmented scene from an egocentric view, even in scenarios involving ambiguous center objects or object occlusion. In wearable augmented reality, selecting a three-dimensional region containing the objects of interest has become more important than selecting distant objects one by one. However, existing ray-based volumetric selection through a head-worn display makes it difficult to define the desired three-dimensional region because of occlusion and limited depth perception. The proposed method, called TunnelSlice, effectively determines a cuboid transform, excluding unnecessary areas of a user-defined tunnel via two-handed pinch-based procedural slicing from an egocentric view; a geometric sketch of this slicing step follows the abstract. Through six scenarios involving different central-object statuses and occlusion levels, we conducted a user study of TunnelSlice. Compared to two existing approaches, TunnelSlice was preferred by the subjects, showed greater stability across all scenarios, and outperformed the other approaches in a scenario involving strong occlusion without a central object. TunnelSlice is thus expected to serve as a key technology for standard interaction with a subspace in wearable augmented reality. Currently, we focus on an enhanced technique inspired by TunnelSlice for effective manipulation within a selected subspace.
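    The following is a minimal geometric sketch, under our own assumptions rather than the published TunnelSlice formulation: a rectangular tunnel is cast from the eye through a user-defined rectangle, two pinch-defined depths slice it, and a view-aligned cuboid is fitted around the sliced segment. All names and the cuboid-fitting step are illustrative.

    # Sketch only: tunnel + two slicing depths -> view-aligned cuboid.
    import numpy as np

    def tunnel_to_cuboid(eye, tunnel_corners, near_depth, far_depth):
        """eye: (3,) head position; tunnel_corners: (4,3) rectangle of the
        user-defined tunnel on a plane in front of the user; the two depths
        come from the two pinch-based slicing gestures (along the view ray)."""
        eye = np.asarray(eye, dtype=float)
        corners = np.asarray(tunnel_corners, dtype=float)
        view_dir = corners.mean(axis=0) - eye
        view_dir /= np.linalg.norm(view_dir)

        def slice_at(depth):
            # Intersect each eye-to-corner ray with the plane at this depth.
            pts = []
            for c in corners:
                ray = c - eye
                t = depth / np.dot(ray, view_dir)
                pts.append(eye + t * ray)
            return np.array(pts)

        pts = np.vstack([slice_at(near_depth), slice_at(far_depth)])

        # Fit a view-aligned bounding cuboid around the sliced tunnel segment
        # (assumes the view direction is not parallel to world up).
        right = np.cross(view_dir, [0.0, 1.0, 0.0])
        right /= np.linalg.norm(right)
        up = np.cross(right, view_dir)
        axes = np.vstack([right, up, view_dir])      # rows form an orthonormal basis
        local = (pts - eye) @ axes.T                 # corner points in view coordinates
        lo, hi = local.min(axis=0), local.max(axis=0)
        center = eye + ((lo + hi) / 2.0) @ axes      # cuboid centre in world space
        return center, axes, hi - lo                 # centre, orientation, edge lengths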
     
 
Efficient 3D hand tracking in articulation subspaces for the manipulation of virtual objects

  • Keywords
    Computer vision, hand articulations, tracking, hand-based 3D interaction
     
  • Abstract
    We propose an efficient method for model-based 3D tracking of hand articulations observed from an egocentric viewpoint, aimed at supporting the manipulation of virtual objects. Previous model-based approaches optimize non-convex objective functions defined in the 26 Degrees of Freedom (DoF) space of possible hand articulations. In our work, we decompose this space into six articulation subspaces (6 DoFs for the palm and 4 DoFs for each finger); a minimal sketch of this decomposition follows the abstract. We also label each finger with a Gaussian model that is propagated between successive image frames. As confirmed by a number of experiments, this divide-and-conquer approach tracks hand articulations more accurately than existing model-based approaches. At the same time, real-time performance is achieved without the need for GPGPU processing. Additional experiments show that the proposed approach is preferable for supporting the accurate manipulation of virtual objects in VR/AR scenarios.
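    As a rough illustration of the divide-and-conquer idea (not the authors' implementation), the 26-DoF search can be replaced by one 6-DoF palm optimization followed by five independent 4-DoF finger optimizations, each over a placeholder model-fitting cost. The cost function and optimizer choice below are assumptions.

    # Sketch only: per-subspace optimization of a 26-DoF hand pose.
    import numpy as np
    from scipy.optimize import minimize

    N_FINGERS = 5

    def render_and_score(full_pose, observation):
        # Placeholder objective: discrepancy between the hand model rendered
        # at `full_pose` (26 values) and the observed image/depth data.
        return float(np.sum(full_pose ** 2))        # stand-in for the real cost

    def track_frame(observation, prev_pose):
        """prev_pose: 26-vector = 6 palm DoFs + 5 * 4 finger DoFs."""
        pose = prev_pose.copy()

        # 1) Palm subspace (6 DoFs), fingers frozen at their previous values.
        def palm_cost(palm):
            candidate = pose.copy(); candidate[:6] = palm
            return render_and_score(candidate, observation)
        pose[:6] = minimize(palm_cost, pose[:6], method="Powell").x

        # 2) Each finger subspace (4 DoFs), conditioned on the updated palm.
        for f in range(N_FINGERS):
            lo, hi = 6 + 4 * f, 6 + 4 * (f + 1)
            def finger_cost(theta, lo=lo, hi=hi):
                candidate = pose.copy(); candidate[lo:hi] = theta
                return render_and_score(candidate, observation)
            pose[lo:hi] = minimize(finger_cost, pose[lo:hi], method="Powell").x

        return pose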
     
 
Smartwatch-assisted robust freehand virtual object manipulation in HMD-based augmented reality

  • Keywords
    Augmented reality, virtual object manipulation, 3D user interfaces, sensor fusion
     
  • Abstract
    We introduce a smartwatch-assisted sensor fusion approach to robustly track 6-DOF hand movement in a head-mounted display (HMD)-based augmented reality (AR) environment, which can be used for robust 3D object manipulation. Our method uses a wrist-worn smartwatch together with an HMD-mounted depth sensor to robustly track the 3D position and orientation of the user's hand. We introduce an HMD-based augmented reality platform with a smartwatch and a method to accurately calibrate the orientation between the smartwatch and the HMD; a minimal sketch of this fusion step follows the abstract. We also implement a natural 3D object manipulation system using the 6-DOF hand tracker with hand grasping detection. Our proposed system is easy to use and does not require any handheld devices.
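    The sketch below is an assumed formulation, not the published system: the hand position is taken from the HMD-mounted depth sensor, the hand orientation from the smartwatch IMU, and a calibrated rotation offset maps the watch orientation into the HMD frame. The calibration routine and quaternion conventions are illustrative.

    # Sketch only: smartwatch/depth-sensor fusion for a 6-DoF hand pose.
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def calibrate_offset(q_watch_world, q_hand_hmd):
        """One-shot calibration: the user holds a known pose so the hand
        orientation seen by the HMD and the watch orientation are captured
        together; the offset maps watch readings into the HMD frame."""
        return R.from_quat(q_hand_hmd) * R.from_quat(q_watch_world).inv()

    def fuse(p_hand_depth, q_watch_world, R_calib):
        """6-DoF hand pose in the HMD frame.
        p_hand_depth: (3,) palm position from the depth sensor (HMD frame)
        q_watch_world: (4,) smartwatch orientation quaternion (x, y, z, w)"""
        R_hand = R_calib * R.from_quat(q_watch_world)   # orientation in HMD frame
        T = np.eye(4)
        T[:3, :3] = R_hand.as_matrix()
        T[:3, 3] = p_hand_depth
        return T                                        # 4x4 hand pose matrix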
     
 
Understanding hand-object manipulation using computer vision

  • Keywords
    Hand tracking, object recognition, hand gesture recognition, machine learning
     
  • Abstract
    Our goal is to automate the understanding of natural hand-object manipulation by developing computer vision-based techniques. Our hypothesis is that it is necessary to model the grasp types of hands and the attributes of manipulated objects in order to accurately recognize manipulation actions. Specifically, we focus on recognizing hand grasp types, object attributes, and actions from a single image within a unified model. First, we explore the contextual relationship between grasp types and object attributes and show how that context can be used to boost the recognition of both. Second, we propose to model actions with grasp types and object attributes, based on the hypothesis that grasp types and object attributes contain complementary information for characterizing different actions; a minimal sketch of such a joint model follows the abstract. Our proposed action model outperforms traditional appearance-based models, which are not designed to take into account semantic constraints such as grasp types or object attributes.
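    The following is an illustrative sketch, not the paper's model: grasp type and object attribute are scored jointly through a learned compatibility (context) term, and the resulting semantic scores feed a linear action classifier instead of raw appearance features. All variable names and the linear form are assumptions.

    # Sketch only: joint grasp/attribute inference plus a semantic action model.
    import numpy as np

    def joint_inference(grasp_scores, attr_scores, compat):
        """grasp_scores: (G,) per-class grasp scores; attr_scores: (A,);
        compat: (G, A) learned grasp-attribute co-occurrence (context) term."""
        joint = grasp_scores[:, None] + attr_scores[None, :] + compat
        g, a = np.unravel_index(np.argmax(joint), joint.shape)
        return g, a                      # mutually consistent grasp and attribute

    def action_scores(grasp_scores, attr_scores, W_action, b_action):
        """Linear action model over the concatenated semantic scores,
        rather than over raw image appearance."""
        feats = np.concatenate([grasp_scores, attr_scores])
        return W_action @ feats + b_action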