Research Topics
Hybrid 3D Hand Articulations Tracking Guided by Classification and Search Space Adaptation
Egocentric view, Volumetric selection, Distant manipulation, Gesture interaction, 3D user interface, Augmented Reality, Mixed Reality
We propose a novel method for model-based 3D tracking of hand articulations that remains robust under fast-moving hand postures in depth images. A large body of augmented reality (AR) and virtual reality (VR) research has used model-based approaches to estimate hand postures and track movements. However, these approaches break down when the hand moves quickly or leaves the camera's field of view. To overcome these problems, researchers have tried hybrid strategies that use multiple model initializations for 3D articulation tracking, but such strategies still have limitations. For example, in genetic optimization, the hypotheses generated from the previous solution may search in the wrong search space when the hand moves quickly. The same problem occurs when the search space chosen from the output of a trained model does not cover the true solution, even if the hand moves slowly. Our proposed method estimates the hand pose through model-based tracking guided by classification and search space adaptation. Using the classification produced by a convolutional neural network (CNN), a data-driven prior is added to the objective function and additional hypotheses are generated in particle swarm optimization (PSO). In addition, the search spaces of the two hypothesis sets are adaptively updated using the distribution of each set. We demonstrate the usefulness of the proposed method by applying it to an American Sign Language (ASL) dataset consisting of fast-moving hand postures. The experimental results show that the proposed algorithm tracks more accurately than other state-of-the-art tracking algorithms.
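As a rough illustration of how a classification-guided PSO step of this kind might look, the sketch below seeds one hypothesis set around the previous frame's solution and a second set around the CNN-classified pose, and adds a prior term to the objective. All names (`energy`, `cnn_prior_pose`) and the particle counts, weights, and fixed Gaussian spread are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def track_frame(observed_depth, prev_pose, cnn_prior_pose, energy,
                n_particles=64, n_prior=16, iters=30,
                w=0.72, c1=1.5, c2=1.5, lam=0.1):
    """One PSO tracking step with a data-driven prior (illustrative sketch).

    energy(pose, observed_depth) -> scalar model-fit error (assumed given).
    cnn_prior_pose: pose hypothesis derived from the CNN classification.
    lam: weight of the prior term added to the objective.
    """
    dim = prev_pose.size
    # Two hypothesis sets: one seeded around the previous solution,
    # one seeded around the CNN-classified pose (fast-motion recovery).
    prev_set = prev_pose + 0.05 * np.random.randn(n_particles - n_prior, dim)
    prior_set = cnn_prior_pose + 0.05 * np.random.randn(n_prior, dim)
    x = np.vstack([prev_set, prior_set])
    v = np.zeros_like(x)

    def objective(p):
        # Model-fit error plus a data-driven prior pulling toward the CNN pose.
        return energy(p, observed_depth) + lam * np.sum((p - cnn_prior_pose) ** 2)

    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest
```

In the actual method the spread of each hypothesis set is adapted from the distribution of that set's own particles; here a fixed Gaussian spread stands in for that adaptation step.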
Smartwatch-assisted robust freehand virtual object manipulation in HMD-based augmented reality
Augmented reality, virtual object manipulation, 3D user interfaces, sensor fusion
We introduce a smartwatch-assisted sensor fusion approach that robustly tracks 6-DOF hand movement in a head-mounted display (HMD) based augmented reality (AR) environment, which can be used for robust 3D object manipulation. Our method combines a wrist-worn smartwatch with an HMD-mounted depth sensor to robustly track the 3D position and orientation of the user's hand. We introduce an HMD-based augmented reality platform that incorporates a smartwatch, together with a method to accurately calibrate the orientation between the smartwatch and the HMD. We also implement a natural 3D object manipulation system that uses the 6-DOF hand tracker with hand-grasping detection. The proposed system is easy to use and does not require any handheld devices.
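A minimal sketch of the kind of orientation calibration and fusion this implies, assuming the smartwatch reports an IMU orientation quaternion and the HMD's depth sensor provides the 3D hand position plus one matched orientation sample during a calibration pose. The function names and the single-sample calibration are illustrative assumptions, not the system's actual procedure.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def calibrate_offset(watch_quat, hmd_hand_quat):
    """Estimate the fixed rotation offset between the smartwatch IMU frame
    and the HMD (depth-sensor) frame from one matched sample, e.g. taken
    while the user holds a known calibration pose.
    Quaternions are in (x, y, z, w) order."""
    q_watch = R.from_quat(watch_quat)
    q_hmd = R.from_quat(hmd_hand_quat)
    # offset maps watch orientation into the HMD frame: q_hmd = offset * q_watch
    return q_hmd * q_watch.inv()

def fused_hand_pose(offset, watch_quat, depth_hand_position):
    """Fuse the smartwatch orientation (robust to hand occlusion) with the
    depth sensor's 3D hand position into a 6-DOF pose (position, quaternion)."""
    orientation = offset * R.from_quat(watch_quat)
    return np.asarray(depth_hand_position), orientation.as_quat()
```

The division of labor follows the abstract's premise: the wrist-worn IMU keeps orientation stable even when the depth sensor loses or occludes the hand, while the depth sensor anchors the absolute 3D position.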
Projects
- Ongoing
Human reconstruction for telepresent interaction
Starting with Microsoft Holoportation and Facebook Spaces, research on telepresence systems for remote interaction between users has been actively conducted. However, many prototype systems are limited because they require unrealistic hardware configurations or provide only a weak sense of telepresence. In this project, we study and develop the research topics needed for reconstruction technology, a key factor in providing a high degree of telepresence. Our system is based on commodity sensors that acquire 3D information about the human body, such as the Microsoft Kinect or Intel RealSense.
Hand tracking, Telepresence, Interaction with remote users
(1) Body and facial reconstruction – Reconstruct the user's body and facial mesh from the input images of an RGB-D camera, and extract the user's motion through a deep-learning-based skeleton extraction method. (2) Hand reconstruction – Reconstruct the user's hand mesh and extract hand motion from the input images of a small RGB-D camera attached to the AR headset, and improve the precision of reconstruction and motion extraction using information from a wrist-worn IMU sensor. (3) Gaze tracking – Estimate the pupil position and track the user's gaze using the input images of a near-infrared camera attached to the AR headset, and detect facial expressions based on feature points (a minimal pupil-detection sketch follows below).
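As an illustration of the gaze-tracking component, a common baseline for pupil localization in near-infrared eye images is dark-blob detection: under NIR illumination the pupil appears as the darkest region. The sketch below (using OpenCV, with an assumed intensity threshold of 40) is one minimal version of that idea, not the project's actual pipeline.

```python
import cv2

def estimate_pupil_center(nir_eye_image):
    """Estimate the pupil center in a grayscale near-infrared eye image.
    Threshold the dark pupil blob, take the largest contour, and return
    its centroid as (x, y), or None if no blob is found."""
    blurred = cv2.GaussianBlur(nir_eye_image, (7, 7), 0)
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```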
(1) Virtual classroom – Can improve presence and learning effectiveness through gaze alignment between the instructor and the learner. (2) Tele-medicine – Can provide medical services to remote patients. (3) Virtual personal training (PT) – Can let the learner observe the instructor's demonstration pose from any desired viewpoint, and can be extended to sports where posture accuracy is important, such as golf or yoga.
Projects
- Past
Development of a 360-degree VR content authoring tool platform based on global street view and spatial information – Ministry of Science and ICT / ICT & Broadcasting R&D Program, 2019.07–2020.12
User reconstruction technology for hyper-realistic remote virtual interaction – Ministry of Science, ICT and Future Planning / Next-Generation Information Computing Technology Development Program, 2017.09–2020.12
Address. (34141) KAIST N5 #2325,
291 Daehak-ro, Yuseong-gu,
Daejeon,
Republic of Korea
Phone. +82-42-350-5923