UVR Lab. was formed in Feb. 2001 at GIST to study and develop "Virtual Reality in Smart computing environments" that processes multimodal input, perceives the user's intention and emotion, and responds to the user's requests through Augmented Reality. In 2012, UVR Lab moved to KAIST GSCT and restarted with the theme of "FUN in Ubiquitous VR."
Multiple object recognition (work in progress)
Posted by UVR on 2014-04-18

This study investigates object recognition guided by a computationally modeled account of human visual perception, with the goal of supporting stable object recognition. The results shown in the video come from a prototype that follows this model. The pipeline combines saliency map detection, a first filtering stage based on each object's initial probability value and the sizes of ROIs detected from contours, and a second filtering stage based on relations with surrounding objects using an ontology. Compared with a pipeline that does not follow the model's process, this approach produced relatively stable results. Future work should include more varied experiments and the implementation of stable feature extraction and matching methods. Below is the abstract of the paper, which includes a literature survey on the proposed model.
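The two-stage filtering described above can be sketched in a few lines. This is a minimal illustration only: the candidate list, thresholds, and the toy co-occurrence ontology are assumptions for the example, not the lab's actual implementation.

```python
# Hedged sketch of the two-stage filtering: stage 1 prunes candidates by
# initial probability and contour-ROI size; stage 2 prunes by ontology
# relations with surrounding objects. All values below are illustrative.

# Candidate detections: (label, initial_probability, roi_area_in_pixels)
candidates = [
    ("monitor", 0.82, 5200),
    ("keyboard", 0.75, 1800),
    ("banana", 0.30, 90),     # low prior and tiny ROI -> removed in stage 1
    ("mouse", 0.65, 400),
    ("whale", 0.70, 3000),    # plausible alone, but inconsistent with scene
]

# Toy ontology: objects that commonly co-occur in a desk scene (assumption).
CO_OCCURS = {
    "monitor": {"keyboard", "mouse"},
    "keyboard": {"monitor", "mouse"},
    "mouse": {"monitor", "keyboard"},
}

def stage1_filter(cands, min_prob=0.5, min_area=200):
    """Keep candidates with a plausible initial probability and ROI size."""
    return [c for c in cands if c[1] >= min_prob and c[2] >= min_area]

def stage2_filter(cands):
    """Keep candidates supported by at least one co-occurring neighbour."""
    labels = {c[0] for c in cands}
    return [c for c in cands
            if CO_OCCURS.get(c[0], set()) & (labels - {c[0]})]

survivors = stage2_filter(stage1_filter(candidates))
print([c[0] for c in survivors])  # -> ['monitor', 'keyboard', 'mouse']
```

Stage 2 is where the ontology pays off: "whale" survives the low-level checks but is rejected because nothing in the scene supports it, which mirrors how contextual relations stabilize recognition.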

Youngkyoon Jang, Woontack Woo, "Unified Visual Perception Model for Context-aware Augmented Reality," IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2013, accepted for publication (DC program), Adelaide, SA, Australia, Oct. 1-4, 2013.

We propose the Unified Visual Perception Model (UVPM), which imitates the human visual perception process, for the stable object recognition required for augmented reality (AR) in the field. The proposed model is designed on theoretical bases from cognitive informatics, brain research, and psychological science. It consists of Working Memory (WM), in charge of low-level processing (in a bottom-up manner), and Long-Term Memory (LTM) and Short-Term Memory (STM), which are in charge of high-level processing (in a top-down manner). WM and LTM/STM are mutually complementary, increasing recognition accuracy. By implementing an initial prototype of each box of the model, we confirmed that the proposed model supports stable object recognition. The proposed model can support context-aware AR with an optical see-through HMD.
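The complementary bottom-up/top-down interplay the abstract describes can be sketched as follows. The data structures, weights, and update rules here are assumptions made for illustration; the paper's actual model is richer than this.

```python
# Minimal sketch of the UVPM interplay: WM supplies bottom-up feature
# evidence, LTM supplies learned priors, and STM feeds recent context back
# top-down. Weights and thresholds below are illustrative assumptions.

class LongTermMemory:
    """Stores learned object priors (label -> base probability)."""
    def __init__(self, priors):
        self.priors = dict(priors)

class ShortTermMemory:
    """Holds recently recognised labels to bias the next frame (top-down)."""
    def __init__(self, capacity=5):
        self.recent = []
        self.capacity = capacity

    def remember(self, label):
        self.recent = ([label] + [l for l in self.recent if l != label])[:self.capacity]

    def boost(self, label):
        return 0.2 if label in self.recent else 0.0

def recognise(label, feature_match, ltm, stm, threshold=0.6):
    """Combine bottom-up WM evidence with top-down LTM/STM context."""
    score = (0.5 * feature_match            # WM: low-level feature matching
             + 0.3 * ltm.priors.get(label, 0.0)  # LTM: learned prior
             + stm.boost(label))            # STM: recent-context feedback
    if score >= threshold:
        stm.remember(label)
        return True
    return False

ltm = LongTermMemory({"cup": 0.9, "pen": 0.4})
stm = ShortTermMemory()
print(recognise("cup", 0.7, ltm, stm))  # -> True  (strong evidence + prior)
print(recognise("cup", 0.4, ltm, stm))  # -> True  (weak evidence, STM boost)
```

The second call shows the complementary effect: bottom-up evidence alone (0.4) would fall below threshold, but the top-down STM boost from the previous recognition carries it over, which is the kind of mutual reinforcement between WM and LTM/STM the abstract claims.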
Uploaded by UVRLAB on Aug. 13, 2013


ADD. (34141)KAIST N5 2325, 291 Daehak-ro, Yuseong-gu, Daejeon, Republic of Korea / TEL. +82-42-350-5923