4 research outputs found

    Augmented Reality based 3D Human Hands Tracking from Monocular True Images Using Convolutional Neural Network

    Precise hand tracking from a monocular moving camera, using calibration parameters and semantic cues, remains an active research area due to limited accuracy and high computational overhead. In this context, deep learning based frameworks, i.e. convolutional neural network (CNN) based tracking of human hands together with recognition of hand pose in the current camera frame, have become an active research problem. In addition, tracking with a monocular camera needs to be addressed in light of updated technology such as the Unity3D engine and related augmented reality plugins. This research aims to track human hands across continuous frames and to use the tracked points to draw a 3D model of the hands as an overlay on the original tracked image. In the proposed methodology, the Unity3D environment was used for localizing the hand object in augmented reality (AR). A convolutional neural network was then used to detect the hand palm and hand keypoints from a cropped region of interest (ROI). The proposed method achieved an accuracy rate of 99.2% on single monocular true images. Experimental validation shows the efficiency of the proposed methodology.
    Peer reviewed
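One implicit step in the pipeline described above is coordinate mapping: if the CNN predicts keypoints in normalized coordinates of the cropped ROI, they must be mapped back to full-frame pixel coordinates before the 3D hand overlay can be drawn on the original image. A minimal sketch of that mapping, with hypothetical names (`roi_to_frame`, the keypoint list) not taken from the paper:

```python
# Illustrative assumption: the CNN outputs (u, v) keypoints normalized
# to [0, 1] within the cropped ROI. To draw the overlay on the original
# frame, each keypoint is rescaled by the ROI size and shifted by the
# ROI's top-left corner in the full image.

def roi_to_frame(keypoints, roi_x, roi_y, roi_w, roi_h):
    """Map ROI-normalized (u, v) keypoints to absolute pixel
    coordinates in the original camera frame."""
    return [(roi_x + u * roi_w, roi_y + v * roi_h) for u, v in keypoints]

# Example: a 100x80 ROI cropped at (200, 150). A keypoint predicted at
# the ROI center maps to (250, 190) in the full frame.
frame_pts = roi_to_frame([(0.5, 0.5), (0.0, 1.0)], 200, 150, 100, 80)
```

The mapped points can then be handed to the AR renderer (e.g. a Unity3D overlay) in the frame's own pixel space.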

    Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications

    3D gesture recognition and tracking based on augmented reality (AR) and virtual reality (VR) have attracted strong research interest thanks to advances in smartphone technology. By interacting with 3D objects in AR and VR, users gain a better understanding of the subject matter, although customized hardware support is often required and overall experimental performance must be satisfactory. This research surveys current vision-based 3D gestural architectures for AR and VR. Its core goal is to analyze methods and frameworks, followed by experimental performance, for recognizing and tracking hand gestures and interacting with virtual objects on smartphones. The experimental evaluation of existing methods is categorized into three groups: hardware requirements, documentation before the actual experiment, and datasets. These categories are expected to ensure robust validation for the practical use of AR/VR-based 3D gesture tracking. The hardware setup covers types of gloves, fingerprint sensing, and other sensor types. Documentation covers classroom setup manuals, questionnaires, recordings for improvement, and stress-test applications. The last part of the experimental section covers the datasets used by existing research. This comprehensive overview of methods, frameworks, and experimental aspects can contribute significantly to AR- and VR-based 3D gesture recognition and tracking.
    Peer reviewed

    Augmented reality image generation with virtualized real objects using view-dependent texture and geometry

    ISMAR 2013: IEEE and ACM International Symposium on Mixed and Augmented Reality, Oct 1-4, 2013, Adelaide, Australia
    Augmented reality (AR) images with virtualized real objects can be used in various applications. However, generating such AR images requires hand-crafted 3D models of those objects, which are usually not available. This paper proposes a view-dependent texture (VDT) and view-dependent geometry (VDG) based method for generating high-quality AR images, which uses 3D models automatically reconstructed from multiple images. Since the quality of reconstructed 3D models is usually insufficient, the proposed method inflates the objects in the depth map (VDG) to repair chipped object boundaries, and assigns a color to each pixel based on VDT to reproduce the detail of the objects. Background pixel exposure due to inflation is suppressed by using the foreground region extracted from the input images. Our experimental results demonstrate that the proposed method successfully reduces the above visual artifacts.
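The "inflation" idea above can be sketched as a morphological dilation of the depth map, constrained by a foreground mask so that background pixels are not exposed. This is a hedged illustration of the general technique, not the paper's exact VDG algorithm; the function name, grid layout, and neighborhood rule are assumptions:

```python
# Sketch, assuming: depth is a 2D grid of depth values (0 = no geometry),
# foreground is a same-sized 0/1 mask from the input images. Empty
# foreground pixels are filled with the maximum depth found in their
# neighborhood, growing object boundaries outward by `radius` pixels.

def inflate_depth(depth, foreground, radius=1):
    """Grow non-zero depth values outward, but only into pixels
    marked as foreground (suppressing background exposure)."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if depth[y][x] == 0 and foreground[y][x]:
                best = 0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            best = max(best, depth[ny][nx])
                out[y][x] = best  # fill the chipped boundary pixel
    return out

# A one-pixel chip at (1, 2) inside the foreground gets filled, while
# pixels outside the foreground mask are left untouched.
depth = [[0, 0, 0, 0],
         [0, 5, 0, 0],
         [0, 5, 5, 0],
         [0, 0, 0, 0]]
fg    = [[0, 1, 1, 0],
         [1, 1, 1, 1],
         [1, 1, 1, 1],
         [0, 1, 1, 0]]
filled = inflate_depth(depth, fg)
```

In the paper's pipeline the inflated geometry would then be textured per pixel from the view-dependent texture; the sketch only covers the geometric repair step.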