7 research outputs found

    A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences

    This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movement from a depth image sequence is challenging: the depth ambiguity caused by self-occlusions must be resolved, and it is difficult to recover from tracking failure. Human body pose can be estimated through model fitting using dense correspondences between the depth data and an articulated human model (the local optimization method). Although this usually achieves high accuracy thanks to the dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking anatomical landmarks (key-points) of the human body using low-level depth image analysis. While this key-point based method is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework for integrating the pose estimation results of the key-point based and local optimization methods. Experimental results and a performance comparison are presented to demonstrate the effectiveness of the proposed approach.
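
    The fusion idea in this abstract can be illustrated with a minimal sketch. Assuming, purely for illustration (the paper's actual model is richer), that the key-point based and local-optimization estimates of a joint coordinate are independent Gaussians, a Bayesian combination reduces to precision-weighted averaging:

```python
import numpy as np

def fuse_estimates(x_kp, var_kp, x_opt, var_opt):
    """Precision-weighted fusion of two independent Gaussian pose
    estimates: a generic Bayesian combination, not the paper's
    actual formulation."""
    w_kp = 1.0 / var_kp    # precision of the key-point estimate
    w_opt = 1.0 / var_opt  # precision of the local-optimization estimate
    fused = (w_kp * x_kp + w_opt * x_opt) / (w_kp + w_opt)
    fused_var = 1.0 / (w_kp + w_opt)
    return fused, fused_var

# Hypothetical 1D joint coordinate: a robust but coarse key-point
# estimate and an accurate but drift-prone local-optimization estimate.
fused, fused_var = fuse_estimates(np.array([1.0]), 4.0, np.array([1.2]), 1.0)
```

    The fused estimate lies closer to the lower-variance (local optimization) input, while the key-point estimate keeps the tracker anchored when the optimizer drifts.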

    Efficient Estimation of Human Upper Body Pose in Static Depth Images

    Automatic estimation of human pose has long been a goal of computer vision, and a solution would have a wide range of applications. In this paper, we formulate the pose estimation task within a regression and Hough voting framework to predict 2D joint locations from depth data captured by a consumer depth camera. In our approach, the offset from each pixel to the location of each joint is predicted directly using random regression forests. The predictions are accumulated in Hough images, which are treated as likelihood distributions in which maxima correspond to joint location hypotheses. Our approach is evaluated with good results on a publicly available dataset. © Springer-Verlag Berlin Heidelberg 2013
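
    The voting scheme described above can be sketched as follows. This is a toy illustration, not the paper's implementation: the per-pixel offsets here are hard-coded stand-ins for the random regression forest's predictions, and the Hough image is a plain accumulator whose maximum gives the joint hypothesis.

```python
import numpy as np

def hough_vote(pixels, offsets, shape):
    """Accumulate per-pixel offset votes for one joint into a Hough
    image; the maximum of the image is the joint-location hypothesis."""
    acc = np.zeros(shape)
    for (y, x), (dy, dx) in zip(pixels, offsets):
        vy, vx = y + dy, x + dx        # vote cast by this pixel
        if 0 <= vy < shape[0] and 0 <= vx < shape[1]:
            acc[vy, vx] += 1.0
    return acc

# Three depth pixels whose (hypothetical) predicted offsets all point
# at the same joint location.
pixels  = [(2, 2), (5, 1), (0, 4)]
offsets = [(1, 1), (-2, 2), (3, -1)]
acc = hough_vote(pixels, offsets, (8, 8))
joint = np.unravel_index(np.argmax(acc), acc.shape)   # → (3, 3)
```

    In practice each vote would also carry a forest-derived weight, and the accumulator would be smoothed before taking maxima.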

    A Literature Study on Human Motion Analysis Using Depth Imagery

    Analysis of human behavior through visual information is a highly active research topic in the computer vision community. In the literature, this analysis has traditionally been performed on images from conventional cameras; recently, however, depth sensors have been used to obtain a new type of image known as the depth image. Human motion analysis can be applied to many domains, such as security surveillance in public spaces, shopping centers and airports. Home care for elderly people and children can use live video streaming from an integrated home monitoring system to prompt timely assistance. Moreover, automatic human motion analysis can be used in Human–Computer/Robot Interaction (HCI/HRI), video retrieval, virtual reality, computer gaming and many other fields. Human motion analysis using a depth sensor is still a new research area. Most work has focused on motion capture of articulated body skeletons, although the research community is showing growing interest in higher-level, action-related research. This report explains the advantages of depth imagery and then describes the new category of depth sensors, such as the Microsoft Kinect, that make high-resolution real-time depth images cheaply available. The main published research on the use of depth imagery for analyzing human activity is reviewed; since the recognition of human actions is a growing research area, the existing work focuses mainly on body part detection and pose estimation. The publicly available datasets that include depth imagery are listed, and the software libraries available for depth sensors are described. With the development of depth sensors, an increasing number of algorithms have employed depth data in vision-based human action recognition, and the increasing availability of these sensors is broadening the scope for future research.
    This report provides an overview of this emerging field, followed by the various vision-based algorithms used for human motion analysis.

    Selective joint motion recognition using multi sensor for salat learning

    Over the past few years, significant attention has been given to motion recognition in computer vision, as it has a wide range of potential applications. Hence, a wide variety of algorithms and techniques have been proposed for developing human motion recognition systems. Salat, an essential ritual in the daily life of Muslims, is not solely a spiritual act: it also involves physical movements that must be performed according to its code of conduct. Existing motion recognition approaches are unsuitable for salat, since its movements must follow the stipulated rules and procedures with the required accuracy and sequence. In addition, not all skeleton joints contribute equally to activity recognition, and tracking all of them is computationally intensive. Current salat recognition focuses on the main movements and does not cover the whole cycle of the salat activity. Moreover, a wearable sensor is unnatural for salat, since the user needs to give absolute concentration during the activity. The research conducted here lies at the intersection of technological development and Muslim spiritual practice. This study uses dual-sensor cameras and a special sensor prayer mat that cooperate in recognizing salat movements and identifying errors in them. With current depth cameras and software development kits, human joint information is available for locating joint positions. Only the important joints with significant movement were selected for tracking, enabling real-time motion recognition. This selective joint algorithm is computationally efficient and offers good recognition accuracy in real time.
    Once the features had been constructed, a Hidden Markov Model classifier was used to train and test the algorithm. The algorithm was tested on a purpose-built dataset of depth videos recorded with a Kinect camera. The motion recognition system was designed around the salat activity to recognize the user's movements and error rate, which were later compared with the traditional tutor-based methodology. Subsequently, an evaluation comprising 25 participants was conducted using usability testing methods. The experiment evaluated the success score of the user's salat movement recognition and the error rate; user experience and subjective satisfaction toward the proposed system were also considered to evaluate user acceptance. The results showed a significant difference (p < 0.05) in success score and user error rate between the proposed system and the traditional tutor-based methodology. The study also showed that the proposed motion recognition system successfully recognized salat movements and evaluated user errors in the salat activity, offering an alternative salat learning methodology. This motion recognition system could offer an alternative learning process in a variety of study domains beyond salat movement activity.
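
    The classification step described above, an HMM over features from the selected joints, can be sketched with a toy forward-algorithm scorer. In a classifier of this kind, one such model would be trained per salat movement and a sequence assigned to the model with the highest likelihood; all parameters below are hypothetical and not taken from the study.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm. obs: symbol indices;
    pi: initial state probabilities; A: transition matrix; B: emission
    matrix (states x symbols)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()               # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two hidden "movement phase" states, three quantized joint-angle symbols.
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.2, 0.8]])
B  = np.array([[0.5, 0.4, 0.1],
               [0.1, 0.3, 0.6]])
score = forward_loglik([0, 1, 2], pi, A, B)
```

    Classification then amounts to evaluating each movement's model on the observed joint-feature sequence and picking the highest score.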


    Motion Tracking of Infants in Risk of Cerebral Palsy
