
    Acquiring 3D Full-body Motion from Noisy and Ambiguous Input

    Natural human motion is in high demand and widely used in applications such as video games and virtual reality. However, acquiring full-body motion remains challenging: the capture system must accurately record a wide variety of human actions without demanding considerable time and skill to set up. For instance, commercial optical motion capture systems such as Vicon can capture human motion with high accuracy and resolution, yet they often require post-processing by experts, which is time-consuming and costly. Microsoft Kinect, despite its popularity and wide range of applications, does not reconstruct complex movements accurately when significant occlusions occur. This dissertation explores two different approaches that accurately reconstruct full-body human motion from the noisy and ambiguous input data captured by commercial motion capture devices. The first approach automatically generates high-quality human motion from noisy data obtained from commercial optical motion capture systems, eliminating the need for post-processing. The second approach accurately captures a wide variety of human motion even under significant occlusions by using color/depth data captured by a single Kinect camera. The common theme underlying both approaches is the use of prior knowledge embedded in a pre-recorded motion capture database to reduce the reconstruction ambiguity caused by noisy and ambiguous input and to constrain the solution to lie in the space of natural motion. More specifically, the first approach constructs a series of spatial-temporal filter bases from pre-captured human motion data and employs them, together with robust statistics techniques, to filter motion data corrupted by noise and outliers. The second approach formulates the problem in a maximum a posteriori (MAP) framework and generates the most likely pose that both explains the observations and is consistent with the patterns embedded in the pre-recorded motion capture database. We demonstrate the effectiveness of our approaches through extensive numerical evaluations on synthetic data and comparisons against results created by commercial motion capture systems. The first approach can effectively denoise a wide variety of noisy motion data, including walking, running, jumping and swimming, while the second approach is shown to reconstruct a wider range of motions accurately than Microsoft Kinect.
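
As a rough illustration of the MAP formulation described in this abstract, the Python sketch below scores candidate poses by combining a Gaussian observation likelihood with a Gaussian pose prior fitted to a motion capture database. The function name, the Gaussian forms and the candidate-set interface are assumptions made for the example, not the dissertation's actual model.

import numpy as np

def map_pose_estimate(candidates, observation, prior_mean, prior_cov, obs_noise):
    """Pick the candidate pose maximising log p(obs | pose) + log p(pose).

    candidates  : (N, D) array of candidate poses (e.g. joint angles)
    observation : (D,) noisy pose measurement (e.g. derived from depth data)
    prior_mean, prior_cov : Gaussian prior fitted to pre-recorded mocap data
    obs_noise   : scalar observation noise variance (assumed isotropic here)
    """
    prior_prec = np.linalg.inv(prior_cov)
    best_pose, best_score = None, -np.inf
    for pose in candidates:
        # Likelihood term: how well the pose explains the noisy observation.
        log_lik = -0.5 * np.sum((observation - pose) ** 2) / obs_noise
        # Prior term: how consistent the pose is with the mocap database.
        diff = pose - prior_mean
        log_prior = -0.5 * diff @ prior_prec @ diff
        score = log_lik + log_prior
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose, best_score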

    Articulated human tracking and behavioural analysis in video sequences

    Recently, there has been a dramatic growth of interest in the observation and tracking of human subjects through video sequences. Arguably, the principal impetus has come from the perceived demand for technological surveillance; however, applications in entertainment, intelligent domiciles and medicine are also increasing. This thesis examines human articulated tracking and the classification of human movement, first separately and then as a sequential process. First, this thesis considers the development and training of a 3D model of human body structure and dynamics. To process video sequences, an observation model is also designed with a multi-component likelihood based on edge, silhouette and colour. This is defined on the articulated limbs, and visible from a single camera or multiple cameras, each of which may be calibrated from that sequence. Second, for behavioural analysis, we develop a methodology in which actions and activities are described by semantic labels generated from a Movement Cluster Model (MCM). Third, a Hierarchical Partitioned Particle Filter (HPPF) was developed for human tracking that allows multi-level parameter search consistent with the body structure. This tracker relies on the articulated motion prediction provided by the MCM at pose or limb level. Fourth, tracking and movement analysis are integrated to generate a probabilistic activity description with action labels. The implemented algorithms for tracking and behavioural analysis are tested extensively and independently against ground truth on human tracking and surveillance datasets. Dynamic models are shown to predict and generate synthetic motion, while the MCM recovers both periodic and non-periodic activities, defined either on the whole body or at the limb level. Tracking results are comparable with the state of the art; however, the integrated behaviour analysis adds to the value of the approach. Overseas Research Students Awards Scheme (ORSAS).
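
To make the combination of a learned motion model with a multi-component likelihood concrete, here is a minimal, hypothetical particle-filter step in which prediction is delegated to a motion-model callback (in the thesis this role is played by the MCM, within the HPPF's multi-level search) and particles are re-weighted by independent edge, silhouette and colour likelihoods. The function names and interfaces are assumptions for illustration only, not the thesis's implementation.

import numpy as np

def particle_filter_step(particles, weights, predict_fn, likelihood_fns, rng):
    """One resample/predict/update step for articulated pose tracking.

    particles      : (N, D) array of pose hypotheses
    weights        : (N,) normalised particle weights
    predict_fn     : motion-model predictor, pose -> predicted pose (assumed)
    likelihood_fns : per-cue likelihoods (e.g. edge, silhouette, colour)
    """
    n = len(particles)
    # Resample particles in proportion to their current weights.
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Predict with the learned motion model (plus whatever noise it injects).
    particles = np.array([predict_fn(pose, rng) for pose in particles])
    # Update: treat the image cues as independent and combine multiplicatively.
    new_weights = np.array([
        np.prod([lik(pose) for lik in likelihood_fns]) for pose in particles
    ])
    new_weights = new_weights / new_weights.sum()
    return particles, new_weights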

    3D Human Motion Tracking and Pose Estimation using Probabilistic Activity Models

    This thesis presents work on generative approaches to human motion tracking and pose estimation, in which a geometric model of the human body is used for comparison with observations. The existing generative tracking literature can be quite clearly divided into two groups. First, approaches that attempt to solve a difficult high-dimensional inference problem in the body model's full or ambient pose space, recovering freeform or unknown activity. Second, approaches that restrict inference to a low-dimensional latent embedding of the full pose space, recovering activity for which training data is available (known activity). Significant advances have been made in each of these subgroups. Given sufficiently rich multiocular observations and plentiful computational resources, high-dimensional approaches have been proven to track fast and complex unknown activities robustly. Conversely, low-dimensional approaches have been able to support monocular tracking and to significantly reduce computational costs for the recovery of known activity. However, their competing advantages, although complementary, have remained disjoint. The central aim of this thesis is to combine low- and high-dimensional generative tracking techniques to benefit from the best of both approaches. First, a simple generative tracking approach is proposed for tracking known activities in a latent pose space using only monocular or binocular observations. A hidden Markov model (HMM) is used to provide dynamics and constrain a particle-based search for poses. The ability of the HMM to classify as well as synthesise poses means that the approach naturally extends to the modelling of a number of different known activities in a single joint-activity latent space. Second, an additional low-dimensional approach is introduced to permit transitions between segmented known-activity training data by allowing particles to move between activity manifolds. Both low-dimensional approaches are then fairly and efficiently combined with a simultaneous high-dimensional generative tracking task in the ambient pose space. This combination allows for the recovery of sequences containing multiple known and unknown human activities at an appropriate (dynamic) computational cost. Finally, a rich hierarchical embedding of the ambient pose space is investigated. This representation allows inference to progress from a single full-body or global non-linear latent pose space, through a number of gradually smaller part-based latent models, to the full ambient pose space. By preserving long-range correlations present in training data, the positions of occluded limbs can be inferred during tracking. Alternatively, by breaking the implied coordination between part-based models, novel activity combinations, or composite activity, may be recovered.
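
As a loose sketch of how an HMM can provide dynamics and constrain a particle-based pose search in a latent space (the state means, covariances and Gaussian emission form are assumptions for the example, not the thesis's model), the snippet below propagates each particle's discrete HMM state through the transition matrix and then samples a latent pose around the new state's training-data mean.

import numpy as np

def hmm_constrained_proposal(states, trans, means, covs, rng, jitter=0.05):
    """Propose latent poses constrained by HMM dynamics.

    states : (N,) current discrete HMM state index per particle
    trans  : (K, K) state transition matrix learned from training data
    means  : (K, L) mean latent pose per state
    covs   : (K, L, L) latent pose covariance per state
    Returns the new states and latent pose samples drawn around them.
    """
    n_states, latent_dim = means.shape
    # Advance each particle's discrete state using the HMM transition model.
    new_states = np.array([rng.choice(n_states, p=trans[s]) for s in states])
    # Sample latent poses near the new state's mean; jitter keeps some spread.
    latents = np.array([
        rng.multivariate_normal(means[s], covs[s] + jitter * np.eye(latent_dim))
        for s in new_states
    ])
    return new_states, latents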

    Human perception capabilities for socially intelligent domestic service robots

    The activities of daily living for an increasing number of frail elderly people represent a continuous struggle, both for them and for their extended families. These people have difficulty coping at home alone but are still sufficiently fit not to need the round-the-clock care provided by a nursing home. Their struggle can be alleviated by the deployment of a mechanical helper in their home, i.e. a service robot that can execute a range of simple object manipulation tasks. Such a robotic application promises to extend the period of independent home living for elderly people while providing them with a better quality of life. However, despite recent technological advances in robotics, some challenges remain, mainly related to human factors. Arguably, the lack of consistently dependable human detection, localisation, position and pose tracking information, together with insufficiently refined processing of sensor information, makes close-range physical interaction between a robot and a human a high-risk task. The work described in this thesis addresses these deficiencies in the processing of human information by today's service robots. This is achieved by proposing a new paradigm for the robot's situational awareness with regard to people, together with a collection of methods and techniques operating at the lower levels of the paradigm, i.e. perception of new human information. The collection includes methods for obtaining and processing information about the presence, location and body pose of people. In addition to the availability of reliable human perception information, the integration between the separate levels of the paradigm is considered a critically important factor for achieving human-aware control of the robot. Improving the cognition, judgement and decision-making links between the paradigm's layers enhances the robot's capability to engage in a natural and more meaningful interaction with people and, therefore, leads to a more enjoyable user experience. Thus, the proposed paradigm and methodology are envisioned to contribute to making prolonged assisted living of elderly people at home a more feasible and realistic task. In particular, this thesis proposes a set of methods for human presence detection, localisation and body pose tracking that operate at the perception level of the paradigm. The problem of having only limited visibility of a person from the robot's on-board sensors is addressed by the proposed classifier fusion method, which combines information from several types of sensors. A method for improved real-time human body pose tracking is also investigated. Additionally, a method for estimating multiple human tracks from noisy detections, together with analysis of the computed tracks to reason about the social interactions within a group, operating at the comprehension level of the robot's situational awareness paradigm, is proposed. Finally, at the human-aware planning layer, a method is proposed that utilises the human-related information generated by the perception and comprehension layers to compute a minimally intrusive navigation path to a target person within a human group. This method demonstrates how the robot's improved human perception capabilities, through its judgement activity, can be utilised by the highest level of the paradigm, i.e. the decision-making layer, to achieve user-friendly human-robot interactions.
Overall, the research presented in this work, drawing on recent innovations in statistical learning, data fusion and optimisation methods, improves the robot's situational awareness with regard to people, with the main focus placed on the human sensing capabilities of service robots. This improved situational awareness, as defined by the proposed paradigm, enables more meaningful human-robot interactions.
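
As a rough, hypothetical illustration of sensor-level classifier fusion of the kind described above (the weighted-sum rule, the detector interface and the sensor names are assumptions, not the thesis's method), the sketch below combines per-sensor human-presence confidences into a single fused score using per-sensor reliability weights.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "depth_camera", "rgb_camera", "laser" (assumed names)
    confidence: float  # detector's belief that a human is present, in [0, 1]

def fuse_detections(detections, sensor_weights):
    """Weighted-sum fusion of per-sensor human-presence confidences.

    sensor_weights maps a sensor name to a reliability weight; sensors that stay
    dependable under partial visibility of the person get larger weights.
    """
    num = den = 0.0
    for d in detections:
        w = sensor_weights.get(d.sensor, 0.0)
        num += w * d.confidence
        den += w
    return num / den if den > 0 else 0.0

# Example: the depth camera sees the torso clearly, the RGB camera only partially.
score = fuse_detections(
    [Detection("depth_camera", 0.9), Detection("rgb_camera", 0.4)],
    {"depth_camera": 0.7, "rgb_camera": 0.3},
)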