
    Human Motion Analysis: From Gait Modeling to Shape Representation and Pose Estimation

    This dissertation presents a series of fundamental approaches to human motion analysis from three perspectives: manifold learning-based gait motion modeling, articulated shape representation, and efficient pose estimation. First, a new joint gait-pose manifold (JGPM) learning algorithm is proposed to optimize the gait and pose variables jointly. To enhance the representability and flexibility for complex motion modeling, we also propose a multi-layer JGPM that can deal with a variety of walking styles and strides. We resort to a topologically-constrained Gaussian process latent variable model (GPLVM) to learn the multi-layer JGPM, where two new techniques are introduced to facilitate model learning. The first is training data diversification, which creates a set of simulated motion data with different strides from limited data. The second is topology-aware local learning, which speeds up model learning by taking advantage of the local topological structure. We demonstrate the effectiveness of our approach by synthesizing high-quality motions from the multi-layer model. The experimental results show that the multi-layer JGPM outperforms several existing GPLVM-based models in terms of the overall performance of motion modeling. On the other hand, to achieve efficient human pose estimation from a single depth sensor, we develop a generalized Gaussian kernel correlation (GKC)-based framework which supports not only body shape modeling but also articulated pose tracking. We first generalize GKC from the univariate Gaussian to the multivariate one and derive a unified GKC function that provides a continuous and differentiable similarity measure between a template and an observation, both of which are represented by a collection of univariate and/or multivariate Gaussian kernels.
    Then, to facilitate data matching and accommodate articulated body deformation, we embed a quaternion-based articulated skeleton into a template model built from a collection of multivariate Gaussians and develop an articulated GKC (AGKC) which supports subject-specific shape modeling and articulated pose tracking for both the full body and the hand. Our tracking algorithm is simple yet effective and computationally efficient. We evaluate our algorithm on two benchmark depth datasets. The experimental results are promising and competitive when compared with state-of-the-art algorithms. (Electrical Engineering)
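    The unified GKC function described above has a well-known closed form: the correlation (inner product) of two Gaussian kernels is itself a Gaussian evaluated at the difference of their means, with the covariances summed. A minimal sketch of this similarity measure, not the authors' implementation, is:

```python
import numpy as np

def gaussian_correlation(mu1, cov1, mu2, cov2):
    """Closed-form correlation of two multivariate Gaussian kernels:
    integral of N(x; mu1, cov1) * N(x; mu2, cov2) dx
    = N(mu1 - mu2; 0, cov1 + cov2)."""
    d = mu1.shape[0]
    cov = cov1 + cov2
    diff = mu1 - mu2
    norm = 1.0 / np.sqrt(((2 * np.pi) ** d) * np.linalg.det(cov))
    return norm * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff))

def kernel_correlation(template, observation):
    """Sum of pairwise correlations between two Gaussian-kernel sets.
    Each set is a list of (mean, covariance) pairs."""
    return sum(gaussian_correlation(m1, c1, m2, c2)
               for m1, c1 in template
               for m2, c2 in observation)
```

    Because each pairwise term is smooth in the template means, the total correlation is continuous and differentiable, which is what makes gradient-based pose optimization possible.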

    Learning discriminative features for human motion understanding

    Human motion understanding has attracted considerable interest in recent research for its applications to video surveillance, content-based search and healthcare. With different capturing methods, human motion can be recorded in various forms (e.g. skeletal data, video, image). Compared to 2D video and images, skeletal data recorded by a motion capture device contains full 3D movement information. We first look into a gait motion analysis problem based on 3D skeletal data and propose an automatic framework for identifying musculoskeletal and neurological disorders among older people. In this framework, a feature selection strategy and two new gait features are proposed to choose an optimal feature set from the input features to optimise classification accuracy. Due to self-occlusion caused by a single shooting angle, 2D video and images cannot record full 3D geometric information, so viewpoint variation dramatically affects the performance of many 2D-based applications (e.g. arbitrary-view action recognition and image-based 3D human shape reconstruction). Leveraging view-invariance from 3D models is a popular idea for improving performance on 2D computer vision problems. Therefore, in the second contribution, we adopt 3D models built with computer graphics technology to assist in solving the problem of arbitrary-view action recognition. As a solution, a new transfer dictionary learning framework is proposed that utilises computer graphics technologies to synthesise realistic 2D and 3D training videos and can project a real-world 2D video into a view-invariant sparse representation. In the third contribution, 3D models are utilised to build an end-to-end 3D human shape reconstruction system, which can recover the 3D human shape from a single image without any prior parametric model.
    In contrast to most existing methods that calculate 3D joint locations, the method proposed in this thesis produces a richer and more useful point-cloud-based representation. Synthesised high-quality 2D images and dense 3D point clouds are used to train a CNN-based encoder and a 3D regression module. In summary, the methods introduced in this thesis explore human motion understanding from 3D to 2D: we investigate how to compensate for the lack of full geometric information in 2D-based applications with view-invariance learnt from 3D models.
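    The feature selection strategy mentioned in the first contribution can be illustrated with a generic greedy forward scheme. This is a hedged sketch only: the nearest-centroid classifier and the simple even/odd hold-out split are stand-ins, not the thesis's actual strategy or classifier.

```python
import numpy as np

def centroid_accuracy(X, y):
    """Hold-out accuracy of a nearest-centroid classifier (even rows
    train, odd rows test) -- a cheap stand-in for any classifier."""
    tr, te = X[::2], X[1::2]
    ytr, yte = y[::2], y[1::2]
    classes = np.unique(ytr)
    cents = np.stack([tr[ytr == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(te[:, None, :] - cents[None, :, :], axis=2)
    pred = classes[d.argmin(axis=1)]
    return (pred == yte).mean()

def greedy_forward_selection(X, y, max_features=5):
    """Add, one at a time, the feature that most improves hold-out
    accuracy; stop when no remaining feature helps."""
    selected, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining and len(selected) < max_features:
        acc, f = max((centroid_accuracy(X[:, selected + [f]], y), f)
                     for f in remaining)
        if acc <= best:
            break
        best, selected = acc, selected + [f]
        remaining.remove(f)
    return selected, best
```

    On gait data, the columns of `X` would be the candidate gait features and `y` the disorder labels; the loop returns the subset that maximizes classification accuracy.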

    Human Pose Estimation from Monocular Images : a Comprehensive Survey

    Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a certain category, for example model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms to this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached via two means of categorization: one distinguishes top-down from bottom-up methods, and the other generative from discriminative methods. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper collects 26 publicly available data sets for validation and describes the error measurement methods that are frequently used.
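    Among the error measurement methods such surveys collect, the Percentage of Correct Keypoints (PCK) is one frequently used example: a predicted joint counts as correct when its distance to the ground truth falls below a fraction of a reference length. A minimal sketch, with illustrative reference-joint indices rather than any fixed convention, is:

```python
import numpy as np

def pck(pred, gt, threshold=0.5, ref_pair=(5, 12)):
    """Percentage of Correct Keypoints (PCK).
    pred, gt: (num_joints, 2) arrays of 2D joint locations.
    The reference length is the distance between two reference joints
    (e.g. shoulder and hip -- the indices here are illustrative)."""
    ref = np.linalg.norm(gt[ref_pair[0]] - gt[ref_pair[1]])
    dists = np.linalg.norm(pred - gt, axis=1)
    return (dists < threshold * ref).mean()
```

    Normalizing by a body-relative length makes the score invariant to the person's scale in the image, which is why PCK-style measures are preferred over raw pixel errors.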

    Image Enhancement via Deep Spatial and Temporal Networks

    Image enhancement is a classic problem in computer vision and has been studied for decades. It includes various subtasks such as super-resolution, image deblurring, rain removal and denoising. Among these tasks, image deblurring and rain removal have become increasingly active, as they play an important role in many areas such as autonomous driving, video surveillance and mobile applications. In addition, the two tasks are connected: for example, blur and rain often degrade images simultaneously, and the performance of their removal relies on spatial and temporal learning. To help generate sharp images and videos, in this thesis we propose efficient algorithms based on deep neural networks for solving the problems of image deblurring and rain removal. In the first part of this thesis, we study the problem of image deblurring and propose four deep learning based methods. First, for single image deblurring, a new framework is presented which first learns how to transfer sharp images to realistic blurry images via a learning-to-blur Generative Adversarial Network (GAN) module, and then trains a learning-to-deblur GAN module to generate sharp images from blurry versions. In contrast to prior work, which solely focuses on learning to deblur, the proposed method learns to realistically synthesize blurring effects using unpaired sharp and blurry images. Second, for video deblurring, spatio-temporal learning and adversarial training are used to recover sharp and realistic video frames from blurry inputs. 3D convolutional kernels on the basis of deep residual neural networks are employed to capture better spatio-temporal features, and the network is trained with both a content loss and an adversarial loss to drive the model to generate realistic frames. Third, the problem of extracting sharp image sequences from a single motion-blurred image is tackled.
    A detail-aware network is presented, a cascaded generator that handles the problems of ambiguity, subtle motion and loss of details. Finally, this thesis proposes a level-attention deblurring network and constructs a new large-scale dataset including images with blur caused by various factors, which we use to evaluate current deep deblurring methods as well as our own. In the second part of this thesis, we study the problem of image deraining and propose three deep learning based methods. First, for single image deraining, the problem of joint removal of raindrops and rain streaks is tackled. In contrast to most prior works, which focus solely on removing either raindrops or rain streaks, a dual attention-in-attention model is presented that removes both simultaneously. Second, for video deraining, a novel end-to-end framework is proposed to obtain the spatial representation and temporal correlations based on ResNet-based and LSTM-based architectures, respectively. The proposed method can generate multiple derained frames at a time and outperforms the state-of-the-art methods in terms of quality and speed. Finally, for stereo image deraining, a deep stereo semantic-aware deraining network is proposed for the first time in computer vision. Different from previous methods, which only learn from a pixel-level loss function or monocular information, the proposed network advances image deraining by leveraging semantic information and the visual deviation between the two views.
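    The combination of a content loss and an adversarial loss used to train such restoration generators can be sketched as follows. The L1 content term, the non-saturating adversarial term, and the weighting `lam` are illustrative assumptions for the sketch, not the thesis's exact formulation:

```python
import numpy as np

def content_loss(pred, target):
    """Pixel-wise L1 loss keeping the restored frame faithful to the
    sharp ground truth."""
    return np.abs(pred - target).mean()

def adversarial_loss(disc_scores):
    """Non-saturating generator loss: push the discriminator's
    probability that restored frames are real toward 1.
    disc_scores: array of probabilities in (0, 1]."""
    eps = 1e-8
    return -np.log(disc_scores + eps).mean()

def total_loss(pred, target, disc_scores, lam=0.01):
    """Weighted sum used to train the generator: the content term
    preserves fidelity, the adversarial term encourages realism.
    lam is an illustrative weight, not a value from the thesis."""
    return content_loss(pred, target) + lam * adversarial_loss(disc_scores)
```

    The content term alone tends to produce overly smooth frames; the adversarial term trades a little fidelity for sharper, more realistic texture, which is the motivation for combining them.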

    Exploring the Landscape of Ubiquitous In-home Health Monitoring: A Comprehensive Survey

    Ubiquitous in-home health monitoring systems have become popular in recent years due to the rise of digital health technologies and the growing demand for remote health monitoring. These systems enable individuals to increase their independence by allowing them to monitor their health from the home and giving them more control over their well-being. In this study, we perform a comprehensive survey on this topic by reviewing a large body of literature in the area. We investigate these systems from various aspects, namely sensing technologies, communication technologies, intelligent and computing systems, and application areas. Specifically, we provide an overview of in-home health monitoring systems and identify their main components. We then present each component and discuss its role within in-home health monitoring systems. In addition, we provide an overview of the practical use of ubiquitous technologies in the home for health monitoring. Finally, we identify the main challenges and limitations based on the existing literature and provide eight recommendations for potential future research directions. We conclude that, despite extensive research on the individual components, the development of effective in-home health monitoring systems still requires further investigation. Comment: 35 pages, 5 figures

    Parallelization Strategies for Markerless Human Motion Capture

    Markerless Motion Capture (MMOCAP) is the problem of determining the pose of a person from images captured by one or several cameras simultaneously, without using markers on the subject. Evaluating candidate solutions is frequently the most time-consuming task, making most of the proposed methods inapplicable in real-time scenarios. This paper presents an efficient approach to parallelize the evaluation of the solutions on CPUs and GPUs. Our proposal is experimentally compared on six sequences of the HumanEva-I dataset using the CMA-ES algorithm. Multiple algorithm configurations were tested to analyze the best trade-off between accuracy and computing time. The proposed methods obtain speedups of 8× on multi-core CPUs, 30× on a single GPU and up to 110× using 4 GPUs.
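    The step the paper parallelizes can be sketched as mapping a fitness function over the candidate solutions of one CMA-ES generation. The fitness function below is a synthetic stand-in (a real MMOCAP fitness would project the body model under the candidate pose and compare against camera images), and a thread pool is used for portability rather than the paper's CPU/GPU implementations:

```python
import numpy as np
from multiprocessing.pool import ThreadPool

def fitness(pose):
    """Placeholder fitness: a cheap synthetic error standing in for
    projecting the body model and matching it against the images."""
    return float(np.sum((np.asarray(pose) - 0.5) ** 2))

def evaluate_population(population, workers=4):
    """Evaluate all candidates of one CMA-ES generation concurrently.
    Each candidate is independent, which is what makes this step easy
    to distribute across cores (or, in the paper, GPU threads)."""
    with ThreadPool(processes=workers) as pool:
        return pool.map(fitness, population)
```

    Because candidate evaluations share no state, the achievable speedup scales with the population size and the hardware, which is consistent with the CPU and GPU speedups the paper reports.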