
    Gait analysis using a single depth camera

    Gait analysis is often used as part of the rehabilitation program for post-stroke recovery assessment. Since current optical diagnostic and patient assessment tools tend to be expensive and not portable, this paper proposes a novel marker-based tracking system using a single depth camera, which provides a cost-effective solution suitable for home and clinic use. The proposed system can simultaneously generate motion patterns even within a complex background using the proposed geometric model-based algorithm and autonomously provide gait analysis results. The processed rehabilitation data can be accessed by cross-platform mobile devices using cloud-based services, enabling emerging telerehabilitation practices. Experimental validation shows good agreement with state-of-the-art, non-portable, and expensive industrial-standard systems.
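    As an illustration of the kind of computation such a marker-based depth system performs, the sketch below estimates a single joint angle from three hypothetical marker positions in camera space. It is not the paper's geometric model-based algorithm, and the coordinates are invented.

        # Minimal sketch (not the paper's algorithm): estimating a knee flexion
        # angle from three marker positions recovered from a single depth camera.
        # The marker coordinates are hypothetical camera-space points in metres.
        import numpy as np

        def joint_angle(proximal, joint, distal):
            """Angle (degrees) at `joint` formed by the two limb segments."""
            a = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
            b = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
            cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

        # Example: hip, knee, and ankle markers from one depth frame.
        hip, knee, ankle = [0.05, 0.9, 2.1], [0.06, 0.5, 2.0], [0.04, 0.1, 2.05]
        print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} deg")

    Tracking such angles frame by frame over a walking sequence is what yields the motion patterns used for gait assessment.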

    A depth camera motion analysis framework for tele-rehabilitation: motion capture and person-centric kinematics analysis

    With the increasing importance given to telerehabilitation, there is a growing need for accurate, low-cost, and portable motion capture systems that do not require specialist assessment venues. This paper proposes a novel framework for motion capture using only a single depth camera, which is portable and cost-effective compared to most industry-standard optical systems, without compromising on accuracy. Novel signal processing and computer vision algorithms are proposed to determine motion patterns of interest from infrared and depth data. To demonstrate the proposed framework’s suitability for rehabilitation, we developed a gait analysis application that depends on the underlying motion capture sub-system. Each subject’s individual kinematic parameters are calculated and stored for monitoring individual progress during clinical therapy. Experiments were conducted on 14 subjects: 5 healthy and 9 stroke survivors. The results show very close agreement of the relevant joint angles with a 12-camera VICON system, a mean error of at most 1.75% in detecting gait events with respect to the manually generated ground truth, and significant performance improvements in accuracy and execution time compared to a previous Kinect-based system.
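    To make the gait-event idea concrete, the following sketch detects heel-strike-like events as local minima of a synthetic vertical ankle trajectory using SciPy's peak finder. The frame rate, signal, and 0.5 s event spacing are assumptions for illustration, not the paper's method.

        # Illustrative sketch only: detecting gait events (heel strikes) as local
        # minima of a vertical ankle trajectory extracted from depth data. The
        # frame rate, synthetic signal, and spacing threshold are assumptions.
        import numpy as np
        from scipy.signal import find_peaks

        fs = 30.0                                  # assumed depth-camera frame rate (Hz)
        t = np.arange(0, 10, 1.0 / fs)
        ankle_height = 0.08 + 0.04 * np.sin(2 * np.pi * 1.0 * t)   # synthetic trajectory

        # Heel strikes ~ local minima of ankle height: invert and find peaks,
        # requiring at least 0.5 s between consecutive events.
        events, _ = find_peaks(-ankle_height, distance=int(0.5 * fs))
        print("heel-strike frames:", events[:5])
        print("times (s):", np.round(t[events[:5]], 2))

    Comparing such detected event frames against manually annotated ones is how a percentage detection error of the kind reported above would be computed.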

    2.5D multi-view gait recognition based on point cloud registration

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space, enabling data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on an in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
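    The dimension-reduction step (a discrete cosine transform followed by 2D principal component analysis) can be sketched as below on placeholder images. The crop size and number of projection axes are arbitrary choices for illustration, not the paper's settings.

        # Hedged sketch of DCT + 2DPCA dimension reduction on a stack of gait
        # images. The images are random placeholders, not real Color Gait
        # Curvature Images, and all sizes are assumptions.
        import numpy as np
        from scipy.fft import dctn

        rng = np.random.default_rng(0)
        images = rng.random((20, 64, 64))            # 20 hypothetical gait images

        # Keep only the low-frequency DCT coefficients of each image.
        coeffs = np.stack([dctn(im, norm="ortho")[:16, :16] for im in images])

        # 2DPCA: eigenvectors of the image covariance matrix
        # G = E[(A - mean)^T (A - mean)], projected onto the top axes.
        mean = coeffs.mean(axis=0)
        G = sum((a - mean).T @ (a - mean) for a in coeffs) / len(coeffs)
        _, eigvecs = np.linalg.eigh(G)
        proj = eigvecs[:, ::-1][:, :4]               # top-4 projection axes

        features = coeffs @ proj                     # (20, 16, 4) reduced features
        print(features.shape)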

    Recurrent Attention Models for Depth-Based Person Identification

    We present an attention-based model that reasons on human body shape and motion dynamics to identify individuals in the absence of RGB information, hence in the dark. Our approach leverages unique 4D spatio-temporal signatures to address the identification problem across days. Formulated as a reinforcement learning task, our model is based on a combination of convolutional and recurrent neural networks with the goal of identifying small, discriminative regions indicative of human identity. We demonstrate that our model produces state-of-the-art results on several published datasets given only depth images. We further study the robustness of our model towards viewpoint, appearance, and volumetric changes. Finally, we share insights gleaned from interpretable 2D, 3D, and 4D visualizations of our model's spatio-temporal attention.
    Comment: Computer Vision and Pattern Recognition (CVPR) 2016
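    A core ingredient of such recurrent attention models is a glimpse sensor that crops a small region around the current attention location. The minimal sketch below shows only that cropping step on a synthetic depth frame; the convolutional, recurrent, and reinforcement-learning components of the model are omitted, and the patch size and image are placeholders.

        # Hedged sketch of the "glimpse" idea behind recurrent attention models:
        # crop a small patch of a depth image around an attention location, which
        # a recurrent network would then use to decide where to look next.
        import numpy as np

        def glimpse(depth_image, center, size=16):
            """Extract a size x size patch centred on `center` (row, col),
            zero-padding at the image border."""
            pad = size // 2
            padded = np.pad(depth_image, pad, mode="constant")
            r, c = center[0] + pad, center[1] + pad
            return padded[r - pad:r + pad, c - pad:c + pad]

        depth = np.random.default_rng(1).random((240, 320))   # synthetic depth frame
        patch = glimpse(depth, center=(120, 160), size=16)
        print(patch.shape)   # (16, 16)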

    Multi-set canonical correlation analysis for 3D abnormal gait behaviour recognition based on virtual sample generation

    Small sample datasets and two-dimensional (2D) approaches are challenges for vision-based abnormal gait behaviour recognition (AGBR). The lack of three-dimensional (3D) structure of the human body causes 2D-based methods to be limited in abnormal gait virtual sample generation (VSG). In this paper, 3D AGBR based on VSG and multi-set canonical correlation analysis (3D-AGRBMCCA) is proposed. First, unstructured point cloud data of gait are obtained using a structured light sensor. A 3D parametric body model is then deformed to fit the point cloud data in both shape and posture, and the features of the point cloud data are converted to a high-level structured representation of the body. The parametric body model is used for VSG based on the estimated body pose and shape data. Symmetry virtual samples, pose-perturbation virtual samples, and various body-shape virtual samples with multiple views are generated to extend the training set. The spatial-temporal features of abnormal gait behaviour from different views, body poses, and shape parameters are then extracted by a convolutional neural network-based Long Short-Term Memory network and projected onto a uniform pattern space using deep-learning-based multi-set canonical correlation analysis. Experiments on four publicly available datasets show that the proposed system performs well under various conditions.
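    One of the virtual-sample ideas above, the symmetry sample, can be illustrated with a small sketch that mirrors a hypothetical 3D joint sequence about the sagittal plane and swaps left/right joints. The joint layout and data are placeholders, not the paper's parametric body model.

        # Simplified sketch of symmetry-based virtual sample generation:
        # reflect a 3D joint sequence about the sagittal (x = 0) plane and
        # swap left/right joints. Joint names and data are hypothetical.
        import numpy as np

        JOINTS = ["hip_l", "hip_r", "knee_l", "knee_r", "ankle_l", "ankle_r"]
        SWAP = {"hip_l": "hip_r", "hip_r": "hip_l",
                "knee_l": "knee_r", "knee_r": "knee_l",
                "ankle_l": "ankle_r", "ankle_r": "ankle_l"}

        def mirror_sequence(seq):
            """seq: (frames, joints, 3) array in a body-centred frame, x = lateral axis."""
            mirrored = seq.copy()
            mirrored[..., 0] *= -1.0                      # reflect about x = 0
            order = [JOINTS.index(SWAP[j]) for j in JOINTS]
            return mirrored[:, order, :]                  # swap left/right joints

        gait = np.random.default_rng(2).random((100, len(JOINTS), 3))
        virtual = mirror_sequence(gait)                   # extra training sample
        print(virtual.shape)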