59 research outputs found

    Multi-set canonical correlation analysis for 3D abnormal gait behaviour recognition based on virtual sample generation

    Small sample datasets and two-dimensional (2D) approaches are challenges for vision-based abnormal gait behaviour recognition (AGBR). The lack of three-dimensional (3D) structure of the human body limits 2D-based methods in abnormal gait virtual sample generation (VSG). In this paper, 3D AGBR based on VSG and multi-set canonical correlation analysis (3D-AGRBMCCA) is proposed. First, unstructured point cloud data of gait are obtained using a structured light sensor. A 3D parametric body model is then deformed to fit the point cloud data in both shape and posture. The features of the point cloud data are then converted to a high-level structured representation of the body. The parametric body model is used for VSG based on the estimated body pose and shape data. Symmetry virtual samples, pose-perturbation virtual samples and various body-shape virtual samples with multiple views are generated to extend the training samples. The spatial-temporal features of the abnormal gait behaviour from different views, body pose and shape parameters are then extracted by a convolutional neural network based Long Short-Term Memory (CNN-LSTM) network. These are projected onto a uniform pattern space using deep-learning-based multi-set canonical correlation analysis. Experiments on four publicly available datasets show that the proposed system performs well under various conditions.
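The multi-set CCA step above projects features from several views into one correlated pattern space. As a minimal sketch of the underlying idea, the snippet below computes the classical two-set canonical correlations with NumPy (whitened cross-covariance plus SVD); the function name, the regularization `eps`, and the two-set restriction are my simplifications, and the paper's deep multi-set variant learns the projections with neural networks instead.

```python
import numpy as np

def cca_correlations(X, Y, eps=1e-8):
    """Canonical correlations between two feature sets (n samples x d features).

    Two-set special case of multi-set CCA: whiten each set's covariance,
    then take the singular values of the cross-covariance.
    """
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])  # regularized covariances
    Cyy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n

    def inv_sqrt(C):  # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Singular values of the whitened cross-covariance are the correlations.
    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(M, compute_uv=False)
```

When the two views are related by an invertible linear map, all canonical correlations come out close to 1, which is the property the shared pattern space exploits.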

    Vision-based gait impairment analysis for aided diagnosis

    Gait is a firsthand reflection of health condition. This belief has inspired recent research efforts to automate the analysis of pathological gait in order to assist physicians in decision-making. However, most of these efforts rely on gait descriptions that are difficult for humans to understand, or on sensing technologies rarely available in ambulatory services. This paper proposes a number of semantic, normalized gait features computed from a single video acquired by a low-cost sensor. Far from being conventional spatio-temporal descriptors, the features are aimed at quantifying gait impairment, such as gait asymmetry from several perspectives or falling risk. They were designed to be invariant to frame rate and image size, allowing cross-platform comparisons. Experiments were formulated in terms of two databases. A well-known general-purpose gait dataset is used to establish normal references for the features, while a new database, introduced in this work, provides samples under eight different walking styles: one normal and seven impaired patterns. A number of statistical studies were carried out to prove the sensitivity of the features in measuring the expected pathologies, providing enough evidence about their accuracy.
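One of the normalized impairment features the abstract mentions is gait asymmetry. A minimal sketch of a common symmetry-index form (not necessarily the paper's exact feature) takes one spatio-temporal value per side, such as mean step length or swing time; dividing the left/right difference by the mean keeps the measure invariant to units, image size and frame rate, in the spirit of the paper's design goals.

```python
def asymmetry_index(left, right):
    """Normalized left/right gait asymmetry.

    Returns 0 for perfect symmetry and grows with imbalance.
    Dividing by the mean of the two sides makes the index
    unit-free, so it is comparable across frame rates and
    image resolutions.
    """
    mean = (left + right) / 2.0
    return abs(left - right) / mean
```

For example, swing times of 1.2 s and 0.8 s give an asymmetry of 0.4, regardless of whether the inputs are in seconds or in frames.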

    Measuring Oscillating Walking Paths with a LIDAR

    This work describes the analysis of different walking paths registered using a Light Detection And Ranging (LIDAR) laser range sensor in order to measure oscillating trajectories during unsupervised walking. Estimates of the gait and trajectory parameters were obtained with a terrestrial LIDAR placed 100 mm above the ground, with the scanning plane parallel to the floor, to measure the trajectory of the legs without attaching any markers or modifying the floor. Three large walking experiments with straight and oscillating trajectories were performed to test the proposed measurement system. The main advantages of the proposed system are the ability to measure several steps and obtain average gait parameters, and the minimal infrastructure required. This measurement system enables the development of new ambulatory applications based on the analysis of gait and trajectory during a walk.

    Measuring Gait Using a Ground Laser Range Sensor

    This paper describes a measurement system designed to register the displacement of the legs using a two-dimensional laser range sensor with a scanning plane parallel to the ground and to extract gait parameters. In the proposed methodology, the position of the legs is estimated by fitting two circles to the laser points that define their contour, and the gait parameters are extracted by applying a step-line model to the estimated displacement of the legs to reduce uncertainty in the determination of the stance and swing phases of the gait. Results obtained at ranges up to 8 m show that the systematic error in the location of a static leg is lower than 10 mm, with a standard deviation lower than 8 mm; this deviation increases to 11 mm in the case of a moving leg. The proposed measurement system has been applied to estimate the gait parameters of six volunteers in a preliminary walking experiment.
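The circle-fitting step above can be sketched with a standard algebraic (Kåsa) least-squares fit: the scan returns points on the front contour of each leg, and fitting the implicit circle equation x² + y² + Dx + Ey + F = 0 recovers the leg's centre and radius. The abstract does not say which fitting method the paper uses, so the Kåsa formulation here is an illustrative stand-in.

```python
import math
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to 2D laser points.

    Solves x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) as a linear
    least-squares problem, then recovers centre and radius. The paper
    fits two such circles, one per leg contour.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = math.sqrt(cx ** 2 + cy ** 2 - F)
    return (cx, cy), radius
```

Given exact points on a 60 mm radius circle, the fit recovers centre and radius to numerical precision; with noisy laser returns it gives the least-squares estimate instead.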

    Nonlinear predictive threshold model for real-time abnormal gait detection

    Falls are critical events for human health due to the associated risk of physical and psychological injuries. Several fall-related systems have been developed in order to reduce injuries. Among them, fall-risk prediction systems are one of the most promising approaches, as they strive to predict a fall before its occurrence. One category of fall-risk prediction systems evaluates balance and muscle strength through clinical functional assessment tests, while other prediction systems investigate the recognition of abnormal gait patterns to predict a fall in real time. The main contribution of this paper is a nonlinear model of user gait combined with threshold-based classification to recognize abnormal gait patterns with low complexity and high accuracy. In addition, a dataset with realistic parameters is prepared to simulate abnormal walks and to evaluate fall prediction methods. The accelerometer and gyroscope sensors available in a smartphone were exploited to create the dataset. The proposed approach has been implemented and compared with state-of-the-art approaches, showing that it is able to predict an abnormal walk with higher accuracy (93.5%) and higher efficiency (up to 3.5 times faster) than other feasible approaches.
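The predict-then-threshold idea in the abstract can be sketched in a few lines: a model forecasts the next sample of the smartphone's acceleration-magnitude stream, and a sample whose prediction residual exceeds a threshold is flagged as abnormal. The simple two-sample blend and the threshold value below are illustrative placeholders, not the paper's fitted nonlinear model.

```python
def predict_next(history):
    """Toy one-step predictor of the acceleration-magnitude signal.

    Stand-in for the paper's nonlinear gait model: just a weighted
    blend of the two most recent samples.
    """
    return 0.7 * history[-1] + 0.3 * history[-2]

def detect_abnormal(signal, threshold=2.0):
    """Flag samples whose prediction residual exceeds a threshold.

    During a regular walk the residuals stay small; a sudden change
    in the acceleration magnitude (a candidate abnormal-gait event)
    produces a large residual and trips the threshold.
    """
    flags = []
    for t in range(2, len(signal)):
        residual = abs(signal[t] - predict_next(signal[:t]))
        flags.append(residual > threshold)
    return flags
```

On a flat 9.8 m/s² stream followed by a sudden jump to 20 m/s², only the jump sample is flagged, which is the real-time behaviour a fall-risk predictor needs.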

    Automatic Temporal Location and Classification of Human Actions Based on Optical Features

    This paper presents a method for automatic temporal location and recognition of human actions. The data are obtained from a motion capture system. They are then animated, and optical flow vectors are subsequently calculated. The system performs in two phases. The first phase employs nearest neighbour search to locate an action along the temporal axis, taking into account both the angle and length of the vectors, while the second classifies the action using artificial neural networks. Principal Component Analysis (PCA) plays a significant role in discarding correlated flow vectors. We perform a statistical analysis in order to achieve an efficient, adaptive and targeted PCA. This greatly improves the configuration of flow vectors used to train both the locating and classifying systems. Experimental results confirm the significance of the proposed method for locating and classifying a specific action from among a sequential combination of actions. Keywords: temporal location; classification; human actions; neural networks; principal component analysis.
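The PCA step above collapses correlated flow vectors into a few components that carry nearly all the variance. A generic SVD-based sketch follows; the function name and return signature are mine, and the paper's statistically adapted, targeted PCA goes further than this plain projection.

```python
import numpy as np

def pca_project(X, n_components):
    """Project samples onto the top principal components via SVD.

    X is (n_samples, n_features); correlated (redundant) feature
    dimensions collapse into a small number of components, which is
    how PCA discards correlated flow vectors before training.
    Returns the reduced data, the components, and the feature mean.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    return Xc @ components.T, components, mean
```

For data whose columns are perfectly correlated, a single component reconstructs the data exactly, so the discarded dimensions cost nothing.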

    Human identification from video using advanced gait recognition techniques

    The solutions proposed in this thesis contribute to improving gait recognition performance in practical scenarios, further enabling the adoption of gait recognition into real-world security and forensic applications that require identifying humans at a distance. Pioneering work has been conducted on frontal gait recognition using depth images, allowing gait to be integrated with biometric walkthrough portals. The effects of gait-challenging conditions, including clothing, carrying goods, and viewpoint, have been explored. Enhanced approaches are proposed for the segmentation, feature extraction, feature optimisation and classification elements, and state-of-the-art recognition performance has been achieved. A frontal depth gait database has been developed and made available to the research community for further investigation. Solutions are explored in the 2D and 3D domains using multiple image sources, and both domain-specific and modality-independent gait features are proposed.

    Human action recognition using spatial-temporal analysis.

    Masters Degree. University of KwaZulu-Natal, Durban. In the past few decades, human action recognition (HAR) from video has gained a lot of attention in the computer vision domain. The analysis of human activities in videos spans a variety of applications, including security and surveillance, entertainment, and the monitoring of the elderly. The task of recognizing human actions in any scenario is a difficult and complex one, characterized by challenges such as self-occlusion, noisy backgrounds and variations in illumination. However, the literature provides various techniques and approaches for action recognition which deal with these challenges. This dissertation focuses on a holistic approach to the human action recognition problem, with specific emphasis on spatial-temporal analysis. Spatial-temporal analysis is achieved by using the Motion History Image (MHI) approach to solve the human action recognition problem. Three variants of MHI are investigated: Original MHI, Modified MHI and Timed MHI. An MHI is a single image describing a silhouette's motion over a period of time. Brighter pixels in the resultant MHI show the most recent movement. One of the key problems of MHI is that it is not easy to know the conditions needed to obtain an MHI silhouette that will result in a high recognition rate. These conditions are often neglected and thus pose a problem for human action recognition systems, as they can affect overall performance. Two methods are proposed to solve the human action recognition problem and to show the conditions needed to obtain high recognition rates using the MHI approach. The first uses the concept of MHI with the Bag of Visual Words (BOVW) approach to recognize human actions. The second approach combines MHI with Local Binary Patterns (LBP). The Weizmann and KTH datasets are then used to validate the proposed methods.
Results from the experiments show promising recognition rates compared to some existing methods. The BOVW approach, used in combination with the three variants of MHI, achieved higher recognition rates than the LBP method. The original MHI method achieved the highest recognition rate of 87% on the Weizmann dataset, and an 81.6% recognition rate was achieved on the KTH dataset using the Modified MHI approach.
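The MHI construction the dissertation builds on can be sketched in one update step: pixels where motion is detected are set to a maximum value τ, and all other pixels decay toward zero, so brighter pixels mark more recent movement. The update rule below is the standard MHI formulation; the value τ = 30 (roughly one second at 30 fps) and the toy frames are illustrative assumptions.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=30):
    """One Motion History Image update step.

    motion_mask is a boolean frame from silhouette differencing.
    Moving pixels are set to tau; all others decay by 1 toward 0,
    so pixel brightness encodes recency of motion.
    """
    decayed = np.maximum(mhi - 1, 0)
    return np.where(motion_mask, tau, decayed)

# Toy 1x3 frame: motion at pixel 0 first, then at pixel 2.
mhi = np.zeros((1, 3), dtype=np.int32)
mhi = update_mhi(mhi, np.array([[True, False, False]]), tau=3)
mhi = update_mhi(mhi, np.array([[False, False, True]]), tau=3)
# The most recent motion (pixel 2) is now the brightest.
```

Running the two updates leaves the older motion partially decayed and the newest motion at full brightness, which is exactly the temporal layering the recognition features are computed from.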