
    Silhouette-based gait recognition using Procrustes shape analysis and elliptic Fourier descriptors

    This paper presents a gait recognition method that combines spatio-temporal motion characteristics with statistical and physical parameters (referred to as STM-SPP) of a human subject for classification, by analysing the shape of the subject's silhouette contours using Procrustes shape analysis (PSA) and elliptic Fourier descriptors (EFDs). STM-SPP uses spatio-temporal gait characteristics and physical parameters of the human body to resolve similar dissimilarity scores between probe and gallery sequences obtained by PSA. A part-based shape analysis using EFDs is also introduced to achieve robustness against carrying conditions. The classification results from PSA and EFDs are combined, with ties in ranking resolved by contour matching based on Hu moments. Experimental results show that STM-SPP outperforms several silhouette-based gait recognition methods.
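
    The paper's full pipeline (Procrustes alignment, part-based analysis, Hu-moment tie-breaking) is not reproduced here, but a minimal NumPy sketch of elliptic Fourier descriptors for a closed silhouette contour, the building block the abstract refers to, might look as follows; the function name and harmonic order are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def elliptic_fourier_descriptors(contour, order=10):
    """EFD coefficients (a_n, b_n, c_n, d_n) of a closed 2-D contour.

    contour: (N, 2) array of (x, y) boundary points of a silhouette,
    with no repeated consecutive points. Low-order harmonics capture
    the coarse body shape; higher orders add finer detail.
    """
    d = np.diff(contour, axis=0, append=contour[:1])   # wrap around the contour
    dt = np.sqrt((d ** 2).sum(axis=1))                 # segment lengths
    t = np.concatenate(([0.0], np.cumsum(dt)))         # arc length at each point
    T = t[-1]                                          # total perimeter

    coeffs = np.zeros((order, 4))
    for n in range(1, order + 1):
        c = np.cos(2 * n * np.pi * t / T)
        s = np.sin(2 * n * np.pi * t / T)
        k = T / (2 * n ** 2 * np.pi ** 2)
        coeffs[n - 1] = [
            k * np.sum(d[:, 0] / dt * np.diff(c)),     # a_n
            k * np.sum(d[:, 0] / dt * np.diff(s)),     # b_n
            k * np.sum(d[:, 1] / dt * np.diff(c)),     # c_n
            k * np.sum(d[:, 1] / dt * np.diff(s)),     # d_n
        ]
    return coeffs
```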

    Learning gender from human gaits and faces

    Computer vision based gender classification is an important component in visual surveillance systems. In this paper, we investigate gender classification from human gaits in image sequences, a relatively understudied problem. Since each modality, face or gait, has inherent weaknesses and limitations when considered in isolation, we further propose to fuse gait and face for improved gender discrimination. We exploit Canonical Correlation Analysis (CCA), a powerful tool that is well suited for relating two sets of measurements, to fuse the two modalities at the feature level. Experiments on a large dataset demonstrate that our multimodal gender recognition system achieves a recognition accuracy of 97.2%.
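
    The abstract gives no implementation details, but feature-level fusion with CCA can be sketched with scikit-learn as below; the feature dimensions, the random placeholder data, and the choice of a linear SVM for the final gender decision are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

# Placeholder feature matrices: one row per subject, columns are gait
# features (e.g. averaged silhouette pixels) and face features.
rng = np.random.default_rng(0)
gait_train = rng.standard_normal((200, 64))
face_train = rng.standard_normal((200, 32))
labels_train = rng.integers(0, 2, 200)        # 0 = female, 1 = male

# Project both modalities into a shared, maximally correlated subspace.
cca = CCA(n_components=16)
cca.fit(gait_train, face_train)
gait_c, face_c = cca.transform(gait_train, face_train)

# Feature-level fusion: concatenate the correlated projections and train
# any standard classifier on the fused vectors.
fused_train = np.concatenate([gait_c, face_c], axis=1)
clf = SVC(kernel="linear").fit(fused_train, labels_train)
```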

    Gait Recognition from Motion Capture Data

    Gait recognition from motion capture data, as a pattern classification discipline, can be improved by the use of machine learning. This paper contributes to the state of the art with a statistical approach for extracting robust gait features directly from raw data by a modification of Linear Discriminant Analysis with the Maximum Margin Criterion. Experiments on the CMU MoCap database show that the suggested method outperforms thirteen relevant methods based on geometric features as well as a method that learns the features by a combination of Principal Component Analysis and Linear Discriminant Analysis. The methods are evaluated in terms of the distribution of biometric templates in their respective feature spaces, expressed through a number of class separability coefficients and classification metrics. Results also indicate a high portability of the learned features; that is, we can learn what aspects of walk people generally differ in and extract those as general gait features. Recognizing people without needing group-specific features is convenient, as particular people might not always provide annotated learning data. As a contribution to reproducible research, our evaluation framework and database have been made publicly available. This research makes motion capture technology directly applicable to human recognition. Comment: Preprint. Full paper accepted at the ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), special issue on Representation, Analysis and Recognition of 3D Humans. 18 pages. arXiv admin note: substantial text overlap with arXiv:1701.00995, arXiv:1609.04392, arXiv:1609.0693
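
    The paper's specific modification of LDA is not spelled out in the abstract, but a generic Maximum Margin Criterion projection, which replaces LDA's ratio objective with the difference of the scatter matrices and so avoids inverting S_w, can be sketched as follows (illustrative only, not the authors' exact method).

```python
import numpy as np

def mmc_projection(X, y, n_components):
    """Project onto the leading eigenvectors of (S_b - S_w).

    X: (n_samples, n_features) gait feature vectors, y: identity labels.
    Returns an (n_features, n_components) projection matrix.
    """
    overall_mean = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))      # between-class scatter
    Sw = np.zeros_like(Sb)                       # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * diff @ diff.T
        Sw += (Xc - mc).T @ (Xc - mc)
    # (S_b - S_w) is symmetric, so eigh gives real eigenvalues/eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(Sb - Sw)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order]
```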

    Gait Recognition By Walking and Running: A Model-Based Approach

    Gait is an emerging biometric for which some techniques, mainly holistic, have been developed to recognise people by their walking patterns. However, the possibility of recognising people by the way they run remains largely unexplored. The new analytical model presented in this paper is based on the biomechanics of walking and running, and will serve as the foundation of an automatic person recognition system that is invariant to these distinct gaits. A bilateral and dynamically coupled oscillator is the key concept underlying this work. Analysis shows that this new model can be used to automatically describe walking and running subjects without parameter selection. Temporal template matching that takes into account the whole sequence of a gait cycle is applied to extract the angles of thigh and lower-leg rotation. The phase-weighted magnitudes of the lower-order Fourier components of these rotations form the gait signature. Classification of walking and running subjects is performed using the k-nearest-neighbour classifier. Recognition rates are similar to those achieved by other techniques on a similarly sized database. Future work will investigate feature-set selection to improve the recognition rate and will determine the invariance attributes, for inter- and intra-class variation, of both walking and running.
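
    As a rough illustration of the kind of signature the abstract describes, the phase-weighted magnitudes of the low-order Fourier components of the thigh and lower-leg rotation angles can be computed as below; the harmonic count and the nearest-neighbour setup are assumptions, not the authors' exact parameters.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def gait_signature(thigh_angles, lower_leg_angles, n_harmonics=5):
    """Phase-weighted magnitudes of the low-order Fourier components of
    the thigh and lower-leg rotation angles over one gait cycle."""
    features = []
    for series in (thigh_angles, lower_leg_angles):
        spectrum = np.fft.rfft(series)[1:n_harmonics + 1]        # skip the DC term
        features.append(np.abs(spectrum) * np.angle(spectrum))   # phase-weighted magnitude
    return np.concatenate(features)

# Hypothetical usage: one signature per gait cycle, labelled by subject.
#   X = np.vstack([gait_signature(t, l) for t, l in cycles])
#   knn = KNeighborsClassifier(n_neighbors=1).fit(X, subject_ids)
```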

    Multi-view gait recognition on curved

    Appearance changes due to changes in viewing angle cause difficulties for most gait recognition methods. In this paper, we propose a new approach for multi-view recognition that makes it possible to recognize people walking on curved paths. The recognition is based on 3D angular analysis of the movement of the walking human. A coarse-to-fine gait signature represents local variations of the angular measurements over time. A Support Vector Machine is used for classification, and a majority-vote policy over a sliding temporal window is used to smooth and reinforce the classification results. The proposed approach has been experimentally validated on the publicly available “Kyushu University 4D Gait Database”.
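
    The sliding-window majority-vote smoothing mentioned above is a simple, generic step; a sketch (with an arbitrary window length) is shown here, applied to the per-frame labels an SVM would produce.

```python
from collections import Counter, deque

def smooth_predictions(frame_labels, window=15):
    """Replace each per-frame prediction by the most frequent label among
    the last `window` frames, suppressing isolated misclassifications."""
    recent = deque(maxlen=window)
    smoothed = []
    for label in frame_labels:
        recent.append(label)
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed

# smooth_predictions(['A', 'A', 'B', 'A', 'A'], window=3) -> ['A', 'A', 'A', 'A', 'A']
```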

    Parkinson disease gait classification based on machine learning approach / Hany Hazfiza Manap

    The aim of this thesis is to develop a Parkinson gait recognition technique that is able to evaluate and distinguish gait deviations experienced by Parkinson Disease (PD) patients from the normal pattern. The research is divided into two phases, namely gait analysis of PD compared to normal subjects, followed by gait classification using a machine learning approach. Firstly, two types of statistical test are conducted: the independent t-test and Pearson’s correlation test. A raw gait database consisting of four basic gait features, five kinetic gait features and twelve kinematic gait features is acquired from prior walking experiments on both PD and normal subjects. Based on the statistical analysis, significant differences between PD and normal gait patterns are observed for four features: step length and walking speed from the basic features, maximum extension of the hip from the kinematic features, and maximum horizontal push-off force from the kinetic features. These significant features are therefore appropriate for recognition of PD gait. Next, Principal Component Analysis (PCA) is used for feature extraction from the basic, kinetic and kinematic parameters, followed by normalization on an intra-group as well as an inter-group basis. To evaluate the effectiveness of each gait feature category, an Artificial Neural Network (ANN), a Support Vector Machine (SVM) and a Naive Bayes classifier (NBC) are chosen as classifiers. The results demonstrate that for the ANN classifier, the fusion of basic and kinematic gait features with intra-group normalization attains the best performance, with 100% accuracy. For the SVM with a polynomial kernel function, the best performance, again 100% accuracy, is attained with the basic gait features under intra-group normalization, whilst the NBC achieves its best accuracy of 93.75% with the fusion of kinetic and kinematic gait features under intra-group normalization. Overall, the results prove the ability of the machine classifiers to separate PD gait patterns from normal gait patterns, with the basic spatiotemporal features appearing to be the most reliable for this purpose given their performance across all three classifiers.
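
    The thesis evaluates ANN, SVM and NBC classifiers separately; as a hedged sketch of just one of those configurations, a PCA-plus-polynomial-SVM pipeline over the gait features could be set up with scikit-learn as follows (the loader name, variance threshold and cross-validation scheme are placeholders, not the thesis protocol).

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical design matrix: one row per subject, columns are the basic,
# kinetic and kinematic gait features; y is 1 for PD, 0 for control.
# X, y = load_gait_features()

pipeline = make_pipeline(
    StandardScaler(),           # stands in for the intra-group normalization
    PCA(n_components=0.95),     # keep the components explaining 95% of variance
    SVC(kernel="poly", degree=3),
)
# scores = cross_val_score(pipeline, X, y, cv=5)
```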

    Spatio-temporal alignment and hyperspherical radon transform for 3D gait recognition in multi-view environments

    This paper presents a view-invariant approach to gait recognition in multi-camera scenarios that exploits a joint spatio-temporal data representation and analysis. First, multi-view information is employed to generate a 3D voxel reconstruction of the scene under study. The analyzed subject is tracked, and its centroid and orientation allow the associated volume to be recentred and aligned, thus obtaining a representation that is invariant to translation, rotation and scaling. The temporal periodicity of the walking cycle is extracted to align the input data in the time domain. Finally, the Hyperspherical Radon Transform is presented as an efficient tool to obtain features from spatio-temporal gait templates for classification purposes. Experimental results prove the validity and robustness of the proposed method for gait recognition tasks with several covariates.
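
    One of the steps mentioned above, extracting the temporal periodicity of the walking cycle, is commonly done by autocorrelating a 1-D gait signal; a minimal sketch of that generic step is given below, with the choice of signal (e.g. foreground voxel count per frame) and the minimum lag left as assumptions rather than the paper's method.

```python
import numpy as np

def gait_period(signal, min_lag=10):
    """Estimate the walking-cycle period (in frames) as the lag of the first
    dominant autocorrelation peak of a 1-D gait signal."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]    # keep lags >= 0
    ac /= ac[0]                                          # normalize by zero-lag energy
    return int(min_lag + np.argmax(ac[min_lag:]))
```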

    Robust gait recognition under variable covariate conditions

    Gait is a weak biometric when compared to face, fingerprint or iris because it can be easily affected by various conditions. These are known as the covariate conditions and include clothing, carrying, speed, shoes and view, among others. In the presence of variable covariate conditions gait recognition is a hard problem yet to be solved, with no working system reported. In this thesis, a novel gait representation, the Gait Flow Image (GFI), is proposed to extract more discriminative information from a gait sequence. GFI extracts the relative motion of body parts in different directions in separate motion descriptors. Compared to the existing model-free gait representations, GFI is more discriminative and robust to changes in covariate conditions. In this thesis, gait recognition approaches are evaluated without the assumption of cooperative subjects, i.e. both the gallery and the probe sets consist of gait sequences under different and unknown covariate conditions. The results indicate that the performance of the existing approaches drops drastically under this more realistic set-up. It is argued that selecting the gait features which are invariant to changes in covariate conditions is the key to developing a gait recognition system without subject cooperation. To this end, the Gait Entropy Image (GEnI) is proposed to perform automatic feature selection on each pair of gallery and probe gait sequences. Moreover, an Adaptive Component and Discriminant Analysis is formulated which seamlessly integrates the feature selection method with subspace analysis for fast and robust recognition. Among the various factors that affect the performance of gait recognition, change in viewpoint poses the biggest problem and is treated separately. A novel approach to address this problem is proposed in this thesis by using the Gait Flow Image in a cross-view gait recognition framework with the view angle of a probe gait sequence unknown. A Gaussian Process classification technique is formulated to estimate the view angle of each probe gait sequence. To measure the similarity of gait sequences across view angles, the correlation of gait sequences from different views is modelled using Canonical Correlation Analysis and the correlation strength is used as a similarity measure. This differs from existing approaches, which reconstruct gait features in different views through 2D view transformation or 3D calibration. Without explicit reconstruction, the proposed method can cope with feature mismatch across views and is more robust against feature noise.
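
    The Gait Entropy Image proposed in the thesis is commonly described as the per-pixel Shannon entropy of the aligned binary silhouettes over a gait cycle; a minimal NumPy sketch of that formulation is given below (the epsilon guard and log base are illustrative choices).

```python
import numpy as np

def gait_entropy_image(silhouettes, eps=1e-8):
    """Per-pixel Shannon entropy of binary, size-normalized and centred
    silhouettes over one gait cycle.

    silhouettes: (n_frames, H, W) array with values in {0, 1}.
    Dynamic regions (legs, swinging arms) get high entropy; static regions
    (torso, carried objects) get low entropy.
    """
    p = silhouettes.mean(axis=0)        # per-pixel foreground probability
    return -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
```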

    Automatic and real-time locomotion mode recognition of a humanoid robot

    Real-time locomotion mode recognition can potentially be applied in gait analysis as a diagnostic tool or as a strategy to control robotic motion. This research aimed to develop an automatic, accurate and time-effective tool to recognize, in real time, the locomotion mode being performed by a humanoid robot. The proposed strategy should also generalize across different walkers and walking conditions. For these purposes, we designed a strategy to identify, in an offline phase, suitable features and classification models for the real-time recognition. We explored several classification models based on two machine learning approaches, using the features previously selected by principal component analysis and a genetic algorithm (GA). The validation was carried out for distinct walking directions and speeds of DARwIn-OP. The offline analysis suggests that the best-performing models are the ones created by weighted k-nearest neighbors (KNN), fine KNN, and cubic support vector machine using 2 features selected by the GA. Results from the real-time implementation highlight that weighted KNN exhibits higher recognition performance (accuracy > 99.15%) and a lower elapsed time in the recognition process (89 ms) compared to the state of the art. The proposed recognition tool proved to be cost-effective and highly accurate for real-time gait analysis under different walking conditions.
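
    The weighted k-NN model highlighted above corresponds to the standard distance-weighted k-nearest-neighbour classifier; a minimal scikit-learn sketch is shown below, with the neighbour count, the feature loader and the evaluation protocol being placeholders rather than the values used in the paper.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical feature matrix: one row per gait window of DARwIn-OP sensor
# data reduced to the GA-selected features; y holds the locomotion modes.
# X, y = load_locomotion_features()

# weights="distance" makes closer neighbours count more in the vote,
# which is the usual meaning of a weighted k-NN classifier.
knn = KNeighborsClassifier(n_neighbors=10, weights="distance")
# print(cross_val_score(knn, X, y, cv=5).mean())
```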