12 research outputs found

    Kinect-Based Human Gait Identification under Different Covariate Factors

    Get PDF
    Introduction: Rapid advances in biometrics technology have made human gait identification/recognition available in a variety of applications, making it easier to deploy for security and surveillance. Driven by the rise in terrorist attacks over the last ten years, research has focused on biometric traits that can recognise human beings from a distance. Gait biometrics is of particular interest because it is unobtrusive and requires lower image/video quality than other biometric traits. Materials and Methods: In this paper we investigate Kinect-based gait recognition using non-standard gait sequences, examining different scenarios to highlight the challenges they pose. Gait signatures are extracted from the 20 joint points of the human body provided by a Microsoft Kinect sensor. Results and Discussion: The feature is constructed by calculating the distance between every pair of the 20 joint points, and is known as the Euclidean Distance Feature (EDF). The experiments cover five scenarios, and a Linear Discriminant Classifier (LDC) is used to test the performance of the proposed method. Conclusions: The results of the experiments indicate that the proposed method outperforms previous work in all scenarios.
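The abstract does not give implementation details, but the Euclidean Distance Feature it describes can be sketched as the pairwise distances between all 20 Kinect joints, giving 20 × 19 / 2 = 190 values per frame. The function name below is illustrative, not the authors' code.

```python
import numpy as np

def euclidean_distance_feature(joints):
    """Pairwise distances between all joints of one skeleton frame.

    joints: (20, 3) array of Kinect joint positions (x, y, z).
    Returns a vector of 20 * 19 / 2 = 190 distances.
    """
    n = joints.shape[0]
    # Difference vector between every pair of joints, then its L2 norm.
    diffs = joints[:, None, :] - joints[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Keep only the upper triangle: each unordered pair counted once.
    iu = np.triu_indices(n, k=1)
    return dists[iu]

frame = np.random.rand(20, 3)  # one synthetic Kinect skeleton frame
edf = euclidean_distance_feature(frame)
print(edf.shape)  # (190,)
```

Stacking these per-frame vectors over a walking sequence would give the gait signature fed to the classifier.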

    Gait Recognition from Motion Capture Data

    Full text link
    Gait recognition from motion capture data, as a pattern classification discipline, can be improved by the use of machine learning. This paper contributes to the state of the art with a statistical approach for extracting robust gait features directly from raw data by a modification of Linear Discriminant Analysis with Maximum Margin Criterion. Experiments on the CMU MoCap database show that the suggested method outperforms thirteen relevant methods based on geometric features, as well as a method that learns the features by a combination of Principal Component Analysis and Linear Discriminant Analysis. The methods are evaluated in terms of the distribution of biometric templates in their respective feature spaces, expressed in a number of class separability coefficients and classification metrics. Results also indicate high portability of the learned features; that is, we can learn which aspects of walk people generally differ in and extract those as general gait features. Recognizing people without needing group-specific features is convenient, as particular people might not always provide annotated learning data. As a contribution to reproducible research, our evaluation framework and database have been made publicly available. This research makes motion capture technology directly applicable to human recognition. (Preprint; full paper accepted at the ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), special issue on Representation, Analysis and Recognition of 3D Humans.)
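The paper's exact formulation is not reproduced here, but the Maximum Margin Criterion modification of LDA can be sketched as follows: instead of maximizing tr(S_w⁻¹ S_b) as in classical LDA, MMC maximizes tr(S_b − S_w), so the projection comes from the top eigenvectors of (S_b − S_w) and no inversion of the within-class scatter matrix is needed. Function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def mmc_transform(X, y, n_components):
    """Learn a linear feature map by the Maximum Margin Criterion.

    X: (n_samples, n_features) raw gait data, y: (n_samples,) labels.
    Returns a (n_features, n_components) projection matrix.
    """
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    # (Sb - Sw) is symmetric: eigh gives ascending eigenvalues,
    # so take the eigenvectors of the largest ones.
    vals, vecs = np.linalg.eigh(Sb - Sw)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order]
```

Avoiding the S_w inverse is what makes this variant robust when the feature dimension exceeds the number of training samples, a common situation with raw MoCap data.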

    Walker-Independent Features for Gait Recognition from Motion Capture Data

    Get PDF
    MoCap-based human identification, as a pattern recognition discipline, can be optimized using a machine learning approach. Yet in some applications, such as video surveillance, new identities can appear on the fly, and labeled data for all encountered people may not always be available. This work introduces the concept of learning walker-independent gait features directly from raw joint coordinates by a modification of Fisher's Linear Discriminant Analysis with Maximum Margin Criterion. Our new approach shows not only that these features can discriminate people other than those they were learned on, but also that the number of learning identities can be much smaller than the number of walkers encountered in real operation.

    An Evaluation Framework and Database for MoCap-Based Gait Recognition Methods

    Get PDF
    As a contribution to reproducible research, this paper presents a framework and a database to improve the development, evaluation and comparison of methods for gait recognition from Motion Capture (MoCap) data. The evaluation framework provides implementation details and source code for state-of-the-art human-interpretable geometric features, as well as for our own approaches, where gait features are learned by a modification of Fisher's Linear Discriminant Analysis with the Maximum Margin Criterion and by a combination of Principal Component Analysis and Linear Discriminant Analysis. It includes a description and source code of a mechanism for evaluating four class separability coefficients of the feature space and four rank-based classifier performance metrics. The framework also contains a tool for learning a custom classifier and for classifying a custom query on a custom gallery. We provide an experimental database, along with source code for its extraction from the general CMU MoCap database.

    Skeleton based gait recognition for long and baggy clothes

    Get PDF
    Human gait is a significant biometric feature used to identify people by their style of walking. Gait offers recognition from a distance at low resolution while requiring no user interaction, whereas other biometrics are likely to require a certain level of interaction. In this paper, a human gait recognition method is presented to identify people wearing long, baggy clothes such as the Thobe and Abaya. A Microsoft Kinect sensor is used as a tool to establish a skeleton-based gait database. The skeleton joint positions are obtained and used to create five datasets, each containing a different combination of joints, to explore their effectiveness. An evaluation experiment was carried out with 20 walking subjects, each contributing 25 walking sequences in total. The results achieved good recognition rates of up to 97%.

    Gait Authentication Using Dynamic Features: A Performance Comparison of RNNs and SVMs

    Get PDF
    This paper presents a gait recognition system that uses recurrent neural networks (RNNs) and support vector machines (SVMs) to identify individuals. Our system extracts spatiotemporal features, namely the distances between the waist and various joint positions obtained with a Kinect sensor; these features are invariant for a given walking subject. To verify system performance, we conducted tests using data from 12 individuals, divided into training and test datasets. The RNNs and SVMs were trained for classification on the training dataset. SVMs achieved an average accuracy of over 99% on the test dataset, whereas the average accuracy of the RNNs was 94%.
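The waist-to-joint distance features described above can be sketched per frame as follows. The choice of index 0 for the waist reflects the Kinect v1 skeleton layout (HIP_CENTER) and is an assumption here, as the abstract does not specify the joint indexing.

```python
import numpy as np

WAIST = 0  # assumed: Kinect v1 HIP_CENTER index, taken as the "waist"

def waist_joint_distances(frames):
    """Per-frame distances from the waist to every other joint.

    frames: (n_frames, 20, 3) sequence of Kinect skeleton frames.
    Returns an (n_frames, 19) spatiotemporal feature matrix.
    """
    waist = frames[:, WAIST:WAIST + 1, :]      # (n_frames, 1, 3)
    others = np.delete(frames, WAIST, axis=1)  # (n_frames, 19, 3)
    return np.linalg.norm(others - waist, axis=-1)
```

Each row of the result would then be one time step of the sequence fed to the RNN, or the rows flattened/pooled into a fixed-length vector for the SVM.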

    Human Gait Recognition from Motion Capture Data in Signature Poses

    Get PDF
    Most contributions to the field of structure-based human gait recognition have been made through the design of extraordinary gait features. Many research groups that address this topic introduce a unique combination of gait features, select a couple of well-known object classifiers, and test some variations of their methods on their custom Kinect databases. For a practical system, it is not necessary to invent an ideal gait feature -- many good geometric features have already been designed -- but to smartly process the data at our disposal. This work proposes a gait recognition method without the design of novel gait features; instead, we suggest an effective and highly efficient way of processing known types of features. Our method extracts a couple of joint angles from two signature poses within a gait cycle to form a gait pattern descriptor, and classifies the query subject with the baseline 1-NN classifier. Not only are these poses distinctive enough, they also rarely accommodate motion irregularities that would result in confusion of identities. We experimentally demonstrate that our gait recognition method outperforms other relevant methods in terms of recognition rate and computational complexity. Evaluations were performed on an experimental database that precisely simulates a street-level video surveillance environment.
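The two building blocks this abstract relies on, a joint angle at a signature pose and the baseline 1-NN classifier, are standard and can be sketched as below; the exact joints and descriptor layout are the paper's, not shown here.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b formed by the segments b->a and b->c, in radians."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against tiny floating-point overshoot outside [-1, 1].
    return np.arccos(np.clip(cos, -1.0, 1.0))

def nn_classify(query, gallery, labels):
    """Baseline 1-NN: return the label of the closest gallery descriptor."""
    d = np.linalg.norm(gallery - query, axis=1)
    return labels[int(np.argmin(d))]
```

A descriptor would be the vector of such angles collected at the two signature poses, and `nn_classify` compares it against the gallery of enrolled subjects.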

    Two Hand Gesture Based 3D Navigation in Virtual Environments

    Get PDF
    Natural interaction is gaining popularity due to its simple, attractive, and realistic nature, which realizes direct Human Computer Interaction (HCI). In this paper, we present a novel two-hand-gesture-based interaction technique for 3-dimensional (3D) navigation in Virtual Environments (VEs). The system uses computer vision techniques to detect hand gestures (colored thumbs) in the real scene and performs different navigation tasks (forward, backward, up, down, left, and right) in the VE. The proposed technique also allows users to control speed efficiently during navigation. The technique was implemented in a VE for experimental purposes, and forty (40) participants took part in the experimental study. Experiments revealed that the proposed technique is feasible, easy to learn and use, and imposes low cognitive load on users. Finally, gesture recognition engines were used to assess the accuracy and performance of the proposed gestures: kNN achieved a higher accuracy rate (95.7%) than SVM (95.3%), and also outperformed SVM in training time (3.16 secs vs. 6.40 secs) and prediction speed (6600 obs/sec vs. 2900 obs/sec).
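The kNN engine used to score the gesture sets above can be sketched as a distance sort plus a majority vote; the feature representation of a gesture and the value of k are not given in the abstract, so both are assumptions here.

```python
import numpy as np
from collections import Counter

def knn_predict(query, X, y, k=3):
    """k-NN with majority vote over the k nearest training gestures.

    query: (n_features,) gesture descriptor; X: (n_samples, n_features)
    training descriptors; y: (n_samples,) gesture labels.
    """
    idx = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    return Counter(y[idx].tolist()).most_common(1)[0][0]
```

The reported speed advantage of kNN at prediction time depends heavily on dataset size, since every query is compared against all training samples.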