
    The effect of time on gait recognition performance

    Many studies have shown that it is possible to recognize people by the way they walk. However, there are a number of covariate factors that affect recognition performance. The time elapsed between capturing the gallery and the probe has been reported to affect recognition the most. To date, no study has shown the isolated effect of time, irrespective of other covariates. Here we present the first principled study that examines the effect of elapsed time on gait recognition. Using empirical evidence, we show for the first time that elapsed time does not affect recognition significantly in the short to medium term. By controlling the clothing worn by the subjects and the environment, a Correct Classification Rate (CCR) of 95% has been achieved over 9 months, on a dataset of 2280 gait samples. Our results show that gait can be used as a reliable biometric over time and at a distance. We have created a new multimodal temporal database to enable the research community to investigate various gait and face covariates. We have also investigated the effect of different types of clothing, variations in speed, and footwear on recognition performance. We have demonstrated that clothing drastically affects performance regardless of elapsed time, and significantly more than any of the other covariates considered here. The research therefore suggests a move towards developing appearance-invariant recognition algorithms.
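
    A minimal sketch of how a CCR figure like the one above is computed: rank-1 nearest-neighbour matching of probe samples against the gallery, counting the fraction of probes whose closest gallery sample belongs to the same subject. The feature extraction step is not specified here; the sketch assumes gait samples have already been reduced to fixed-length feature vectors, and the data below are random placeholders rather than the paper's dataset.

    import numpy as np

    def ccr(gallery_feats, gallery_ids, probe_feats, probe_ids):
        """Fraction of probes whose nearest gallery sample (Euclidean) shares their identity."""
        correct = 0
        for feat, true_id in zip(probe_feats, probe_ids):
            dists = np.linalg.norm(gallery_feats - feat, axis=1)      # distance to every gallery sample
            correct += int(gallery_ids[np.argmin(dists)] == true_id)  # rank-1 match
        return correct / len(probe_ids)

    # Toy usage with random placeholder features (20 subjects, 64-D vectors)
    rng = np.random.default_rng(0)
    gallery, gallery_ids = rng.normal(size=(100, 64)), np.arange(100) % 20
    probes, probe_ids = rng.normal(size=(50, 64)), np.arange(50) % 20
    print(f"CCR = {ccr(gallery, gallery_ids, probes, probe_ids):.2%}")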

    On reducing the effect of silhouette quality on individual gait recognition: a feature fusion approach

    The quality of the extracted gait silhouettes can hinder the performance and practicability of gait recognition algorithms. In this paper, we propose a framework that integrates a feature fusion approach to improve the recognition rate in this situation. Specifically, we first generate a dataset containing gait silhouettes of various qualities based on the CASIA Dataset B. We then fuse gallery data of different qualities and project the data into embedded subspaces. We perform classification based on the Euclidean distances between fused gallery features and probe features. Experimental results show that the proposed framework provides important improvements in recognition rate.
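
    A rough sketch of the fuse-then-match pipeline described above. The plain PCA projection standing in for the embedded subspace and the per-subject averaging used as the fusion rule are assumptions, not the paper's exact operators; only the final step (Euclidean nearest neighbour between fused gallery features and a probe feature) follows the description directly.

    import numpy as np

    def pca_fit(X, n_components):
        """Fit an assumed linear embedding (plain PCA) on stacked gallery features."""
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_components]            # project with (x - mean) @ components.T

    def fuse_gallery(embedded, subject_ids):
        """Fuse gallery features of the same subject captured at different silhouette
        qualities by averaging them in the embedded subspace (assumed fusion rule)."""
        subjects = np.unique(subject_ids)
        fused = np.stack([embedded[subject_ids == s].mean(axis=0) for s in subjects])
        return subjects, fused

    def classify(probe_feat, mean, components, subjects, fused):
        """Assign the probe to the subject with the closest fused gallery feature."""
        probe_emb = (probe_feat - mean) @ components.T
        return subjects[np.argmin(np.linalg.norm(fused - probe_emb, axis=1))]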

    The influence of segmentation on individual gait recognition

    The quality of the extracted gait silhouettes can hinder the performance and practicability of gait recognition algorithms. In this paper, we analyse the influence of silhouette quality caused by segmentation disparities, and propose a feature fusion strategy to improve recognition accuracy. Specifically, we first generate a dataset containing gait silhouettes of various qualities produced by different segmentation algorithms, based on the CASIA Dataset B. We then project the data into an embedded subspace and fuse gallery features of different quality levels. To this end, we propose a fusion strategy based on the least-squares QR-decomposition (LSQR) method. We perform classification based on the Euclidean distance between fused gallery features and probe features. Evaluation results show that the proposed fusion strategy attains important improvements in recognition accuracy.
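
    The abstract names a least-squares QR-decomposition (LSQR) based fusion but does not spell out the objective, so the sketch below is only one plausible reading: the fused gallery feature is taken to be the linear combination of quality-level features that best reconstructs a clean reference feature, with the weights found by SciPy's LSQR solver. The variable names and the reference-feature setup are assumptions.

    import numpy as np
    from scipy.sparse.linalg import lsqr

    def fuse_with_lsqr(quality_feats, reference_feat):
        """quality_feats: (n_qualities, d) features of one subject at several segmentation
        qualities; reference_feat: (d,) clean feature. Returns the fused feature and weights."""
        A = quality_feats.T                   # (d, n_qualities) design matrix
        weights = lsqr(A, reference_feat)[0]  # least-squares combination weights via LSQR
        return A @ weights, weights

    # Toy usage with random placeholder features
    rng = np.random.default_rng(1)
    quality_feats = rng.normal(size=(3, 128))
    reference = quality_feats.mean(axis=0) + 0.05 * rng.normal(size=128)
    fused, weights = fuse_with_lsqr(quality_feats, reference)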

    Gait recognition based on shape and motion analysis of silhouette contours

    This paper presents a three-phase gait recognition method that analyses the spatio-temporal shape and dynamic motion (STS-DM) characteristics of a human subject’s silhouettes to identify the subject in the presence of most of the challenging factors that affect existing gait recognition systems. In phase 1, phase-weighted magnitude spectra of the Fourier descriptor of the silhouette contours at ten phases of a gait period are used to analyse the spatio-temporal changes of the subject’s shape. A component-based Fourier descriptor based on anatomical studies of the human body is used to achieve robustness against shape variations caused by all common types of small carrying conditions, with objects held in folded hands, at the subject’s back or in an upright position. In phase 2, a full-body shape and motion analysis is performed by fitting ellipses to contour segments at ten phases of a gait period and using histogram matching with the Bhattacharyya distance between ellipse parameters as the dissimilarity score. In phase 3, dynamic time warping is used to analyse the angular rotation pattern of the subject’s leading knee, with consideration of arm swing, over a gait period to achieve identification that is invariant to walking speed, limited clothing variations, hairstyle changes and shadows under the feet. The match scores generated in the three phases are fused using weight-based score-level fusion for robust identification in the presence of missing and distorted frames, and occlusion in the scene. Experimental analyses on various publicly available datasets show that STS-DM outperforms several state-of-the-art gait recognition methods.
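
    The three per-phase matchers above are too involved to reproduce here, but the final weight-based score-level fusion step admits a compact sketch. The min-max normalisation and the example weights below are assumptions; the per-phase dissimilarity scores stand in for the outputs of the Fourier-descriptor, ellipse-fitting and dynamic-time-warping matchers.

    import numpy as np

    def fuse_scores(phase_scores, weights):
        """phase_scores: (3, n_gallery) dissimilarity scores of one probe against every
        gallery subject, one row per phase; weights: (3,) phase weights summing to 1.
        Returns the index of the best-matching gallery subject."""
        lo = phase_scores.min(axis=1, keepdims=True)
        hi = phase_scores.max(axis=1, keepdims=True)
        normalised = (phase_scores - lo) / (hi - lo + 1e-12)  # min-max normalisation per phase (assumed)
        fused = weights @ normalised                          # weighted sum of the three phase scores
        return int(np.argmin(fused))

    # Dummy scores for 5 gallery subjects; subject 1 has the lowest dissimilarity in every phase
    scores = np.array([[0.8, 0.2, 0.5, 0.9, 0.4],
                       [0.7, 0.3, 0.6, 0.8, 0.5],
                       [0.9, 0.1, 0.4, 0.7, 0.6]])
    print(fuse_scores(scores, weights=np.array([0.4, 0.3, 0.3])))  # -> 1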

    Human motion analysis and simulation tools: a survey

    Computational systems that identify objects represented in image sequences and track their motion in a fully automatic manner, enabling detailed analysis and simulation of the motion involved, are extremely relevant in several fields of our society. In particular, the analysis and simulation of human motion has a wide spectrum of relevant applications with a manifest social and economic impact. In fact, the use of human motion data is fundamental in a broad number of domains (e.g. sports, rehabilitation, robotics, surveillance, gesture-based user interfaces, etc.). Consequently, many relevant engineering software applications have been developed with the purpose of analyzing and/or simulating human motion. This chapter presents a detailed, broad and up-to-date survey of motion simulation and/or analysis software packages that have been developed either by the scientific community or by commercial entities. Moreover, a main contribution of this chapter is an effective framework to classify and compare motion simulation and analysis tools.

    Wearable device-based gait recognition using angle embedded gait dynamic images and a convolutional neural network

    The widespread installation of inertial sensors in smartphones and other wearable devices provides a valuable opportunity to identify people by analyzing their gait patterns, in either cooperative or non-cooperative circumstances. However, it is still a challenging task to reliably extract discriminative features for gait recognition from the noisy and complex data sequences collected by casually worn wearable devices such as smartphones. To cope with this problem, we propose a novel image-based gait recognition approach using a Convolutional Neural Network (CNN) without the need to manually extract discriminative features. The CNN’s input image, which is encoded straightforwardly from the inertial sensor data sequences, is called the Angle Embedded Gait Dynamic Image (AE-GDI). The AE-GDI is a new two-dimensional representation of gait dynamics which is invariant to rotation and translation. The performance of the proposed approach in gait authentication and gait labeling is evaluated using two datasets: (1) the McGill University dataset, which is collected under realistic conditions; and (2) the Osaka University dataset, which has the largest number of subjects. Experimental results show that the proposed approach achieves competitive recognition accuracy compared with existing approaches and provides an effective parametric solution for identifying a large number of subjects by their gait patterns.
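
    The abstract does not give the network architecture or the AE-GDI dimensions, so the sketch below is a generic stand-in: a small PyTorch CNN that maps an assumed 1x64x64 gait-dynamics image to one logit per enrolled subject. The layer sizes and the two-convolution-block design are assumptions; only the overall idea (a 2-D image encoding of inertial gait data fed to a CNN classifier) follows the description above.

    import torch
    import torch.nn as nn

    class GaitCNN(nn.Module):
        def __init__(self, num_subjects: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
                nn.Linear(128, num_subjects),  # one logit per enrolled subject
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # Forward pass on a dummy batch of 8 single-channel 64x64 images
    model = GaitCNN(num_subjects=20)
    logits = model(torch.randn(8, 1, 64, 64))  # -> shape (8, 20)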
