The effect of time on gait recognition performance
Many studies have shown that it is possible to recognize people by the way they walk. However, a number of covariate factors affect recognition performance, and the time elapsed between capturing the gallery and the probe has been reported to affect recognition the most. To date, no study has isolated the effect of time from other covariates. Here we present the first principled study that examines the effect of elapsed time on gait recognition. Using empirical evidence we show for the first time that elapsed time does not significantly affect recognition in the short to medium term. By controlling the clothing worn by the subjects and the environment, a Correct Classification Rate (CCR) of 95% has been achieved over 9 months, on a dataset of 2280 gait samples. Our results show that gait can be used as a reliable biometric over time and at a distance. We have created a new multimodal temporal database to enable the research community to investigate various gait and face covariates. We have also investigated the effect of different types of clothing, variations in speed, and footwear on recognition performance. We have demonstrated that clothing drastically affects performance regardless of elapsed time, and significantly more than any of the other covariates considered here. The research therefore suggests a move towards developing appearance-invariant recognition algorithms.
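The CCR quoted above is a simple ratio; a minimal sketch of the metric, assuming rank-1 identification (each probe is assigned the identity of its single best gallery match):

```python
def correct_classification_rate(predicted, actual):
    """CCR: fraction of probe samples whose top gallery match has the
    correct identity. `predicted` and `actual` are parallel lists of
    identity labels (one pair per probe sample)."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# e.g. 3 of 4 probes matched correctly -> CCR = 0.75
ccr = correct_classification_rate([1, 2, 3, 4], [1, 2, 3, 5])
```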
Covariate conscious approach for Gait recognition based upon Zernike moment invariants
Gait recognition, i.e. identification of an individual from his/her walking pattern, is an emerging field. While existing gait recognition techniques perform satisfactorily in normal walking conditions, their performance tends to suffer drastically with variations in clothing and carrying conditions. In this work, we propose a novel covariate-cognizant framework to deal with the presence of such covariates. We describe gait motion by forming a single 2D spatio-temporal template from the video sequence, called the Average Energy Silhouette Image (AESI). Zernike moment invariants (ZMIs) are then computed to screen the parts of the AESI affected by covariates. Following this, features are extracted using Spatial Distribution of Oriented Gradients (SDOGs) and a novel Mean of Directional Pixels (MDPs) method. The obtained features are fused together to form the final well-endowed feature set. Experimental evaluation of the proposed framework on three publicly available datasets, i.e. CASIA dataset B, OU-ISIR Treadmill dataset B, and the USF Human-ID challenge dataset, against recently published gait recognition approaches demonstrates its superior performance.
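The AESI template described above is, at its core, a per-pixel average of aligned binary silhouettes over a gait sequence. A minimal sketch of that averaging step (the covariate screening via Zernike moments and the SDOG/MDP feature extraction are omitted; the function name is illustrative, not from the paper's code):

```python
import numpy as np

def average_energy_silhouette_image(silhouettes):
    """Average a sequence of equal-sized binary silhouette frames
    (values in {0, 1}) into a single 2D spatio-temporal template:
    the per-pixel mean over the sequence. Pixels covered by the body
    in every frame approach 1; pixels swept only briefly stay low."""
    stack = np.stack([np.asarray(s, dtype=float) for s in silhouettes])
    return stack.mean(axis=0)

# Toy example: two 4x4 frames of a blob shifting one pixel right.
frame_a = np.zeros((4, 4)); frame_a[1:3, 1:3] = 1
frame_b = np.zeros((4, 4)); frame_b[1:3, 2:4] = 1
aesi = average_energy_silhouette_image([frame_a, frame_b])
```

The overlap region keeps value 1.0 while the swept edges fall to 0.5, which is exactly the "energy" contrast such templates exploit.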
Micro-doppler-based in-home aided and unaided walking recognition with multiple radar and sonar systems
Published in IET Radar, Sonar and Navigation. Online first 21/06/2016. The potential for using micro-Doppler signatures as a basis for distinguishing between aided and unaided gaits is considered in this study for the purpose of characterising normal elderly gait and assessment of patient recovery. In particular, five different classes of mobility are considered: normal unaided walking, walking with a limp, walking using a cane or tripod, walking with a walker, and using a wheelchair. This presents a challenging classification problem, as the differences in micro-Doppler for these activities can be quite slight. Within this context, the performance of four different radar and sonar systems – a 40 kHz sonar, a 5.8 GHz wireless pulsed Doppler radar mote, a 10 GHz X-band continuous wave (CW) radar, and a 24 GHz CW radar – is evaluated using a broad range of features. Performance improvements using feature selection are addressed, as well as the impact on performance of sensor placement and potential occlusion due to household objects. Results show that nearly 80% correct classification can be achieved with 10 s observations from the 24 GHz CW radar, whereas 86% performance can be achieved with 5 s observations of the sonar.
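Micro-Doppler classifiers of the kind compared above typically operate on spectrogram-derived features. As a hedged illustration (not the paper's actual feature set, which is much broader), one such feature is per-frame spectral bandwidth: a swinging limb spreads energy across Doppler bins, while a static return stays narrow:

```python
import numpy as np

def doppler_bandwidth_feature(signal, frame_len=128, hop=64):
    """Illustrative micro-Doppler feature: magnitude-weighted standard
    deviation of frequency bins per short-time frame, averaged over
    time. Units are FFT bins, not Hz, to keep the sketch minimal."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    widths = []
    for f in frames:
        mag = np.abs(np.fft.rfft(f * np.hanning(frame_len)))
        freqs = np.arange(mag.size)
        centroid = np.sum(freqs * mag) / np.sum(mag)
        widths.append(np.sqrt(np.sum(mag * (freqs - centroid) ** 2)
                              / np.sum(mag)))
    return float(np.mean(widths))

# A pure tone concentrates energy in one bin; noise spreads it out.
t = np.arange(1024) / 1024.0
tone = np.sin(2 * np.pi * 50 * t)
noise = np.random.default_rng(0).standard_normal(1024)
```

In practice such scalar features are stacked into a vector and fed to a conventional classifier, with feature selection pruning the redundant ones, as the study investigates.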
Gait recognition based on shape and motion analysis of silhouette contours
This paper presents a three-phase gait recognition method that analyses the spatio-temporal shape and dynamic motion (STS-DM) characteristics of a human subject's silhouettes to identify the subject in the presence of most of the challenging factors that affect existing gait recognition systems. In phase 1, phase-weighted magnitude spectra of the Fourier descriptor of the silhouette contours at ten phases of a gait period are used to analyse the spatio-temporal changes of the subject's shape. A component-based Fourier descriptor based on anatomical studies of the human body is used to achieve robustness against shape variations caused by all common types of small carried items, whether held in folded hands, at the subject's back, or in an upright position. In phase 2, a full-body shape and motion analysis is performed by fitting ellipses to contour segments of ten phases of a gait period and using histogram matching with the Bhattacharyya distance of the ellipse parameters as dissimilarity scores. In phase 3, dynamic time warping is used to analyse the angular rotation pattern of the subject's leading knee, with consideration of arm-swing over a gait period, to achieve identification that is invariant to walking speed, limited clothing variations, hairstyle changes, and shadows under the feet. The match scores generated in the three phases are fused using weight-based score-level fusion for robust identification in the presence of missing and distorted frames, and occlusion in the scene. Experimental analyses on various publicly available data sets show that STS-DM outperforms several state-of-the-art gait recognition methods.
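The Fourier descriptor underlying phase 1 treats a closed contour as a sequence of complex coordinates and takes its spectrum. A minimal sketch of the basic descriptor, assuming the usual translation/scale normalisation (the paper's phase weighting and anatomical component decomposition are not reproduced here):

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=3):
    """Magnitude spectrum of a closed contour's complex coordinates.
    Subtracting the centroid removes translation; dividing by the
    first harmonic removes scale. Returns the first n_coeffs
    harmonics (the DC term is discarded along with translation)."""
    z = np.asarray([complex(x, y) for x, y in contour])
    z = z - z.mean()                       # translation invariance
    spec = np.abs(np.fft.fft(z))
    if spec[1] > 0:
        spec = spec / spec[1]              # scale invariance
    return spec[1:n_coeffs + 1]

# The same square at two scales yields identical descriptors.
small = [(0, 0), (1, 0), (1, 1), (0, 1)]
big = [(0, 0), (3, 0), (3, 3), (0, 3)]
```

Descriptors from probe and gallery contours can then be compared directly, since position and size have been factored out.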
Recurrent Attention Models for Depth-Based Person Identification
We present an attention-based model that reasons on human body shape and motion dynamics to identify individuals in the absence of RGB information, hence in the dark. Our approach leverages unique 4D spatio-temporal signatures to address the identification problem across days. Formulated as a reinforcement learning task, our model is based on a combination of convolutional and recurrent neural networks with the goal of identifying small, discriminative regions indicative of human identity. We demonstrate that our model produces state-of-the-art results on several published datasets given only depth images. We further study the robustness of our model to viewpoint, appearance, and volumetric changes. Finally, we share insights gleaned from interpretable 2D, 3D, and 4D visualizations of our model's spatio-temporal attention. Published at Computer Vision and Pattern Recognition (CVPR).
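The core operation a recurrent attention model repeats is extracting a small "glimpse" window from the input at a location chosen by its policy. A minimal sketch of only that cropping step, with hypothetical names (the CNN/RNN stack and the reinforcement-learned location policy are omitted):

```python
import numpy as np

def extract_glimpse(depth_frame, center, size):
    """Crop a size x size attention window from a 2D depth frame,
    centered at (row, col). Boundary handling is ignored for brevity;
    a real implementation would pad the frame first."""
    r, c = center
    h = size // 2
    return depth_frame[r - h:r + h, c - h:c + h]

# A 10x10 toy "depth frame" with distinct values per pixel.
frame = np.arange(100).reshape(10, 10).astype(float)
patch = extract_glimpse(frame, (5, 5), 4)
```

At each timestep the recurrent core would consume such a patch, update its hidden state, and emit the next `center` to attend to.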