
    Gait Recognition By Walking and Running: A Model-Based Approach

    Gait is an emerging biometric for which some techniques, mainly holistic, have been developed to recognise people by their walking patterns. However, the possibility of recognising people by the way they run remains largely unexplored. The new analytical model presented in this paper is based on the biomechanics of walking and running, and will serve as the foundation of an automatic person recognition system that is invariant to these distinct gaits. A bilateral and dynamically coupled oscillator is the key concept underlying this work. Analysis shows that this new model can be used to describe walking and running subjects automatically, without parameter selection. Temporal template matching that takes into account the whole sequence of a gait cycle is applied to extract the angles of thigh and lower-leg rotation. The phase-weighted magnitudes of the lower-order Fourier components of these rotations form the gait signature. Classification of walking and running subjects is performed using the k-nearest-neighbour classifier. Recognition rates are similar to those achieved by other techniques on a similarly sized database. Future work will investigate feature-set selection to improve the recognition rate and will determine the inter- and intra-class invariance attributes of both walking and running gaits.
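
    As a rough illustration of the signature-and-classification stage described above, the Python sketch below builds phase-weighted magnitudes of the lower-order Fourier components of rotation-angle signals and classifies them with k-nearest neighbours. The exact phase weighting, the number of components kept and the helper names are assumptions made for illustration, not taken from the paper.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def phase_weighted_signature(angles, n_components=5):
            """Phase-weighted magnitudes of the lower-order Fourier components
            of one rotation-angle signal (thigh or lower leg) over a gait cycle."""
            spectrum = np.fft.rfft(angles - np.mean(angles))
            low = spectrum[1:n_components + 1]            # skip the DC term
            # Illustrative weighting: magnitude scaled by the absolute phase.
            return np.abs(low) * np.abs(np.angle(low))

        def gait_signature(thigh_angles, shank_angles):
            return np.concatenate([phase_weighted_signature(thigh_angles),
                                   phase_weighted_signature(shank_angles)])

        def train_classifier(signatures, subject_labels, k=1):
            # k-nearest-neighbour classification, as in the paper.
            clf = KNeighborsClassifier(n_neighbors=k)
            clf.fit(np.vstack(signatures), subject_labels)
            return clf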

    Covariate conscious approach for Gait recognition based upon Zernike moment invariants

    Gait recognition, i.e. identification of an individual from his or her walking pattern, is an emerging field. While existing gait recognition techniques perform satisfactorily under normal walking conditions, their performance tends to suffer drastically with variations in clothing and carrying conditions. In this work, we propose a novel covariate-cognizant framework to deal with the presence of such covariates. We describe gait motion by forming a single 2D spatio-temporal template from a video sequence, called the Average Energy Silhouette Image (AESI). Zernike moment invariants (ZMIs) are then computed to screen the parts of the AESI affected by covariates. Following this, features are extracted using the Spatial Distribution of Oriented Gradients (SDOG) and novel Mean of Directional Pixels (MDP) methods. The obtained features are fused to form the final feature set. Experimental evaluation of the proposed framework on three publicly available datasets, i.e. CASIA dataset B, OU-ISIR Treadmill dataset B and the USF Human-ID challenge dataset, against recently published gait recognition approaches demonstrates its superior performance.
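
    The following sketch illustrates only the AESI and Zernike-moment steps named above, under stated assumptions: the AESI is taken as the pixel-wise average of aligned binary silhouettes over a gait cycle, and the Zernike moment invariants come from an off-the-shelf implementation (mahotas) rather than the authors' code. Silhouette alignment, covariate screening and the SDOG/MDP features are not shown.

        import numpy as np
        import mahotas

        def average_energy_silhouette(silhouettes):
            """silhouettes: equally sized, aligned binary (0/1) frames of one gait cycle."""
            stack = np.stack(silhouettes).astype(np.float64)
            return stack.mean(axis=0)        # pixel-wise average energy

        def zernike_descriptor(aesi_region, radius=64, degree=8):
            # Zernike moment invariants of an AESI region; in the paper these
            # are used to screen parts affected by clothing/carrying covariates.
            return mahotas.features.zernike_moments(aesi_region, radius, degree=degree)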

    Recognizing complex faces and gaits via novel probabilistic models

    In the field of computer vision, developing automated systems to recognize people under unconstrained scenarios is a partially solved problem. In unconstrained scenarios a number of common variations and complexities, such as occlusion, illumination and cluttered backgrounds, impose vast uncertainty on the recognition process. Among the various biometrics that have been emerging recently, this dissertation focuses on two of them, namely face and gait recognition. Firstly we address the problem of recognizing faces with major occlusions amidst other variations such as pose, scale, expression and illumination, using a novel PRObabilistic Component based Interpretation Model (PROCIM) inspired by key psychophysical principles that are closely related to reasoning under uncertainty. The model employs Bayesian Networks to establish, learn, interpret and exploit intrinsic similarity mappings from the face domain. Then, by incorporating efficient inference strategies, robust decisions are made for successfully recognizing faces under uncertainty. PROCIM reports improved recognition rates over recent approaches. Secondly we address the newly emerging gait recognition problem and show that PROCIM can be easily adapted to the gait domain as well. We scientifically define and formulate sub-gaits and propose a novel modular training scheme to efficiently learn subtle sub-gait characteristics from the gait domain. Our results show that the proposed model is robust to several uncertainties and yields significant recognition performance. Apart from PROCIM, we finally show how simple component-based gait reasoning can be coherently modeled using the recently prominent Markov Logic Networks (MLNs) by intuitively fusing imaging, logic and graphs. We have discovered that the face and gait domains exhibit interesting similarity mappings between object entities and their components. We have proposed intuitive probabilistic methods to model these mappings to perform recognition under various uncertainty elements. Extensive experimental validation justifies the robustness of the proposed methods over state-of-the-art techniques.
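
    The abstract stays at a high level, so the sketch below only illustrates generic component-based probabilistic scoring, combining per-component likelihoods into an identity decision under a conditional-independence assumption; it is not the PROCIM or MLN models themselves, and all names are illustrative.

        import numpy as np

        def recognise(component_log_likelihoods):
            """component_log_likelihoods: dict mapping identity -> list of
            per-component log-likelihoods (e.g. face parts, or sub-gaits).
            Components are assumed conditionally independent given the identity."""
            scores = {person: float(np.sum(lls))
                      for person, lls in component_log_likelihoods.items()}
            return max(scores, key=scores.get)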

    Recognition of Human Periodic Movements From Unstructured Information Using A Motion-based Frequency Domain Approach

    Feature-based motion cues play an important role in biological visual perception. We present a motion-based frequency-domain scheme for human periodic motion recognition. As a baseline study of feature-based recognition we use unstructured feature-point kinematic data obtained directly from a marker-based optical motion capture (MoCap) system, rather than bootstrapping from the low-level image processing of feature detection. Motion power spectral analysis is applied to a set of unidentified trajectories of feature points representing whole-body kinematics. Feature power vectors are extracted from the motion power spectra and mapped to a low-dimensional feature space as motion templates that offer frequency-domain signatures characterising different periodic motions. Recognition of a new instance of periodic motion against pre-stored motion templates is carried out by seeking the best motion power spectral similarity. We test this method on nine examples of human periodic motion using MoCap data. The recognition results demonstrate that feature-based spectral analysis allows classification of periodic motions from low-level, unstructured interpretation without recovering the underlying kinematics. In contrast with common structure-based spatio-temporal approaches, this motion-based frequency-domain method avoids a time-consuming recovery of underlying kinematic structures in visual analysis and largely reduces the parameter domain in the presence of human motion irregularities.
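
    A minimal sketch of the frequency-domain pipeline described above, under assumptions: power spectra of unidentified marker trajectories are pooled into a feature power vector, and a new instance is matched to stored motion templates by spectral similarity. The number of spectral bins kept and the inner-product matcher are illustrative choices, not taken from the paper.

        import numpy as np

        def motion_power_spectrum(trajectory):
            """trajectory: (T, 3) array of one feature point's positions over time."""
            centred = trajectory - trajectory.mean(axis=0)
            power = np.abs(np.fft.rfft(centred, axis=0)) ** 2
            return power.sum(axis=1)                 # combine x, y, z power

        def feature_power_vector(trajectories, n_bins=16):
            spectra = [motion_power_spectrum(t)[:n_bins] for t in trajectories]
            vec = np.sum(spectra, axis=0)            # pool over unidentified points
            return vec / np.linalg.norm(vec)

        def classify(new_vector, templates):
            """templates: dict mapping motion name -> stored feature power vector."""
            return max(templates,
                       key=lambda name: float(np.dot(new_vector, templates[name])))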

    Learning gender from human gaits and faces

    Computer vision based gender classification is an important component in visual surveillance systems. In this paper, we investigate gender classification from human gaits in image sequences, a relatively understudied problem. Since either modality, face or gait, used in isolation has its inherent weaknesses and limitations, we further propose to fuse gait and face for improved gender discrimination. We exploit Canonical Correlation Analysis (CCA), a powerful tool that is well suited for relating two sets of measurements, to fuse the two modalities at the feature level. Experiments on large datasets demonstrate that our multimodal gender recognition system achieves a superior recognition performance of 97.2%.
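
    The sketch below shows one way the CCA-based feature-level fusion described above could be wired up with scikit-learn. Extraction of the gait and face features is assumed to be done elsewhere, and the linear SVM back-end is an illustrative choice rather than necessarily the classifier used in the paper.

        import numpy as np
        from sklearn.cross_decomposition import CCA
        from sklearn.svm import SVC

        def fit_fusion(gait_feats, face_feats, genders, n_components=10):
            cca = CCA(n_components=n_components)
            g_proj, f_proj = cca.fit_transform(gait_feats, face_feats)
            fused = np.hstack([g_proj, f_proj])      # feature-level fusion
            clf = SVC(kernel="linear").fit(fused, genders)
            return cca, clf

        def predict_gender(cca, clf, gait_feats, face_feats):
            g_proj, f_proj = cca.transform(gait_feats, face_feats)
            return clf.predict(np.hstack([g_proj, f_proj]))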

    An effective video processing pipeline for crowd pattern analysis

    With the purpose of automatically detecting crowd patterns, including abrupt and abnormal changes, a novel approach has been proposed for extracting motion “textures” from dynamic Spatio-Temporal Volume (STV) blocks formulated from live video streams. This paper starts by introducing the common approach for STV construction and the corresponding Spatio-Temporal Texture (STT) extraction techniques. Next, the crowd motion information contained within random STT slices is evaluated using information entropy theory to cull the static background and noise occupying most of the STV space. A preprocessing step using Gabor filtering to improve STT sampling efficiency and motion fidelity has been devised and tested. The technique has been applied to benchmark video databases for proof of concept and performance evaluation. Preliminary results show encouraging outcomes and promising potential for real-world crowd monitoring and control applications.
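
    A minimal sketch of the pipeline stages named above: stacking frames into a Spatio-Temporal Volume, cutting a spatio-temporal texture slice, and using its entropy to cull static background and noise. The Gabor pre-filtering and the random slice-sampling strategy are omitted, and the entropy threshold is an illustrative assumption.

        import numpy as np

        def build_stv(frames):
            """frames: equally sized grayscale frames -> (T, H, W) volume."""
            return np.stack(frames).astype(np.float64)

        def temporal_slice(stv, row):
            # An x-t spatio-temporal texture: one image row traced through time.
            return stv[:, row, :]

        def slice_entropy(slice_img, n_bins=32):
            hist, _ = np.histogram(slice_img, bins=n_bins)
            p = hist[hist > 0].astype(np.float64)
            p = p / p.sum()
            return float(-(p * np.log2(p)).sum())

        def keep_slice(slice_img, threshold=3.0):
            # Low-entropy slices mostly contain static background or noise and are culled.
            return slice_entropy(slice_img) >= threshold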