27 research outputs found

    Towards automated visual surveillance using gait for identity recognition and tracking across multiple non-intersecting cameras

    No full text
    Although personal privacy has become a major concern, surveillance technology is becoming ubiquitous in modern society, driven mainly by the increasing number of crimes and the need to provide a secure and safer environment. Recent research has confirmed that people can be recognized by the way they walk, i.e. their gait. The aim of this study is to investigate the use of gait for detecting and identifying people across different cameras. We present a new approach for tracking and identifying people across different non-intersecting, uncalibrated stationary cameras based on gait analysis. A vision-based markerless extraction method is used to derive gait kinematics and anthropometric measurements in order to produce a gait signature. The novelty of our approach is motivated by recent research in biometrics and forensic analysis using gait. Experimental results confirm the robustness of our approach in detecting walking people and its ability to extract gait features across different camera viewpoints, achieving an identity recognition rate of 73.6% over 2270 processed video sequences. Furthermore, the results confirm the potential of the proposed method for identity tracking in real surveillance systems, recognizing walking individuals across different views with an average recognition rate of 92.5% for cross-camera matching between two non-overlapping views.
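    For illustration, the following minimal sketch (Python/NumPy) shows how a cross-camera gait signature match could be performed once kinematic and anthropometric features have been extracted into fixed-length vectors; the feature layout and the nearest-neighbour rule are assumptions for illustration, not the authors' exact method.

        # Minimal sketch: cross-camera identity matching on precomputed gait signatures.
        # Each signature is assumed to be a fixed-length vector of kinematic and
        # anthropometric features (illustrative only, not the paper's exact feature set).
        import numpy as np

        def match_across_cameras(gallery, probe):
            """gallery: dict subject_id -> signature observed by camera A;
            probe: signature observed by camera B.
            Returns the gallery identity with the smallest Euclidean distance."""
            ids = list(gallery.keys())
            dists = [np.linalg.norm(gallery[i] - probe) for i in ids]
            return ids[int(np.argmin(dists))]

        # Example with random placeholder signatures (e.g. stride length, cadence,
        # joint-angle statistics and a height estimate concatenated into one vector).
        rng = np.random.default_rng(0)
        gallery = {f"subject_{k}": rng.normal(size=12) for k in range(5)}
        probe = gallery["subject_2"] + rng.normal(scale=0.05, size=12)  # noisy re-observation
        print(match_across_cameras(gallery, probe))  # expected: subject_2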

    Gait recognition using normalized shadows

    Get PDF
    Surveillance of public spaces is often conducted with the help of cameras placed at elevated positions. Recently, drones with high-resolution cameras have made it possible to perform overhead surveillance of critical spaces. However, images obtained in these conditions may not contain enough body features to allow conventional biometric recognition. This paper introduces a novel gait recognition system which uses the shadows cast by users, when available. It includes two main contributions: (i) a method for shadow segmentation, which analyzes the orientation of the silhouette contour to identify the feet positions over time, in order to separate the body and shadow silhouettes connected at those positions; (ii) a method that normalizes the segmented shadow silhouettes by applying a transformation derived from optimizing the low-rank textures of a gait texture image, to compensate for changes in view and shadow orientation. The normalized shadow silhouettes can then undergo a gait recognition algorithm, which in this paper relies on the computation of a gait energy image combined with linear discriminant analysis for user recognition. The proposed system outperforms the available state-of-the-art, being robust to changes in acquisition viewpoint.
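    As a rough illustration of the recognition stage described above, the sketch below averages aligned binary silhouettes into a gait energy image and feeds flattened GEIs to linear discriminant analysis via scikit-learn; the shadow segmentation and low-rank view normalization are assumed to have already produced the silhouettes, and the training data shown are random placeholders.

        # Minimal sketch of the GEI + LDA recognition stage (placeholder data).
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def gait_energy_image(silhouettes):
            """silhouettes: array of shape (T, H, W) with binary values in {0, 1};
            the GEI is the pixel-wise average over one gait cycle."""
            return np.mean(silhouettes, axis=0)

        # Illustrative training set: one flattened GEI per sequence with subject labels.
        rng = np.random.default_rng(0)
        geis = rng.random((20, 128))         # 20 sequences, flattened GEIs (placeholder)
        labels = np.repeat(np.arange(5), 4)  # 5 subjects, 4 sequences each
        lda = LinearDiscriminantAnalysis().fit(geis, labels)
        print(lda.predict(geis[:2]))         # class predictions in the LDA subspace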

    View-invariant gait recognition exploiting spatio-temporal information and a dissimilarity metric

    Get PDF
    In gait recognition, when subjects do not follow a known walking trajectory, comparison against a database may be rendered impossible. Some proposed solutions rely on learning and mapping the appearance of silhouettes along various views, with limitations caused, for instance, by appearance changes (e.g. coats or bags). This paper discusses this problem and proposes a novel solution for automatic viewing angle identification, using minimal information computed from the walking person's silhouettes while remaining robust against appearance changes. The proposed method is more efficient and provides improved results when compared to the available alternatives. Moreover, unlike most state-of-the-art methods, it does not require a training stage. The paper also discusses the use of a dissimilarity metric for the recognition stage; dissimilarity metrics have shown interesting results in several recognition systems, and this paper attests to the strength of a dissimilarity-based approach for gait recognition.
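    The following is a minimal sketch of a dissimilarity-based recognition stage in the spirit of the approach discussed above; the feature extraction, the specific metric and the gallery data are illustrative assumptions rather than the paper's exact pipeline.

        # Minimal sketch: each probe is represented by its vector of dissimilarities
        # to a set of gallery prototypes and assigned the label of the closest one.
        import numpy as np

        def dissimilarity_vector(sample, prototypes):
            """L1 dissimilarities between one feature vector and each prototype."""
            return np.array([np.abs(sample - p).sum() for p in prototypes])

        def classify(probe, prototypes, labels):
            d = dissimilarity_vector(probe, prototypes)
            return labels[int(np.argmin(d))]

        rng = np.random.default_rng(1)
        prototypes = rng.normal(size=(6, 32))    # illustrative gallery feature vectors
        labels = ["A", "A", "B", "B", "C", "C"]
        probe = prototypes[3] + rng.normal(scale=0.1, size=32)
        print(classify(probe, prototypes, labels))  # expected: B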

    View-invariant gait recognition system using a gait energy image decomposition method

    Get PDF
    Gait recognition systems can capture biometric information from a distance and without the user's active cooperation, making them suitable for surveillance environments. However, two challenges for gait recognition still need to be solved, namely when: (i) the walking direction is unknown and/or (ii) the subject's appearance changes significantly due to different clothes being worn or items being carried. This study discusses the problem of gait recognition in unconstrained environments and proposes a new system to tackle recognition when facing these two challenges. The system automatically identifies the walking direction using a perceptual hash (PHash) computed over the leg region of the gait energy image (GEI), which is compared against the PHash values of different walking directions stored in the database. Robustness against appearance changes is obtained by decomposing the GEI into sections and selecting those sections unaltered by appearance changes for comparison against a database containing GEI sections for the identified walking direction. The proposed method then recognises the user using majority voting. The proposed view-invariant gait recognition system is computationally inexpensive and outperforms the state-of-the-art in terms of recognition performance.
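    The sketch below illustrates two of the ingredients mentioned above with simplified stand-ins: an average-hash style perceptual hash over the leg region of a GEI for walking-direction identification, and a majority vote over per-section decisions; the actual PHash variant and GEI sectioning used in the paper are not reproduced.

        # Simplified stand-ins for PHash-based direction identification and the
        # majority vote over GEI sections (not the paper's exact implementation).
        import numpy as np
        from collections import Counter

        def leg_region_phash(gei, hash_size=8):
            """Average-hash of the lower half (leg region) of a GEI."""
            legs = gei[gei.shape[0] // 2:, :]
            h, w = legs.shape
            # crude block-averaging down to hash_size x hash_size, then thresholding
            small = legs[:h - h % hash_size, :w - w % hash_size]
            small = small.reshape(hash_size, h // hash_size, hash_size, w // hash_size).mean(axis=(1, 3))
            return (small > small.mean()).astype(np.uint8).ravel()

        def identify_direction(probe_hash, direction_hashes):
            """Pick the stored walking direction with the smallest Hamming distance."""
            return min(direction_hashes, key=lambda d: int((probe_hash != direction_hashes[d]).sum()))

        def majority_vote(section_decisions):
            """section_decisions: list of subject ids, one per unaltered GEI section."""
            return Counter(section_decisions).most_common(1)[0][0]

        rng = np.random.default_rng(2)
        gei = rng.random((64, 44))
        stored = {"0deg": leg_region_phash(rng.random((64, 44))), "90deg": leg_region_phash(gei)}
        print(identify_direction(leg_region_phash(gei), stored))  # expected: 90deg
        print(majority_vote(["id3", "id3", "id7", "id3"]))        # expected: id3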

    Gait recognition based on shape and motion analysis of silhouette contours

    Get PDF
    This paper presents a three-phase gait recognition method that analyses the spatio-temporal shape and dynamic motion (STS-DM) characteristics of a human subject's silhouettes to identify the subject in the presence of most of the challenging factors that affect existing gait recognition systems. In phase 1, phase-weighted magnitude spectra of the Fourier descriptor of the silhouette contours at ten phases of a gait period are used to analyse the spatio-temporal changes of the subject's shape. A component-based Fourier descriptor based on anatomical studies of the human body is used to achieve robustness against shape variations caused by all common types of small carrying conditions, with items held in folded hands, at the subject's back, or in an upright position. In phase 2, a full-body shape and motion analysis is performed by fitting ellipses to contour segments at ten phases of a gait period and using histogram matching with the Bhattacharyya distance between the ellipse parameters as dissimilarity scores. In phase 3, dynamic time warping is used to analyse the angular rotation pattern of the subject's leading knee, together with arm swing, over a gait period to achieve identification that is invariant to walking speed, limited clothing variations, hairstyle changes and shadows under the feet. The match scores generated in the three phases are fused using weight-based score-level fusion for robust identification in the presence of missing and distorted frames, and occlusion in the scene. Experimental analyses on various publicly available data sets show that STS-DM outperforms several state-of-the-art gait recognition methods.
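    A minimal sketch of histogram matching with the Bhattacharyya distance, as used for the dissimilarity scores in phase 2 above; the ellipse fitting that produces the parameter histograms is assumed, and the example histograms are synthetic.

        # Bhattacharyya distance between two normalized histograms (illustrative data).
        import numpy as np

        def bhattacharyya_distance(h1, h2, eps=1e-12):
            p = h1 / (h1.sum() + eps)
            q = h2 / (h2.sum() + eps)
            bc = np.sum(np.sqrt(p * q))   # Bhattacharyya coefficient
            return -np.log(bc + eps)

        # Synthetic histograms of an ellipse parameter (e.g. orientation) over a gait period.
        a = np.histogram(np.random.default_rng(3).normal(0.0, 1.0, 200), bins=16, range=(-4, 4))[0].astype(float)
        b = np.histogram(np.random.default_rng(4).normal(0.2, 1.0, 200), bins=16, range=(-4, 4))[0].astype(float)
        print(bhattacharyya_distance(a, b))  # smaller values indicate more similar distributions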

    JCS-Net: joint classification and super-resolution network for small-scale pedestrian detection in surveillance images

    Get PDF
    While Convolutional Neural Network (CNN)-based pedestrian detection methods have proven successful in various applications, detecting small-scale pedestrians in surveillance images is still challenging. The major reason is that small-scale pedestrians lack much of the detailed information available for large-scale pedestrians. To solve this problem, we propose to exploit the relationship between large-scale pedestrians and the corresponding small-scale pedestrians to help recover the detailed information of the small-scale pedestrians, thus improving the performance of small-scale pedestrian detection. Specifically, a unified network (called JCS-Net) is proposed for small-scale pedestrian detection, which integrates the classification task and the super-resolution task in a unified framework. As a result, super-resolution and classification are fully engaged, and the super-resolution sub-network can recover useful detailed information for the subsequent classification. Based on HOG+LUV and JCS-Net, multi-layer channel features (MCF) are constructed to train the detector. Experimental results on the Caltech pedestrian dataset and the KITTI benchmark demonstrate the effectiveness of the proposed method. To further enhance detection, multi-scale MCF based on JCS-Net is also proposed, which achieves state-of-the-art performance.
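    The sketch below illustrates the kind of joint training objective implied above, combining a super-resolution reconstruction loss with a pedestrian classification loss on the recovered patch; the toy network and the loss weighting are assumptions and do not reproduce the JCS-Net architecture, the HOG+LUV channels, or the MCF detector.

        # Toy joint classification + super-resolution objective (PyTorch), illustrative only.
        import torch
        import torch.nn as nn

        class ToyJointNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.sr = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(16, 3, 3, padding=1))   # SR sub-network
                self.cls = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                         nn.Linear(3, 2))                 # classifier head

            def forward(self, lr_up):
                sr = self.sr(lr_up)      # reconstructed "large-scale" patch
                logits = self.cls(sr)    # classification on the recovered detail
                return sr, logits

        def joint_loss(sr, hr, logits, labels, lam=1.0):
            """Super-resolution MSE term plus weighted classification cross-entropy."""
            return nn.functional.mse_loss(sr, hr) + lam * nn.functional.cross_entropy(logits, labels)

        lr_up = torch.rand(4, 3, 32, 32)     # upscaled small-scale patches (placeholder)
        hr = torch.rand(4, 3, 32, 32)        # corresponding large-scale patches (placeholder)
        labels = torch.tensor([1, 0, 1, 1])  # pedestrian vs. background
        sr, logits = ToyJointNet()(lr_up)
        print(joint_loss(sr, hr, logits, labels))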

    Lidar-based Gait Analysis and Activity Recognition in a 4D Surveillance System

    Get PDF
    This paper presents new approaches for gait and activity analysis based on the data streams of a Rotating Multi-Beam (RMB) Lidar sensor. The proposed algorithms are embedded into an integrated 4D vision and visualization system, which is able to analyze and interactively display real scenarios in natural outdoor environments with walking pedestrians. The main focus of the investigations is gait-based person re-identification during tracking, and the recognition of specific activity patterns such as bending, waving, making phone calls and checking the time on a wristwatch. The descriptors for training and recognition are extracted from realistic outdoor surveillance scenarios in which multiple pedestrians walk in the field of interest, possibly following intersecting trajectories, so the observations are often affected by occlusions or background noise. Since no public database is available for such scenarios, we created and published a new Lidar-based outdoor gait and activity dataset on our website, which contains point cloud sequences of 28 different persons extracted and aggregated from 35-minute-long measurements. The presented results confirm that both efficient gait-based identification and activity recognition are achievable in the sparse point clouds of a single RMB Lidar sensor. After extracting the people's trajectories, we synthesized a free-viewpoint video in which moving avatar models follow the trajectories of the observed pedestrians in real time, ensuring that the leg movements of the animated avatars are synchronized with the real gait cycles observed in the Lidar stream.