
    Multiple-shot Human Re-Identification by Mean Riemannian Covariance Grid

    Human re-identification is the task of determining whether a given individual has already appeared over a network of cameras. The problem is made particularly hard by significant appearance changes across different camera views. To re-identify people, a human signature should handle differences in illumination, pose and camera parameters. We propose a new appearance model that combines information from multiple images to obtain a highly discriminative human signature, called the Mean Riemannian Covariance Grid (MRCG). The method is evaluated and compared with the state of the art using benchmark video sequences from the ETHZ and i-LIDS datasets. We demonstrate that the proposed approach outperforms state-of-the-art methods. Finally, the results of our approach are shown on two other, more pertinent datasets.
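The abstract gives no implementation details, but its two core ingredients, a region covariance descriptor and a mean of covariance (SPD) matrices under a Riemannian metric, can be sketched as follows. This is an illustrative approximation that uses the log-Euclidean mean rather than the paper's exact construction; the feature layout and function names are assumptions, not the authors' code.

```python
import numpy as np

def covariance_descriptor(features):
    """Region covariance descriptor: `features` is (N, d), one feature
    vector per pixel (e.g. position, color, gradient magnitudes)."""
    return np.cov(features, rowvar=False)

def _logm_spd(C):
    # Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition.
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def _expm_sym(S):
    # Matrix exponential of a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(covs, eps=1e-6):
    """Mean of SPD matrices under the log-Euclidean metric, a common
    approximation to the affine-invariant Riemannian mean: average the
    matrix logarithms, then map back with the matrix exponential."""
    d = covs[0].shape[0]
    logs = [_logm_spd(C + eps * np.eye(d)) for C in covs]
    return _expm_sym(np.mean(logs, axis=0))
```

Averaging in the log domain keeps the result symmetric positive-definite, which a plain element-wise average of covariance matrices does not guarantee to be a meaningful point on the SPD manifold.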

    Human Re-identification with Global and Local Siamese Convolution Neural Network

    Human re-identification is an important task in surveillance systems: determining whether the same person re-appears in multiple cameras with disjoint views. Appearance-based approaches are mostly used for this task because they are less constrained than biometric-based approaches. Most research works apply hand-crafted feature extractors followed by simple matching methods. However, designing a robust and stable feature requires expert knowledge and takes time to tune. In this paper, we propose a global and local structure of Siamese Convolutional Neural Network which automatically extracts features from input images to perform the human re-identification task. Besides, most current single-shot approaches to human re-identification do not consider occlusion, because tracking information is unavailable. Therefore, we apply a decision-fusion technique that combines global and local features to handle occlusion in single-shot settings.
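As a rough illustration of the two ideas in this abstract, shared-weight ("Siamese") feature extraction and decision fusion of global and local matching scores, here is a minimal numpy sketch. The random linear "branch", the horizontal-stripe layout, and the fusion weight `alpha` are all assumptions for illustration; the paper uses learned convolutional branches, not a fixed projection.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for the shared branch weights; in the paper this is a trained CNN.
W = rng.normal(size=(16, 4 * 4 * 3))

def embed(stripe):
    # Both images pass through the SAME weights -- the defining Siamese property.
    v = W @ stripe.flatten()
    return v / (np.linalg.norm(v) + 1e-12)

def siamese_score(img_a, img_b, alpha=0.5):
    """Distance between two 12x4x3 images: a global term (mean of stripe
    embeddings) fused with local per-stripe terms. Lower = more similar."""
    ea = [embed(s) for s in np.split(img_a, 3, axis=0)]
    eb = [embed(s) for s in np.split(img_b, 3, axis=0)]
    local = [np.linalg.norm(x - y) for x, y in zip(ea, eb)]
    global_d = np.linalg.norm(np.mean(ea, axis=0) - np.mean(eb, axis=0))
    # Decision fusion: weighted combination of global and local evidence.
    return alpha * global_d + (1 - alpha) * float(np.mean(local))
```

The fusion step is what lets local (part-level) distances compensate when one region is occluded, since a single occluded stripe only partially degrades the combined score.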

    Review on Human Re-identification with Multiple Cameras

    Human re-identification is a core task in most surveillance systems, aimed at matching pairs of people across non-overlapping cameras. Several challenging issues must be overcome to achieve re-identification, such as variations in viewpoint, pose, image resolution, illumination and occlusion. In this study, we review existing work on the human re-identification task and discuss the advantages and limitations of recent approaches. Finally, the paper suggests future research directions for human re-identification.

    Real-time Person Re-identification at the Edge: A Mixed Precision Approach

    A critical part of multi-person multi-camera tracking is the person re-identification (re-ID) algorithm, which recognizes and retains the identities of all detected unknown people throughout the video stream. Many re-ID algorithms achieve state-of-the-art results, but little work has explored deploying such algorithms in computation- and power-constrained real-time scenarios. In this paper, we study the effect of using a lightweight model, MobileNet-v2, for re-ID, and investigate the impact of single (FP32) versus half (FP16) precision for training on the server and inference on the edge nodes. We further compare the results with a baseline ResNet-50 model on state-of-the-art benchmarks including CUHK03, Market-1501, and Duke-MTMC. The MobileNet-v2 mixed-precision training method improves inference throughput on the edge node by 3.25× (reaching 27.77 fps) and training time on the server by 1.75×, and decreases power consumption on the edge node by 1.45×, while degrading accuracy by only 5.6% on average across the three datasets with respect to single-precision ResNet-50. The code and pre-trained networks are publicly available at https://github.com/TeCSAR-UNCC/person-reid.
    Comment: This is a pre-print of an article published in the International Conference on Image Analysis and Recognition (ICIAR 2019), Lecture Notes in Computer Science. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-27272-2_
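The FP32-versus-FP16 trade-off this paper measures can be illustrated on a single linear layer: storing weights and activations in half precision halves their memory footprint while introducing a small numerical error. This is a numpy sketch under assumed layer sizes, not the paper's PyTorch training pipeline.

```python
import numpy as np

def linear_fp32(x, W, b):
    # Reference single-precision linear layer.
    return x @ W + b

def linear_fp16(x, W, b):
    # Half-precision inference: cast inputs and weights to float16,
    # compute, then cast the result back to float32 for downstream use.
    y = x.astype(np.float16) @ W.astype(np.float16) + b.astype(np.float16)
    return y.astype(np.float32)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 64)).astype(np.float32)
W = rng.normal(size=(64, 32)).astype(np.float32)
b = rng.normal(size=(32,)).astype(np.float32)

# The memory saving is exact: float16 takes half the bytes of float32.
assert W.astype(np.float16).nbytes == W.nbytes // 2
```

On real accelerators the throughput gain comes from halved memory traffic and native half-precision arithmetic units; the accuracy cost the paper reports (5.6% on average) is the model-level analogue of the small per-layer rounding error seen here.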

    A Novel Visual Word Co-occurrence Model for Person Re-identification

    Person re-identification aims to maintain the identity of an individual across diverse locations through different non-overlapping camera views. The problem is fundamentally challenging due to appearance variations resulting from differing poses, illumination and configurations of camera views. To deal with these difficulties, we propose a novel visual word co-occurrence model. We first map each pixel of an image to a visual word using a codebook, which is learned in an unsupervised manner. The appearance transformation between camera views is encoded by a co-occurrence matrix of visual word joint distributions in probe and gallery images. Our appearance model naturally accounts for spatial similarities and variations caused by pose, illumination and configuration changes across camera views. Linear SVMs are then trained as classifiers on these co-occurrence descriptors. On the VIPeR and CUHK Campus benchmark datasets, our method achieves 83.86% and 85.49% at rank-15 on the Cumulative Match Characteristic (CMC) curves, beating the state-of-the-art results by 10.44% and 22.27%.
    Comment: Accepted at the ECCV Workshop on Visual Surveillance and Re-Identification, 201
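The pipeline this abstract describes, quantizing pixels to visual words with a codebook and then summarizing a probe/gallery pair by a joint word distribution, can be sketched as follows. This simplified version pairs words at corresponding positions; the paper's model aggregates over spatial neighborhoods, and the codebook here is given rather than learned.

```python
import numpy as np

def assign_words(pixel_feats, codebook):
    """Quantize each pixel feature vector in `pixel_feats` (N, d) to the
    index of its nearest codeword in `codebook` (k, d): its visual word."""
    d2 = ((pixel_feats[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def cooccurrence_descriptor(words_probe, words_gallery, k):
    """Normalized k x k joint distribution of visual words observed at
    corresponding positions in a probe/gallery image pair."""
    M = np.zeros((k, k))
    np.add.at(M, (words_probe, words_gallery), 1.0)  # count word pairs
    return M / M.sum()
```

Flattened, each `k x k` matrix becomes a fixed-length descriptor for one image pair, which is what a linear SVM can then classify as "same person" or "different person".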