
    Review of Person Re-identification Techniques

    Person re-identification across surveillance cameras with disjoint fields of view has become one of the most interesting and challenging subjects in intelligent video surveillance. Although several methods have been developed and proposed, certain limitations and unresolved issues remain. In all existing re-identification approaches, feature vectors are extracted from segmented still images or video frames, and different similarity or dissimilarity measures are applied to these vectors. Some methods use simple fixed metrics, whereas others learn optimised metrics; some build models from local colour or texture information, and others model the gait of people. In general, the main objective of all these approaches is to achieve higher accuracy at lower computational cost. This study summarises several developments in the recent literature and discusses the various methods available for person re-identification, comparing their respective advantages and disadvantages.

    Learning invariant representations and applications to face verification

    One approach to computer object recognition and to modeling the brain's ventral stream involves unsupervised learning of representations that are invariant to common transformations. However, applications of these ideas have usually been limited to 2D affine transformations, e.g., translation and scaling, since they are easiest to solve via convolution. In accord with a recent theory of transformation-invariance, we propose a model that, while capturing other common convolutional networks as special cases, can also be used with arbitrary identity-preserving transformations. The model's wiring can be learned from videos of transforming objects, or from any other grouping of images into sets by their depicted object. Through a series of successively more complex empirical tests, we study the invariance/discriminability properties of this model with respect to different transformations. First, we empirically confirm theoretical predictions for the case of 2D affine transformations. Next, we apply the model to non-affine transformations: as expected, it performs well on face verification tasks requiring invariance to the relatively smooth transformations of 3D rotation-in-depth and changes in illumination direction. Surprisingly, it can also tolerate "clutter transformations", which map an image of a face on one background to an image of the same face on a different background. Motivated by these empirical findings, we tested the same model on face verification benchmark tasks from the computer vision literature: Labeled Faces in the Wild, PubFig, and a new dataset we gathered, achieving strong performance in these highly unconstrained cases as well.
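The invariance-by-pooling idea behind this model can be illustrated with a toy sketch (our own construction, not the paper's code): store the orbit of each template under the transformation group, take the dot product of the input with every transformed template, and pool the results into a histogram. Here the signal is 1-D and the group is cyclic shifts, so the pooled signature is exactly shift-invariant; names like `invariant_signature` are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform_set(template, n_shifts):
    """All cyclic shifts of a 1-D template -- a stand-in for the
    orbit of a template under a transformation group."""
    return np.stack([np.roll(template, s) for s in range(n_shifts)])

def invariant_signature(image, templatebooks, n_bins=8):
    """For each stored orbit, dot the input with every transformed
    template, then pool the dot products into a histogram.  A group
    transformation of the input only permutes the dot products, so
    the histogram is unchanged."""
    sig = []
    for orbit in templatebooks:
        dots = orbit @ image                  # one value per transformed template
        hist, _ = np.histogram(dots, bins=n_bins, range=(-5.0, 5.0))
        sig.append(hist / len(dots))          # normalized pooling
    return np.concatenate(sig)

# Two template orbits (all cyclic shifts, i.e. 1-D "translations").
d = 32
books = [transform_set(rng.standard_normal(d), d) for _ in range(2)]

x = rng.standard_normal(d)
sig_x = invariant_signature(x, books)
sig_shifted = invariant_signature(np.roll(x, 7), books)
# The two signatures coincide exactly for a shifted copy of the input.
```

The same pooling recipe works for any identity-preserving transformation, provided the templatebook contains (samples of) the full orbit.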

    Geometric and Photometric Data Fusion in Non-Rigid Shape Analysis

    In this paper, we explore the use of the diffusion geometry framework for the fusion of geometric and photometric information in local and global shape descriptors. Our construction is based on the definition of a diffusion process on the shape manifold embedded into a high-dimensional space where the embedding coordinates represent the photometric information. Experimental results show that such data fusion is useful in coping with different challenges of shape analysis where pure geometric and pure photometric methods fail.
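A minimal sketch of the construction, assuming the shape is sampled as a point cloud with per-point colour; the Gaussian-affinity Laplacian and the function names are our illustrative choices, not the paper's exact pipeline. Each point is embedded in R^6 (geometry plus weighted colour), and a heat-kernel signature is computed on the resulting graph:

```python
import numpy as np

def fused_embedding(xyz, rgb, weight=0.5):
    """Embed each surface point in R^6: geometric coordinates plus
    photometric (colour) coordinates scaled by a fusion weight."""
    return np.hstack([xyz, weight * rgb])

def heat_kernel_signature(points, times, sigma=1.0):
    """Diffusion descriptor on the embedded point set: build a Gaussian
    affinity graph, take its (unnormalized) Laplacian, and evaluate the
    diagonal of the heat kernel exp(-tL) at several diffusion times."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W
    lam, phi = np.linalg.eigh(L)
    # HKS_t(i) = sum_k exp(-t * lam_k) * phi_k(i)^2
    return np.stack([(np.exp(-t * lam) * phi ** 2).sum(1) for t in times], axis=1)

rng = np.random.default_rng(1)
xyz = rng.standard_normal((50, 3))   # toy geometry
rgb = rng.random((50, 3))            # toy photometric data
emb = fused_embedding(xyz, rgb)
hks = heat_kernel_signature(emb, times=[0.1, 1.0, 10.0])  # (50, 3) descriptor
```

Because diffusion distances are computed in the joint embedding, two points must be close both geometrically and photometrically to be close in the descriptor.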

    Shape localization, quantification and correspondence using Region Matching Algorithm

    We propose a method for local, region-based matching of planar shapes, especially shapes that change over time. This problem is fundamental to medical imaging, specifically the comparison of mammograms over time. The method is based on the non-emergence and non-enhancement of maxima, as well as the causality principle of integral invariant scale space. The core idea of our Region Matching Algorithm (RMA) is to divide a shape into a number of “salient” regions and then to compare all such regions for local similarity in order to quantitatively identify new growths or partial/complete occlusions. The algorithm has several advantages over commonly used methods for shape comparison of segmented regions. First, it provides improved key-point alignment for optimal shape correspondence. Second, it identifies localized changes such as new growths as well as complete/partial occlusion in corresponding regions by dividing the segmented region into sub-regions based upon the extrema that persist over a sufficient range of scales. Third, the algorithm does not depend upon the spatial locations of mammographic features and eliminates the need for registration to identify salient changes over time. Finally, the algorithm is fast to compute and requires no human intervention. We apply the method to temporal pairs of mammograms in order to detect potentially important differences between them.
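The circle-area integral invariant underlying this scale space can be sketched on a rasterised shape (a toy construction of ours, not the authors' implementation): for each boundary pixel, measure the fraction of a disk of radius r that falls inside the shape. The value is about 0.5 on a straight edge, dips below it at convex corners, and rises above it in concavities; extrema that persist as r grows mark the "salient" region boundaries.

```python
import numpy as np

def integral_invariant(mask, boundary, radius):
    """Circle-area integral invariant: for each boundary pixel of a
    binary shape mask, the fraction of a disk of the given radius
    that lies inside the shape."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    dy, dx = np.nonzero(ys ** 2 + xs ** 2 <= radius ** 2)
    dy, dx = dy - radius, dx - radius          # disk offsets around a pixel
    return np.array([mask[r + dy, c + dx].mean() for (r, c) in boundary])

# Toy shape: an axis-aligned square inside a 40x40 binary image.
mask = np.zeros((40, 40))
mask[10:30, 10:30] = 1.0

boundary = [(10, 10), (10, 20)]   # a convex corner and a mid-edge point
for radius in (2, 3, 5):          # extrema persisting over scales are "salient"
    ii = integral_invariant(mask, boundary, radius)
    assert ii[0] < ii[1]          # the corner dips below the straight-edge value
```

In the RMA setting, the persistent extrema of this quantity along the contour would then be used to cut the shape into the sub-regions that are compared across the temporal pair.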

    Mobile Robot Localization using Panoramic Vision and Combinations of Feature Region Detectors

    IEEE International Conference on Robotics and Automation (ICRA 2008, Pasadena, California, May 19-23, 2008), pp. 538-543.
    This paper presents a vision-based approach for mobile robot localization. The environmental model is topological. The new approach uses a constellation of different types of affine covariant regions to characterize a place. This type of representation permits reliable and distinctive environment modeling. The performance of the proposed approach is evaluated using a database of panoramic images from different rooms. Additionally, we compare different combinations of complementary feature region detectors to find the one that achieves the best results. Our experiments show promising results for this new localization method. Additionally, similarly to what happens with single detectors, different combinations exhibit different strengths and weaknesses depending on the situation, suggesting that a context-aware method to combine the different detectors would improve the localization results. This work was partially supported by USC Women in Science and Engineering (WiSE), the FI grant from the Generalitat de Catalunya, the European Social Fund, the MID-CBR project grant TIN2006-15140-C03-01 and FEDER funds, and the grant 2005-SGR-00093.
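The constellation-matching step can be caricatured as nearest-neighbour voting over stored region descriptors (a hypothetical sketch with random data; the detectors, descriptor dimension, and ratio-test threshold are our assumptions, not the authors' system):

```python
import numpy as np

rng = np.random.default_rng(2)

def match_votes(query_desc, room_desc, thresh=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test: a query
    descriptor votes for a room only when its best match there is
    clearly better than its second-best match."""
    votes = 0
    for q in query_desc:
        d = np.linalg.norm(room_desc - q, axis=1)
        best, second = np.partition(d, 1)[:2]
        if best < thresh * second:
            votes += 1
    return votes

def localize(query_desc, rooms):
    """Topological localization: return the room whose stored
    constellation of region descriptors collects the most votes."""
    scores = {name: match_votes(query_desc, desc) for name, desc in rooms.items()}
    return max(scores, key=scores.get)

# Toy database: two rooms, each a cloud of 64-D region descriptors
# (in practice these would come from several affine covariant detectors).
rooms = {name: rng.standard_normal((30, 64)) for name in ("lab", "corridor")}
# A query taken in the lab: noisy copies of some lab region descriptors.
query = rooms["lab"][:10] + 0.05 * rng.standard_normal((10, 64))
room = localize(query, rooms)
```

Combining several detectors simply means pooling their descriptors into each room's set before voting, which is where the complementary strengths the abstract mentions come in.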