
    Semantic Visual Localization

    Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.
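
    The abstract does not give the architecture, so the following is only a minimal sketch of the idea it describes: a 3D encoder that turns a partial voxel observation into a descriptor, with an auxiliary decoder head supervised on semantic scene completion. The voxel resolution (32^3), layer sizes, class count, and the use of PyTorch are all assumptions made for illustration.

        # Illustrative sketch only, not the authors' model: a 3D encoder produces a
        # descriptor from a partial voxel grid, while an auxiliary decoder head is
        # trained on semantic scene completion so the descriptor encodes geometry
        # and semantics. All sizes below are assumptions.
        import torch
        import torch.nn as nn

        class SemanticDescriptorNet(nn.Module):
            def __init__(self, num_classes=12, descriptor_dim=128):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
                    nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
                    nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 4
                )
                self.to_descriptor = nn.Linear(64 * 4 * 4 * 4, descriptor_dim)
                # Auxiliary head: predict a completed, semantically labelled volume.
                self.decoder = nn.Sequential(
                    nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose3d(16, num_classes, 4, stride=2, padding=1),
                )

            def forward(self, partial_volume):              # (B, 1, 32, 32, 32)
                features = self.encoder(partial_volume)     # (B, 64, 4, 4, 4)
                descriptor = self.to_descriptor(features.flatten(1))
                completion_logits = self.decoder(features)  # (B, C, 32, 32, 32)
                return descriptor, completion_logits

        # Training would combine a descriptor (metric-learning) loss with a voxel-wise
        # cross-entropy on completion_logits; only the descriptor is used at
        # localization time.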

    Face recognition technologies for evidential evaluation of video traces

    Human recognition from video traces is an important task in forensic investigations and evidence evaluations. Compared with other biometric traits, face is one of the most popularly used modalities for human recognition because its collection is non-intrusive and requires little cooperation from the subjects. Moreover, face images taken at a long distance can still provide reasonable resolution, while most biometric modalities, such as iris and fingerprint, do not have this merit. In this chapter, we discuss automatic face recognition technologies for evidential evaluations of video traces. We first introduce the general concepts in both forensic and automatic face recognition, then analyse the difficulties in face recognition from videos. We summarise and categorise the approaches for handling different uncontrollable factors in difficult recognition conditions. Finally, we discuss some challenges and trends in face recognition research in both forensics and biometrics. Given its merits tested in many deployed systems and great potential in other emerging applications, considerable research and development efforts are expected to be devoted to face recognition in the near future.

    HPatches: A benchmark and evaluation of handcrafted and learned local descriptors

    In this paper, we propose a novel benchmark for evaluating local image descriptors. We demonstrate that the existing datasets and evaluation protocols do not specify unambiguously all aspects of evaluation, leading to ambiguities and inconsistencies in results reported in the literature. Furthermore, these datasets are nearly saturated due to the recent improvements in local descriptors obtained by learning them from large annotated datasets. Therefore, we introduce a new large dataset suitable for training and testing modern descriptors, together with strictly defined evaluation protocols in several tasks such as matching, retrieval and classification. This allows for more realistic, and thus more reliable, comparisons in different application scenarios. We evaluate the performance of several state-of-the-art descriptors and analyse their properties. We show that a simple normalisation of traditional hand-crafted descriptors can boost their performance to the level of deep learning based descriptors within a realistic benchmark evaluation.
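
    The "simple normalisation" the abstract refers to is not spelled out here; as an illustration of this kind of post-processing, the sketch below applies the well-known RootSIFT transform (L1-normalise, then element-wise square root) to a batch of hand-crafted descriptors. NumPy and the 128-D descriptor size are assumptions.

        # Illustrative post-processing of hand-crafted descriptors; the paper's exact
        # normalisation may differ. RootSIFT is shown as a well-known example.
        import numpy as np

        def rootsift(descriptors, eps=1e-12):
            """L1-normalise each descriptor, then take the element-wise square root;
            Euclidean distance on the result approximates the Hellinger kernel on
            the original histograms."""
            d = np.asarray(descriptors, dtype=np.float64)
            d = d / (np.abs(d).sum(axis=1, keepdims=True) + eps)   # L1 normalisation
            return np.sqrt(d)

        # Example: 500 random 128-D "SIFT-like" descriptors
        desc = np.random.rand(500, 128)
        normalised = rootsift(desc)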

    Training a Convolutional Neural Network for Appearance-Invariant Place Recognition

    Place recognition is one of the most challenging problems in computer vision, and has become a key part of mobile robotics and autonomous driving applications for performing loop closure in visual SLAM systems. Moreover, the difficulty of recognizing a revisited location increases with appearance changes caused, for instance, by weather or illumination variations, which hinders the long-term application of such algorithms in real environments. In this paper, we present a convolutional neural network (CNN), trained for the first time with the purpose of recognizing revisited locations under severe appearance changes, which maps images to a low-dimensional space where Euclidean distances represent place dissimilarity. In order for the network to learn the desired invariances, we train it with triplets of images selected from datasets which present a challenging variability in visual appearance. The triplets are selected in such a way that two samples are from the same location and the third one is taken from a different place. We validate our system through extensive experimentation, where we demonstrate better performance than state-of-the-art algorithms on a number of popular datasets.
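
    As a rough sketch of the triplet training the abstract describes (not the paper's actual network or hyper-parameters), the snippet below embeds images with a placeholder ResNet-18 backbone, L2-normalises the embeddings, and applies a Euclidean triplet margin loss to an (anchor, positive, negative) batch; the backbone choice, 128-D embedding, and margin of 0.5 are assumptions.

        # Hedged sketch of triplet-based place-recognition training; architecture
        # and hyper-parameters are placeholders, not the paper's.
        import torch
        import torch.nn as nn
        import torchvision.models as models

        class PlaceEmbeddingNet(nn.Module):
            def __init__(self, embedding_dim=128):
                super().__init__()
                backbone = models.resnet18(weights=None)   # placeholder backbone
                backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
                self.net = backbone

            def forward(self, x):
                # L2-normalised embedding so Euclidean distance reflects dissimilarity
                return nn.functional.normalize(self.net(x), dim=1)

        model = PlaceEmbeddingNet()
        triplet_loss = nn.TripletMarginLoss(margin=0.5)     # Euclidean by default

        # anchor/positive: same location under different conditions; negative: other place
        anchor   = torch.randn(8, 3, 224, 224)
        positive = torch.randn(8, 3, 224, 224)
        negative = torch.randn(8, 3, 224, 224)
        loss = triplet_loss(model(anchor), model(positive), model(negative))
        loss.backward()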

    Deep Shape Matching

    We cast shape matching as metric learning with convolutional networks. We break the end-to-end process of image representation into two parts. Firstly, well-established efficient methods are chosen to turn the images into edge maps. Secondly, the network is trained with edge maps of landmark images, which are automatically obtained by a structure-from-motion pipeline. The learned representation is evaluated on a range of different tasks, providing improvements on challenging cases of domain generalization, generic sketch-based image retrieval or its fine-grained counterpart. In contrast to other methods that learn a different model per task, object category, or domain, we use the same network throughout all our experiments, achieving state-of-the-art results in multiple benchmarks. Comment: ECCV 201
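
    A minimal sketch of the two-stage idea, with OpenCV's Canny detector standing in for whatever "well established efficient methods" the authors actually use (the abstract does not name them); the thresholds and image size are assumptions.

        # Illustrative only: reduce an image to an edge map before embedding it.
        import cv2
        import numpy as np

        def to_edge_map(image_bgr):
            """Convert an image to a binary edge map in [0, 1]; Canny thresholds
            below are assumptions, not the paper's settings."""
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 100, 200)
            return edges.astype(np.float32) / 255.0

        # Both photos and sketches are reduced to edge maps, so one shared
        # descriptor network can embed them into a common space.
        image = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
        edge_map = to_edge_map(image)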

    Eye Detection and Face Recognition Across the Electromagnetic Spectrum

    Biometrics, or the science of identifying individuals based on their physiological or behavioral traits, has increasingly been used to replace typical identifying markers such as passwords, PINs, passports, etc. Different modalities, such as face, fingerprint, iris, and gait, can be used for this purpose. One of the most studied forms of biometrics is face recognition (FR). Due to a number of advantages over typical visible-to-visible FR, recent trends have been pushing the FR community to perform cross-spectral matching of visible images to face images from higher spectra in the electromagnetic spectrum. In this work, the SWIR band of the EM spectrum is the primary focus. Four main contributions relating to automatic eye detection and cross-spectral FR are discussed. First, a novel eye localization algorithm for the purpose of geometrically normalizing a face across multiple SWIR bands for FR algorithms is introduced. Using a template-based scheme and a novel summation range filter, an extensive experimental analysis shows that this algorithm is fast, robust, and highly accurate when compared to other available eye detection methods. Also, the eye locations produced by this algorithm provide higher FR results than all other tested approaches. This algorithm is then augmented and updated to quickly and accurately detect eyes in more challenging unconstrained datasets spanning the EM spectrum. Additionally, a novel cross-spectral matching algorithm is introduced that attempts to bridge the gap between the visible and SWIR spectra. By fusing multiple photometric normalization combinations, the proposed algorithm is not only more efficient than other visible-SWIR matching algorithms, but also more accurate on multiple challenging datasets. Finally, a novel pre-processing algorithm is discussed that bridges the gap between document (passport) and live face images. It is shown that the proposed pre-processing scheme, using inpainting and denoising techniques, significantly increases cross-document face recognition performance.
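
    The dissertation's exact pre-processing pipeline is not given in the abstract; the sketch below shows the general denoise-then-inpaint idea for document (passport) face photos using OpenCV routines, where the specific functions, parameters, and the damage mask are assumptions for illustration.

        # Hedged sketch of denoise-then-inpaint pre-processing for document photos;
        # the choice of OpenCV routines and their parameters are assumptions.
        import cv2
        import numpy as np

        def preprocess_document_face(gray_face, damage_mask):
            """gray_face: uint8 grayscale face crop; damage_mask: uint8 mask whose
            non-zero pixels mark security-print artefacts or damage to remove."""
            denoised = cv2.fastNlMeansDenoising(gray_face, h=10)                 # denoising
            restored = cv2.inpaint(denoised, damage_mask, 3, cv2.INPAINT_TELEA)  # inpainting
            return restored

        face = (np.random.rand(128, 128) * 255).astype(np.uint8)
        mask = np.zeros_like(face)
        mask[40:50, 30:90] = 255        # assumed damaged strip, for illustration only
        clean = preprocess_document_face(face, mask)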