
    Image features and seasons revisited

    We present an evaluation of standard image features in the context of long-term visual teach-and-repeat mobile robot navigation, where the environment exhibits significant changes in appearance caused by seasonal weather variations and daily illumination changes. We argue that in the given long-term scenario, the viewpoint, scale and rotation invariance of the standard feature extractors is less important than their robustness to the mid- and long-term environment appearance changes. Therefore, we focus our evaluation on the robustness of image registration to variable lighting and naturally-occurring seasonal changes. We evaluate the image feature extractors on three datasets collected by mobile robots in two different outdoor environments over the course of one year. Based on this analysis, we propose a novel feature descriptor based on a combination of evolutionary algorithms and Binary Robust Independent Elementary Features, which we call GRIEF (Generated BRIEF). In terms of robustness to seasonal changes, the GRIEF feature descriptor outperforms the other evaluated features while being computationally more efficient.
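    A minimal sketch of the idea behind GRIEF, as described in the abstract: a BRIEF-style binary descriptor whose pixel-comparison pairs are tuned by an evolutionary loop. The patch size, descriptor length, mutation scheme, and the fitness proxy below are illustrative assumptions, not the authors' actual implementation.

```python
# Toy GRIEF-style sketch: evolve the comparison pairs of a BRIEF-like
# binary descriptor so that descriptors of the same place stay similar
# across seasons. Everything here is a simplified assumption.
import numpy as np

PATCH = 32      # patch side length, as in standard BRIEF (assumed)
N_BITS = 256    # descriptor length in bits (assumed)

def random_pairs(rng, n=N_BITS):
    # each bit compares two pixel positions inside the patch
    return rng.integers(0, PATCH, size=(n, 4))

def describe(patch, pairs):
    # bit i is 1 if intensity at (r1, c1) < intensity at (r2, c2)
    return (patch[pairs[:, 0], pairs[:, 1]] <
            patch[pairs[:, 2], pairs[:, 3]]).astype(np.uint8)

def fitness(pairs, patch_pairs):
    # proxy objective: mean bit agreement between descriptors of the
    # same place seen in two different seasons
    score = 0.0
    for summer, winter in patch_pairs:
        score += np.mean(describe(summer, pairs) == describe(winter, pairs))
    return score / len(patch_pairs)

def evolve(patch_pairs, generations=50, mutation=0.05, seed=0):
    rng = np.random.default_rng(seed)
    pairs = random_pairs(rng)
    best = fitness(pairs, patch_pairs)
    for _ in range(generations):
        child = pairs.copy()
        flip = rng.random(N_BITS) < mutation   # mutate a few comparisons
        child[flip] = random_pairs(rng)[flip]
        f = fitness(child, patch_pairs)
        if f > best:                           # keep the fitter offspring
            pairs, best = child, f
    return pairs
```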

    Training a Convolutional Neural Network for Appearance-Invariant Place Recognition

    Place recognition is one of the most challenging problems in computer vision, and has become a key component of mobile robotics and autonomous driving applications for performing loop closure in visual SLAM systems. Moreover, the difficulty of recognizing a revisited location increases with appearance changes caused, for instance, by weather or illumination variations, which hinders the long-term application of such algorithms in real environments. In this paper we present a convolutional neural network (CNN), trained for the first time with the purpose of recognizing revisited locations under severe appearance changes, which maps images to a low dimensional space where Euclidean distances represent place dissimilarity. In order for the network to learn the desired invariances, we train it with triplets of images selected from datasets which present a challenging variability in visual appearance. The triplets are selected in such a way that two samples are from the same location and the third one is taken from a different place. We validate our system through extensive experimentation, where we demonstrate better performance than state-of-the-art algorithms on a number of popular datasets.
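    The training signal described in the abstract maps naturally onto a triplet margin loss over Euclidean embedding distances. The sketch below shows that setup in PyTorch; the stand-in backbone, embedding size, and margin are assumptions for illustration, not the paper's actual architecture.

```python
# Minimal triplet-loss training sketch: anchor and positive come from the
# same place under different appearance, negative from a different place.
import torch
import torch.nn as nn

embed = nn.Sequential(               # stand-in CNN backbone (assumed)
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),               # 64-D embedding (assumed size)
)

loss_fn = nn.TripletMarginLoss(margin=0.5, p=2)  # Euclidean distance

anchor   = torch.randn(8, 3, 128, 128)  # placeholder image batches
positive = torch.randn(8, 3, 128, 128)
negative = torch.randn(8, 3, 128, 128)

loss = loss_fn(embed(anchor), embed(positive), embed(negative))
loss.backward()
```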

    Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions

    Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link virtual to real worlds. Practical visual localization approaches need to be robust to a wide variety of viewing conditions, including day-night changes, as well as weather and seasonal variations, while providing highly accurate 6 degree-of-freedom (6DOF) camera pose estimates. In this paper, we introduce the first benchmark datasets specifically designed for analyzing the impact of such factors on visual localization. Using carefully created ground truth poses for query images taken under a wide variety of conditions, we evaluate the impact of various factors on 6DOF camera pose estimation accuracy through extensive experiments with state-of-the-art localization approaches. Based on our results, we draw conclusions about the difficulty of different conditions, showing that long-term localization is far from solved, and propose promising avenues for future work, including sequence-based localization approaches and the need for better local features. Our benchmark is available at visuallocalization.net. Comment: Accepted to CVPR 2018 as a spotlight.
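    A short sketch of the kind of evaluation such a benchmark performs: position and orientation errors of an estimated 6DOF pose against ground truth, aggregated into the fraction of queries localized within given thresholds. The thresholds shown in the usage note are illustrative assumptions, not necessarily the benchmark's official values.

```python
# Pose error computation for 6DOF localization evaluation (sketch).
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    t_err = np.linalg.norm(t_est - t_gt)                    # metres
    cos = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0            # rotation angle
    r_err = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # degrees
    return t_err, r_err

def recall(errors, t_thresh, r_thresh):
    # fraction of queries within both thresholds
    ok = [(t <= t_thresh and r <= r_thresh) for t, r in errors]
    return sum(ok) / len(ok)

# e.g. recall(errs, 0.25, 2.0) for a fine regime,
#      recall(errs, 5.0, 10.0) for a coarse one (assumed thresholds)
```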

    Time delays for 11 gravitationally lensed quasars revisited

    We test the robustness of published time delays for 11 lensed quasars by using two techniques to measure time shifts in their light curves. We chose two fundamentally different techniques to determine time delays in gravitationally lensed quasars: a method based on fitting a numerical model and another derived from the minimum dispersion method introduced by Pelt and collaborators. To analyse our sample in a homogeneous way and avoid bias caused by the choice of method, we apply both methods to 11 different lensed systems for which delays have been published: JVAS B0218+357, SBS 0909+523, RX J0911+0551, FBQS J0951+2635, HE 1104-1805, PG 1115+080, JVAS B1422+231, SBS 1520+530, CLASS B1600+434, CLASS B1608+656, and HE 2149-2745. Time delays for three double lenses, JVAS B0218+357, HE 1104-1805, and CLASS B1600+434, as well as for the quadruply lensed quasar CLASS B1608+656, are confirmed within the error bars. We correct the delay for SBS 1520+530. For PG 1115+080 and RX J0911+0551, the existence of a second solution on top of the published delay is revealed. The time delays in four systems, SBS 0909+523, FBQS J0951+2635, JVAS B1422+231, and HE 2149-2745, prove to be less reliable than previously claimed. If we wish to derive an estimate of H_0 based on time delays in gravitationally lensed quasars, we need to obtain more robust light curves for most of these systems in order to achieve a higher accuracy and robustness on the time delays.
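    A toy sketch of a dispersion-style delay search in the spirit of the Pelt et al. technique mentioned above: shift one image's light curve by trial delays and keep the delay that minimizes the scatter between the two curves. Real analyses handle microlensing, uneven sampling, and measurement errors far more carefully; this is only the core idea.

```python
# Dispersion-minimization delay search between two quasar image
# light curves (sketch, simplified assumptions throughout).
import numpy as np

def dispersion(t_a, m_a, t_b, m_b, delay):
    # shift curve B back by the trial delay and compare on A's epochs
    m_b_shifted = np.interp(t_a, t_b - delay, m_b)
    return np.mean((m_a - m_b_shifted) ** 2)

def best_delay(t_a, m_a, t_b, m_b, delays):
    d = [dispersion(t_a, m_a, t_b, m_b, tau) for tau in delays]
    return delays[int(np.argmin(d))]

# e.g. best_delay(t_a, m_a, t_b, m_b, np.arange(-100, 100, 0.5))  # days
```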

    A Cross-Season Correspondence Dataset for Robust Semantic Segmentation

    In this paper, we present a method to utilize 2D-2D point matches between images taken under different imaging conditions to train a convolutional neural network for semantic segmentation. Enforcing label consistency across the matches makes the final segmentation algorithm robust to seasonal changes. We describe how these 2D-2D matches can be generated with little human interaction by geometrically matching points from 3D models built from images. Two cross-season correspondence datasets are created, providing 2D-2D matches across seasonal changes as well as from day to night. The datasets are made publicly available to facilitate further research. We show that adding the correspondences as extra supervision during training improves the segmentation performance of the convolutional neural network, making it more robust to seasonal changes and weather conditions. Comment: In Proc. CVPR 2019.
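    One plausible instantiation of the label-consistency idea, sketched below: penalize disagreement between class predictions at corresponding pixels of two images of the same scene taken in different seasons. The exact loss form (cross-entropy against the other view's hard prediction) is an assumption for illustration, not necessarily the paper's formulation.

```python
# Cross-season label-consistency loss over matched pixels (sketch).
import torch
import torch.nn.functional as F

def correspondence_loss(logits_a, logits_b, pts_a, pts_b):
    # logits_*: (C, H, W) per-pixel class scores
    # pts_*: (N, 2) integer row/col coordinates of matched points
    pred_a = logits_a[:, pts_a[:, 0], pts_a[:, 1]].T   # (N, C)
    pred_b = logits_b[:, pts_b[:, 0], pts_b[:, 1]].T   # (N, C)
    target = pred_b.argmax(dim=1).detach()             # view B's hard labels
    return F.cross_entropy(pred_a, target)
```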

    Efficient 2D-3D Matching for Multi-Camera Visual Localization

    Visual localization, i.e., determining the position and orientation of a vehicle with respect to a map, is a key problem in autonomous driving. We present a multi-camera visual inertial localization algorithm for large-scale environments. To efficiently and effectively match features against a pre-built global 3D map, we propose a prioritized feature matching scheme for multi-camera systems. In contrast to existing works, designed for monocular cameras, we (1) tailor the prioritization function to the multi-camera setup and (2) run feature matching and pose estimation in parallel. This significantly accelerates the matching and pose estimation stages and allows us to dynamically adapt the matching effort based on the surrounding environment. In addition, we show how pose priors can be integrated into the localization system to increase efficiency and robustness. Finally, we extend our algorithm by fusing the absolute pose estimates with motion estimates from a multi-camera visual inertial odometry (VIO) pipeline. This results in a system that provides reliable and drift-free pose estimation. Extensive experiments show that our localization runs fast and robustly under varying conditions, and that our extended algorithm enables reliable real-time pose estimation. Comment: 7 pages, 5 figures.
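    A bare-bones sketch of a prioritized matching loop as described above: candidate 3D map points are processed in order of a priority score, and matching terminates early once enough correspondences have been found. The scoring, the hypothetical match_fn, and the stopping rule are simplified assumptions; the paper tailors the prioritization to the multi-camera setup.

```python
# Prioritized 2D-3D matching with early termination (sketch).
import heapq

def prioritized_match(map_points, match_fn, enough=100):
    # map_points: iterable of (priority, point)
    # match_fn(point) -> a 2D-3D match or None (hypothetical callback)
    heap = [(-prio, i, pt) for i, (prio, pt) in enumerate(map_points)]
    heapq.heapify(heap)                      # highest priority first
    matches = []
    while heap and len(matches) < enough:    # stop once we have enough
        _, _, pt = heapq.heappop(heap)
        m = match_fn(pt)
        if m is not None:
            matches.append(m)
    return matches
```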