Image-Based Localization Using Deep Neural Networks

Abstract

Image-based localization, or camera relocalization, is a fundamental problem in computer vision and robotics: estimating the camera pose from a single image. It is a key component of many computer vision applications such as autonomous vehicle navigation, mobile robotics, simultaneous localization and mapping (SLAM), and augmented reality. Many image-based localization methods have been proposed in the literature. Most state-of-the-art approaches are based on hand-crafted local features, such as SIFT, ORB, or SURF, and efficient 2D-to-3D matching against a 3D model. However, the limitations of hand-crafted feature detectors and descriptors have become the bottleneck of these approaches. Recently, promising localization approaches based on deep neural networks have been proposed. These approaches either formulate 6 DoF pose estimation directly as a regression problem or use neural networks to generate 2D-3D correspondences, so no separate feature extraction or feature matching step is required. In this thesis, we first review two state-of-the-art approaches to image-based localization. The first is a conventional approach based on hand-crafted local features (Active Search), and the second is a novel approach based on deep neural networks (DSAC). Building on the idea of DSAC, we then examine the use of conventional RANSAC and introduce a novel full-frame Coordinate CNN. We evaluate these methods on the Microsoft Research 7-Scenes dataset and make extensive comparisons. The results show that our modifications to the original DSAC pipeline lead to better performance than the two state-of-the-art approaches.
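To make the scene-coordinate idea concrete, the following is a minimal Python sketch, not the thesis implementation, of how dense 2D-3D correspondences predicted by a coordinate CNN can be fed to a conventional RANSAC-based PnP solver to recover a 6 DoF pose. The predict_scene_coordinates function, the camera intrinsics, and the RANSAC parameters are hypothetical placeholders; only the OpenCV calls are real API.

import numpy as np
import cv2

def relocalize(image, predict_scene_coordinates, camera_matrix):
    """Estimate a 6 DoF camera pose from a single RGB image.

    predict_scene_coordinates: hypothetical coordinate CNN that maps an
    H x W image to an H x W x 3 array of 3D scene coordinates, i.e. a
    dense set of 2D-3D correspondences (one per pixel).
    """
    coords = predict_scene_coordinates(image)  # shape (H, W, 3)
    h, w = coords.shape[:2]

    # Pair every pixel (u, v) with its predicted 3D scene point.
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    image_points = np.stack([us.ravel(), vs.ravel()], axis=1).astype(np.float64)
    object_points = coords.reshape(-1, 3).astype(np.float64)

    # Conventional RANSAC over a PnP solver rejects outlier predictions
    # and returns the pose of the hypothesis with the most inliers.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points, image_points, camera_matrix, None,
        reprojectionError=8.0, iterationsCount=256)
    if not ok:
        raise RuntimeError("RANSAC failed to find a consistent pose")

    R, _ = cv2.Rodrigues(rvec)  # rotation matrix from axis-angle vector
    return R, tvec              # world-to-camera pose

Replacing DSAC's differentiable hypothesis scoring with this kind of conventional RANSAC at test time is one of the modifications the thesis evaluates; the sketch above only illustrates the general pipeline shape.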
