Leveraging Deep Visual Descriptors for Hierarchical Efficient Localization
Many robotics applications require precise pose estimates despite operating
in large and changing environments. This can be addressed by visual
localization, using a pre-computed 3D model of the surroundings. The pose
estimation then amounts to finding correspondences between 2D keypoints in a
query image and 3D points in the model using local descriptors. However,
computational power is often limited on robotic platforms, making this task
challenging in large-scale environments. Binary feature descriptors
significantly speed up this 2D-3D matching, and have become popular in the
robotics community, but also strongly impair the robustness to perceptual
aliasing and changes in viewpoint, illumination and scene structure. In this
work, we propose to leverage recent advances in deep learning to perform
efficient hierarchical localization. We first localize at the map level using
learned image-wide global descriptors, and subsequently estimate a precise pose
from 2D-3D matches computed in the candidate places only. This restricts the
local search and thus allows us to efficiently exploit powerful non-binary
descriptors usually dismissed on resource-constrained devices. Our approach
results in state-of-the-art localization performance while running in real time
on a popular mobile platform, enabling new prospects for robotics research.

Comment: CoRL 2018 camera-ready (fix typos and update citations)
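
To make the coarse-to-fine scheme concrete, here is a minimal Python sketch, not the authors' code: a global-descriptor nearest-neighbour search shortlists candidate places, and 2D-3D descriptor matching with a ratio test is then run only inside that shortlist. All array names and the ratio-test threshold are illustrative assumptions.

    import numpy as np

    def retrieve_candidates(query_global, db_globals, k=5):
        """Coarse step: shortlist the k database places whose global
        descriptors are most similar to the query's (cosine similarity)."""
        sims = db_globals @ query_global / (
            np.linalg.norm(db_globals, axis=1)
            * np.linalg.norm(query_global) + 1e-8)
        return np.argsort(-sims)[:k]  # indices of the k best places

    def match_local(query_descs, place_descs, ratio=0.8):
        """Fine step: nearest-neighbour 2D-3D descriptor matching with a
        ratio test, restricted to one candidate place's 3D points.
        Assumes the place holds at least two 3D point descriptors."""
        dists = np.linalg.norm(
            query_descs[:, None, :] - place_descs[None, :, :], axis=2)
        matches = []
        for i, row in enumerate(dists):
            nn1, nn2 = np.argsort(row)[:2]
            if row[nn1] < ratio * row[nn2]:  # keep unambiguous matches only
                matches.append((i, int(nn1)))
        return matches

The 2D-3D matches gathered over the shortlisted places would then feed a standard PnP-with-RANSAC solver (e.g. OpenCV's cv2.solvePnPRansac) to recover the precise pose.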
Augmenting Visual Place Recognition with Structural Cues
In this paper, we propose to augment image-based place recognition with
structural cues. Specifically, these structural cues are obtained using
structure-from-motion, such that no additional sensors are needed for place
recognition. This is achieved by augmenting the 2D convolutional neural network
(CNN) typically used for image-based place recognition with a 3D CNN that takes
as input a voxel grid derived from the structure-from-motion point cloud. We
evaluate different methods for fusing the 2D and 3D features and obtain the best
performance with global average pooling and simple concatenation. On the Oxford
RobotCar dataset, the resulting descriptor exhibits superior recognition
performance compared to descriptors extracted from only one of the input
modalities, including state-of-the-art image-based descriptors. Especially at
low descriptor dimensionalities, we outperform state-of-the-art descriptors by
up to 90%.

Comment: 8 pages, published in RA-L & IROS 2020
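
Below is a minimal PyTorch sketch of the fusion variant the abstract reports working best: global average pooling on each branch, followed by simple concatenation. The single-layer branches, layer widths, and class name are illustrative assumptions, not the paper's actual architecture.

    import torch
    import torch.nn as nn

    class FusedPlaceDescriptor(nn.Module):
        def __init__(self, dim_2d=256, dim_3d=128):
            super().__init__()
            # Stand-in 2D branch: image -> feature map.
            self.cnn2d = nn.Sequential(
                nn.Conv2d(3, dim_2d, kernel_size=3, padding=1), nn.ReLU())
            # Stand-in 3D branch: structure-from-motion voxel grid -> feature volume.
            self.cnn3d = nn.Sequential(
                nn.Conv3d(1, dim_3d, kernel_size=3, padding=1), nn.ReLU())

        def forward(self, image, voxels):
            f2d = self.cnn2d(image)        # (B, dim_2d, H, W)
            f3d = self.cnn3d(voxels)       # (B, dim_3d, D, H, W)
            g2d = f2d.mean(dim=(2, 3))     # global average pooling, 2D branch
            g3d = f3d.mean(dim=(2, 3, 4))  # global average pooling, 3D branch
            return torch.cat([g2d, g3d], dim=1)  # simple concatenation

    # Example: one RGB image plus a 32^3 occupancy grid of the same place.
    model = FusedPlaceDescriptor()
    desc = model(torch.randn(1, 3, 240, 320), torch.randn(1, 1, 32, 32, 32))

The concatenated vector serves as the place descriptor, to be compared across database places with a standard distance such as Euclidean or cosine.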