An Underwater SLAM System using Sonar, Visual, Inertial, and Depth Sensor
This paper presents a novel tightly-coupled keyframe-based Simultaneous
Localization and Mapping (SLAM) system with loop-closing and relocalization
capabilities targeted for the underwater domain. Our previous work, SVIn,
augmented the state-of-the-art visual-inertial state estimation package OKVIS
to accommodate acoustic data from sonar in a non-linear optimization-based
framework. This paper addresses drift and loss of localization -- one of the
main problems affecting other packages in the underwater domain -- by providing the
following main contributions: a robust initialization method to refine scale
using depth measurements, a fast preprocessing step to enhance the image
quality, and a real-time loop-closing and relocalization method using bag of
words (BoW). A further contribution is the integration of depth measurements
from a pressure sensor into the tightly-coupled optimization formulation.
Experimental results on datasets collected with a custom-made underwater sensor
suite and an autonomous underwater vehicle from challenging underwater
environments with poor visibility demonstrate accuracy and robustness not
achieved by previous systems.
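The contribution of adding pressure-sensor depth to the tightly-coupled optimization can be illustrated with a minimal sketch. The constants and the whitened-residual form below are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

# Assumed constants for a hydrostatic pressure-to-depth model.
RHO_WATER = 1025.0   # seawater density [kg/m^3] (assumed)
G = 9.81             # gravitational acceleration [m/s^2]
P_ATM = 101325.0     # atmospheric pressure at the surface [Pa]

def pressure_to_depth(pressure_pa):
    """Hydrostatic conversion: depth below the surface from absolute pressure."""
    return (pressure_pa - P_ATM) / (RHO_WATER * G)

def depth_residual(state_z, pressure_pa, sigma=0.02):
    """Whitened residual between the estimated vertical position (state_z,
    positive down) and the depth implied by the pressure reading; terms of
    this form can be stacked into a non-linear least-squares problem."""
    return (state_z - pressure_to_depth(pressure_pa)) / sigma

# Example: the absolute pressure corresponding to 2 m depth.
p = P_ATM + RHO_WATER * G * 2.0
```

In a real tightly-coupled formulation such residuals would be added alongside the visual, inertial, and sonar terms; here the residual only shows the shape of the measurement constraint.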
Semantic Visual Localization
Robust visual localization under a wide range of viewing conditions is a
fundamental problem in computer vision. Handling the difficult cases of this
problem is not only very challenging but also of high practical relevance,
e.g., in the context of life-long localization for augmented reality or
autonomous robots. In this paper, we propose a novel approach based on a joint
3D geometric and semantic understanding of the world, enabling it to succeed
under conditions where previous approaches failed. Our method leverages a novel
generative model for descriptor learning, trained on semantic scene completion
as an auxiliary task. The resulting 3D descriptors are robust to missing
observations by encoding high-level 3D geometric and semantic information.
Experiments on several challenging large-scale localization datasets
demonstrate reliable localization under extreme viewpoint, illumination, and
geometry changes.
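At query time, localization with learned descriptors reduces to nearest-neighbor search in descriptor space. The following sketch shows only that generic retrieval step (cosine similarity over L2-normalized vectors); the descriptor learning itself, which is the paper's contribution, is not reproduced here:

```python
import numpy as np

def retrieve(query_desc, db_descs, k=1):
    """Rank database descriptors by cosine similarity to the query and
    return the indices of the top-k matches."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarities
    return np.argsort(-sims)[:k]       # best matches first
```

Robustness to viewpoint and illumination change then depends entirely on how invariant the learned descriptors are, which is what the semantic auxiliary task is designed to improve.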
LDSO: Direct Sparse Odometry with Loop Closure
In this paper we present an extension of Direct Sparse Odometry (DSO) to a
monocular visual SLAM system with loop closure detection and pose-graph
optimization (LDSO). As a direct technique, DSO can utilize any image pixel
with sufficient intensity gradient, which makes it robust even in featureless
areas. LDSO retains this robustness, while at the same time ensuring
repeatability of some of these points by favoring corner features in the
tracking frontend. This repeatability allows loop closure candidates to be
reliably detected with a conventional feature-based bag-of-words (BoW)
approach. Loop
closure candidates are verified geometrically and Sim(3) relative pose
constraints are estimated by jointly minimizing 2D and 3D geometric error
terms. These constraints are fused with a co-visibility graph of relative poses
extracted from DSO's sliding window optimization. Our evaluation on publicly
available datasets demonstrates that the modified point selection strategy
retains the tracking accuracy and robustness, and the integrated pose-graph
optimization significantly reduces the accumulated rotation-, translation- and
scale-drift, resulting in an overall performance comparable to state-of-the-art
feature-based systems, even without global bundle adjustment.
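The Sim(3) relative-pose constraints mentioned above can be sketched with a minimal similarity-transform algebra. Representing each pose as a scale, rotation, and translation triple is an illustrative choice, not LDSO's exact implementation:

```python
import numpy as np

# A Sim(3) element is (s, R, t): scale s > 0, rotation R (3x3), translation t.
# It acts on a point x as s * R @ x + t.

def sim3_inv(s, R, t):
    """Inverse of a Sim(3) element."""
    Rinv = R.T
    return 1.0 / s, Rinv, -(1.0 / s) * (Rinv @ t)

def sim3_mul(a, b):
    """Composition a * b of two Sim(3) elements."""
    sa, Ra, ta = a
    sb, Rb, tb = b
    return sa * sb, Ra @ Rb, sa * (Ra @ tb) + ta

def relative(a, b):
    """Relative transform a^{-1} * b between two poses; a loop-closure edge
    stores a measured value of this, and the pose-graph residual compares it
    with the value implied by the current estimates."""
    return sim3_mul(sim3_inv(*a), b)
```

Including the scale factor in the constraint is what lets the pose graph correct the scale drift that a monocular system accumulates, alongside rotation and translation drift.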
Enhanced Image-Aided Navigation Algorithm with Automatic Calibration and Affine Distortion Prediction
This research aims to improve two key steps within the image-aided navigation process: camera calibration and landmark tracking. The camera calibration step is improved by automating the point-correspondence calculation within the standard camera calibration algorithm, reducing the time required for calibration while maintaining the accuracy of the output model. The feature landmark tracking step is improved by digitally simulating affine distortions on input images in order to compute more accurate feature descriptors, improving feature matching under large relative viewpoint changes. These techniques are experimentally demonstrated in an outdoor environment with a consumer-grade inertial sensor and three imaging sensors, one of which is oriented orthogonally to the rest. Using a tactical-grade inertial sensor coupled with GPS position data as a reference, the improved image-aided navigation algorithm is shown to reduce navigation errors by 24% in position, 16% in velocity, and 35% in attitude compared to the standard image-aided navigation algorithm.
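Simulating affine distortions for viewpoint-robust matching can be sketched by enumerating warp matrices over tilt and in-plane rotation, in the spirit of ASIFT-style simulation. The parameter grid and the tilt model below are assumptions, not the paper's exact procedure:

```python
import numpy as np

def affine_simulations(tilts=(1.0, 1.41, 2.0), phis_deg=(0.0, 45.0, 90.0)):
    """Enumerate 2x3 affine warp matrices approximating camera viewpoint
    changes: an in-plane rotation by phi followed by a 1/t squeeze along x.
    Descriptors computed on each warped image can then be matched against
    the other view, covering larger viewpoint changes."""
    mats = []
    for t in tilts:
        for phi in np.deg2rad(phis_deg):
            R = np.array([[np.cos(phi), -np.sin(phi)],
                          [np.sin(phi),  np.cos(phi)]])
            T = np.diag([1.0 / t, 1.0])   # tilt: squeeze along x by 1/t
            A = T @ R
            mats.append(np.hstack([A, np.zeros((2, 1))]))
    return mats
```

Each matrix would be applied to the input image (e.g. with an image library's affine-warp routine) before descriptor extraction, trading extra computation for matches that survive large viewpoint change.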
Leveraging Deep Visual Descriptors for Hierarchical Efficient Localization
Many robotics applications require precise pose estimates despite operating
in large and changing environments. This can be addressed by visual
localization, using a pre-computed 3D model of the surroundings. The pose
estimation then amounts to finding correspondences between 2D keypoints in a
query image and 3D points in the model using local descriptors. However,
computational power is often limited on robotic platforms, making this task
challenging in large-scale environments. Binary feature descriptors
significantly speed up this 2D-3D matching, and have become popular in the
robotics community, but also strongly impair the robustness to perceptual
aliasing and changes in viewpoint, illumination and scene structure. In this
work, we propose to leverage recent advances in deep learning to perform an
efficient hierarchical localization. We first localize at the map level using
learned image-wide global descriptors, and subsequently estimate a precise pose
from 2D-3D matches computed in the candidate places only. This restricts the
local search and thus makes it possible to efficiently exploit powerful non-binary
descriptors usually dismissed on resource-constrained devices. Our approach
results in state-of-the-art localization performance while running in real-time
on a popular mobile platform, enabling new prospects for robotics research.
Comment: CoRL 2018 camera-ready (fixed typos and updated citations).
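The coarse-to-fine scheme described above can be sketched in two steps: a global-descriptor shortlist, then local matching restricted to the shortlisted places. The mutual-nearest-neighbor matcher and the "most matches wins" criterion are illustrative simplifications, not the paper's full pipeline:

```python
import numpy as np

def hierarchical_localize(q_global, q_local, db_global, db_local_fn, k=3):
    """Two-stage localization sketch.
    Step 1: shortlist k candidate places by global-descriptor similarity.
    Step 2: match local descriptors only within those candidates (mutual
    nearest neighbors) and return the candidate with the most matches."""
    qg = q_global / np.linalg.norm(q_global)
    dbg = db_global / np.linalg.norm(db_global, axis=1, keepdims=True)
    candidates = np.argsort(-(dbg @ qg))[:k]

    best, best_matches = None, -1
    for c in candidates:
        db_loc = db_local_fn(c)  # local descriptors stored for place c
        d = np.linalg.norm(q_local[:, None] - db_loc[None], axis=2)
        fwd = d.argmin(axis=1)   # query -> database nearest neighbors
        bwd = d.argmin(axis=0)   # database -> query nearest neighbors
        n = int((bwd[fwd] == np.arange(len(q_local))).sum())  # mutual NNs
        if n > best_matches:
            best, best_matches = int(c), n
    return best, best_matches
```

In the full system the surviving 2D-3D matches would feed a PnP + RANSAC pose solver; restricting matching to a few candidate places is what keeps expensive non-binary descriptors affordable on a mobile platform.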