Efficient 2D-3D Matching for Multi-Camera Visual Localization
Visual localization, i.e., determining the position and orientation of a
vehicle with respect to a map, is a key problem in autonomous driving. We
present a multi-camera visual-inertial localization algorithm for large-scale
environments. To efficiently and effectively match features against a pre-built
global 3D map, we propose a prioritized feature matching scheme for
multi-camera systems. In contrast to existing works, designed for monocular
cameras, we (1) tailor the prioritization function to the multi-camera setup
and (2) run feature matching and pose estimation in parallel. This
significantly accelerates the matching and pose estimation stages and allows us
to dynamically adapt the matching efforts based on the surrounding environment.
In addition, we show how pose priors can be integrated into the localization
system to increase efficiency and robustness. Finally, we extend our algorithm
by fusing the absolute pose estimates with motion estimates from a multi-camera
visual-inertial odometry (VIO) pipeline. This results in a system that provides
reliable and drift-free pose estimation. Extensive experiments show that our
localization runs fast and robustly under varying conditions, and that our
extended algorithm enables reliable real-time pose estimation.
Comment: 7 pages, 5 figures
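The prioritized matching with early termination described above can be sketched as follows. This is a minimal single-camera illustration, not the paper's actual scheme: the function name, the descriptor ratio test, and the fixed match budget are all simplifying assumptions, and the paper's multi-camera prioritization function is not reproduced here.

```python
import numpy as np

def prioritized_match(query_descs, map_descs, priorities,
                      max_matches=50, ratio=0.8):
    # Process features in decreasing priority and stop once enough 2D-3D
    # matches are found, so matching effort adapts to the scene.
    order = np.argsort(-np.asarray(priorities))
    matches = []
    for i in order:
        dists = np.linalg.norm(map_descs - query_descs[i], axis=1)
        nn, nn2 = np.partition(dists, 1)[:2]   # two smallest distances
        if nn < ratio * nn2:                   # Lowe's ratio test
            matches.append((int(i), int(np.argmin(dists))))
        if len(matches) >= max_matches:        # early termination
            break
    return matches
```

In a parallelized variant, a pose-estimation thread would consume `matches` as they arrive and signal the matcher to stop once a confident pose is found.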
Visual-Inertial Mapping with Non-Linear Factor Recovery
Cameras and inertial measurement units are complementary sensors for
ego-motion estimation and environment mapping. Their combination makes
visual-inertial odometry (VIO) systems more accurate and robust. For globally
consistent mapping, however, combining visual and inertial information is not
straightforward. To estimate the motion and geometry from a set of images, large
baselines are required. Because of that, most systems operate on keyframes that
have large time intervals between them. Inertial data, on the other hand,
quickly degrades as these intervals grow; after several seconds of
integration, it typically contains little useful information.
In this paper, we propose to extract relevant information for visual-inertial
mapping from visual-inertial odometry using non-linear factor recovery. We
reconstruct a set of non-linear factors that make an optimal approximation of
the information on the trajectory accumulated by VIO. To obtain a globally
consistent map, we combine these factors with loop-closing constraints using
bundle adjustment. The VIO factors make the roll and pitch angles of the global
map observable, and improve the robustness and the accuracy of the mapping. In
experiments on a public benchmark, we demonstrate superior performance of our
method over state-of-the-art approaches.
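The final step, combining odometry-derived factors with loop-closing constraints in a single optimization, can be illustrated on a toy 1D pose graph. All values here are hypothetical, and this linear toy does not capture the paper's actual contribution of recovering non-linear factors that approximate the VIO information.

```python
import numpy as np

# Toy 1D pose graph: 4 poses, three odometry factors (playing the role of
# factors recovered from VIO) and one loop-closing factor, solved jointly
# by linear least squares. Residuals: x_{i+1} - x_i - delta, x_3 - x_0 - loop.
odometry = [1.0, 1.1, 0.9]   # relative motion estimates between poses
loop = 2.8                   # loop closure: pose 3 relative to pose 0

A, b = [], []
for i, d in enumerate(odometry):
    row = np.zeros(4); row[i] = -1.0; row[i + 1] = 1.0
    A.append(row); b.append(d)
row = np.zeros(4); row[0] = -1.0; row[3] = 1.0
A.append(row); b.append(loop)

# Fix the gauge freedom by softly anchoring the first pose at 0.
A.append(np.eye(4)[0]); b.append(0.0)
x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
# x ≈ [0.0, 0.95, 2.0, 2.85]: the loop closure redistributes odometry drift
```

Bundle adjustment generalizes this to non-linear reprojection and VIO factors over 6-DoF poses, but the structure, odometry factors plus loop closures in one least-squares problem, is the same.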
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper serves both
as a position paper and as a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
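The de-facto standard formulation this survey refers to is maximum a posteriori estimation over a factor graph, which under Gaussian noise reduces to non-linear least squares (notation here is the conventional one, not copied from the paper):

```latex
\mathcal{X}^{\star}
  = \arg\max_{\mathcal{X}} \; p(\mathcal{X}) \prod_{k} p(z_k \mid \mathcal{X}_k)
  = \arg\min_{\mathcal{X}} \; \sum_{k} \lVert h_k(\mathcal{X}_k) - z_k \rVert_{\Sigma_k}^{2}
```

where $\mathcal{X}$ stacks the robot trajectory and map, $z_k$ are the measurements, $h_k$ the measurement models, and $\Sigma_k$ the measurement noise covariances.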
Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots
Safety is paramount for mobile robotic platforms such as self-driving cars
and unmanned aerial vehicles. This work is devoted to a task that is
indispensable for safety yet was largely overlooked in the past -- detecting
obstacles that are of very thin structures, such as wires, cables and tree
branches. This is a challenging problem, as thin objects can be problematic for
active sensors such as lidar and sonar and even for stereo cameras. In this
work, we propose to use video sequences for thin obstacle detection. We
represent obstacles with edges in the video frames, and reconstruct them in 3D
using efficient edge-based visual odometry techniques. We provide both a
monocular camera solution and a stereo camera solution. The former incorporates
Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter
enjoys a novel, purely vision-based solution. Experiments demonstrated that the
proposed methods are fast and able to detect thin obstacles robustly and
accurately under various conditions.
Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision
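For the stereo solution, depth along a detected edge follows directly from disparity. A minimal sketch of that step, with hypothetical focal length and baseline values and a function name not taken from the paper:

```python
def edge_depth_stereo(disparities_px, focal_px=700.0, baseline_m=0.12):
    """Depth of matched edge pixels from stereo disparity: Z = f * B / d.
    Thin structures yield few pixels, but each valid disparity along the
    edge still gives a metric depth; zero disparities are skipped."""
    return [focal_px * baseline_m / d for d in disparities_px if d > 0]
```

The monocular variant cannot use a known baseline, which is why the paper fuses IMU data to resolve the scale ambiguity.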
RD-VIO: Robust Visual-Inertial Odometry for Mobile Augmented Reality in Dynamic Environments
It is typically challenging for visual or visual-inertial odometry systems to
handle the problems of dynamic scenes and pure rotation. In this work, we
design a novel visual-inertial odometry (VIO) system called RD-VIO to handle
both of these two problems. Firstly, we propose an IMU-PARSAC algorithm which
can robustly detect and match keypoints in a two-stage process. In the first
stage, landmarks are matched with new keypoints using visual and IMU
measurements. We collect statistical information from the matching and then
guide the intra-keypoint matching in the second stage. Secondly, to handle the
problem of pure rotation, we detect the motion type and adapt the
deferred-triangulation technique during the data-association process. We turn
pure-rotational frames into special subframes. When solving the
visual-inertial bundle adjustment, these subframes provide additional
constraints on the pure-rotational motion. We evaluate the proposed VIO system
on public datasets.
Experiments show that the proposed RD-VIO has clear advantages over other
methods in dynamic environments.
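A motion-type check of the kind described above can be sketched as follows. This is a hypothetical simplification, not RD-VIO's actual test: it warps keypoints by a gyro-predicted rotation and treats small residual parallax as evidence of pure rotation, in which case triangulation would be deferred.

```python
import numpy as np

def is_pure_rotation(pts_prev, pts_curr, R, K, thresh_px=1.0):
    # Back-project previous keypoints, rotate them by the IMU-predicted
    # rotation R, and re-project with intrinsics K. If the median residual
    # parallax is tiny, translation is negligible: likely pure rotation.
    Kinv = np.linalg.inv(K)
    ones = np.ones((len(pts_prev), 1))
    rays = (Kinv @ np.hstack([pts_prev, ones]).T).T   # pixel -> bearing ray
    warped = (K @ (R @ rays.T)).T                     # rotate and project
    warped = warped[:, :2] / warped[:, 2:3]
    parallax = np.linalg.norm(warped - pts_curr, axis=1)
    return float(np.median(parallax)) < thresh_px
```

Frames flagged this way would then be handled as subframes, since triangulating landmarks from rotation-only motion is ill-conditioned.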