Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper
simultaneously serves as a position paper and as a tutorial for users of
SLAM. By looking at the published research with a critical eye, we delineate
open challenges and new research issues that still deserve careful
scientific investigation. The paper also contains the authors' take on two
questions that often animate discussions during robotics conferences: Do
robots need SLAM? and Is SLAM solved?
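For reference, the de facto standard formulation mentioned above is maximum
a posteriori (MAP) estimation over a factor graph. A minimal sketch in the
community's conventional notation (the symbols are the usual ones, not taken
from this abstract): X collects the robot poses and landmarks, z_k is the
k-th measurement with model h_k and information matrix Omega_k.

```latex
% Under Gaussian noise, maximizing the posterior over the factor graph
% reduces to a nonlinear least-squares problem in the variables X.
X^{*} = \operatorname*{arg\,max}_{X} \; p(X \mid Z)
      = \operatorname*{arg\,min}_{X} \sum_{k} \bigl\lVert h_k(X_k) - z_k \bigr\rVert_{\Omega_k}^{2}
```

Solvers such as g2o, GTSAM, and Ceres minimize exactly this kind of
objective, which is why the formulation is described as the de facto
standard.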
Keyframe-based monocular SLAM: design, survey, and future directions
Extensive research in the field of monocular SLAM for the past fifteen years
has yielded workable systems that found their way into various applications in
robotics and augmented reality. Although filter-based monocular SLAM systems
were once common, the more efficient keyframe-based solutions have become
the de facto methodology for building a monocular SLAM system. The objective
of this paper is threefold: first, the paper serves as a guideline for
people seeking to design their own monocular SLAM system according to
specific environmental constraints. Second, it presents a survey covering
the various keyframe-based monocular SLAM systems in the literature,
detailing the components of their implementation and critically assessing
the specific strategies adopted in each proposed solution. Third, the paper
provides insight into the directions of future research in this field, to
address the major limitations still facing monocular SLAM, namely
illumination changes, initialization, highly dynamic motion, poorly textured
scenes, repetitive textures, map maintenance, and failure recovery.
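To make the keyframe-based design concrete, keyframe selection in such
systems usually reduces to a thresholded heuristic on tracking quality and
parallax. A minimal sketch, with an illustrative function name and threshold
values that belong to no particular system:

```python
def should_insert_keyframe(n_tracked, n_in_last_keyframe,
                           median_parallax_px,
                           min_ratio=0.7, min_parallax=20.0):
    """Heuristic keyframe-selection rule (thresholds are illustrative).

    A new keyframe is spawned when tracking quality degrades (few
    features survive from the last keyframe) or when the camera has
    translated enough for reliable triangulation of new landmarks.
    """
    tracking_weak = n_tracked < min_ratio * n_in_last_keyframe
    enough_baseline = median_parallax_px > min_parallax
    return tracking_weak or enough_baseline
```

The two branches mirror the usual trade-off: too few keyframes starve the
map of fresh landmarks, while too many inflate the cost of bundle
adjustment.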
SPINS: Structure Priors aided Inertial Navigation System
Although Simultaneous Localization and Mapping (SLAM) has been an active
research topic for decades, current state-of-the-art methods still suffer
from instability or inaccuracy in many civilian environments, caused by
feature insufficiency or inherent estimation drift. To resolve these issues,
we propose a navigation system combining SLAM and prior-map-based
localization.
Specifically, we consider additional integration of line and plane features,
which are ubiquitous and more structurally salient in civilian environments,
into the SLAM to ensure feature sufficiency and localization robustness. More
importantly, we incorporate general prior-map information into the SLAM to
constrain its drift and improve accuracy. To avoid strict association
between prior information and local observations, we parameterize the prior
knowledge as low dimensional structural priors defined as relative
distances/angles between different geometric primitives. The localization is
formulated as a graph-based optimization problem that contains
sliding-window-based variables and factors, including IMU, heterogeneous
features, and structure priors. We also derive the analytical expressions of
Jacobians of different factors to avoid the automatic differentiation overhead.
To further alleviate the computation burden of incorporating structural prior
factors, a selection mechanism is adopted based on the so-called information
gain to incorporate only the most effective structure priors in the graph
optimization. Finally, the proposed framework is extensively tested on
synthetic data, public datasets, and, more importantly, on the real UAV flight
data obtained from a building inspection task. The results show that the
proposed scheme can effectively improve the accuracy and robustness of
localization for autonomous robots in civilian applications.
Comment: 14 pages, 14 figures
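The information-gain selection described above can be pictured with a small
sketch. A hedged illustration assuming a greedy log-determinant score; the
paper's exact criterion and algorithm may differ, and all names here are
hypothetical:

```python
import numpy as np

def select_priors_by_information_gain(candidate_jacobians, info_matrix,
                                      max_priors=5):
    """Greedily pick the structural-prior factors that add the most
    information to the sliding-window optimization.

    A candidate prior with Jacobian J contributes J^T J to the system
    information matrix; we score it by the log-determinant increase,
    a standard proxy for information gain.
    """
    selected = []
    info = info_matrix.copy()
    for _ in range(max_priors):
        base = np.linalg.slogdet(info)[1]
        best_idx, best_gain = None, 0.0
        for i, J in enumerate(candidate_jacobians):
            if i in selected:
                continue
            gain = np.linalg.slogdet(info + J.T @ J)[1] - base
            if gain > best_gain:
                best_idx, best_gain = i, gain
        if best_idx is None:
            break  # no remaining prior adds measurable information
        J = candidate_jacobians[best_idx]
        info = info + J.T @ J
        selected.append(best_idx)
    return selected
```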
Geometry-Aware Learning of Maps for Camera Localization
Maps are a key component in image-based camera localization and visual SLAM
systems: they are used to establish geometric constraints between images,
correct drift in relative pose estimation, and relocalize cameras after lost
tracking. The exact definitions of maps, however, are often
application-specific and hand-crafted for different scenarios (e.g. 3D
landmarks, lines, planes, bags of visual words). We propose to represent maps
as a deep neural net called MapNet, which enables learning a data-driven map
representation. Unlike prior work on learning maps, MapNet exploits cheap and
ubiquitous sensory inputs like visual odometry and GPS in addition to images
and fuses them together for camera localization. Geometric constraints
expressed by these inputs, which have traditionally been used in bundle
adjustment or pose-graph optimization, are formulated as loss terms in MapNet
training and also used during inference. In addition to directly improving
localization accuracy, this allows us to update the MapNet (i.e., maps) in a
self-supervised manner using additional unlabeled video sequences from the
scene. We also propose a novel parameterization for camera rotation which is
better suited for deep-learning-based camera pose regression. Experimental
results on both the indoor 7-Scenes dataset and the outdoor Oxford RobotCar
dataset show significant performance improvement over prior work. The MapNet
project webpage is https://goo.gl/mRB3Au.
Comment: CVPR 2018 camera-ready paper + supplementary material
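The idea of expressing geometric constraints as loss terms can be sketched
compactly. A hedged PyTorch-style illustration of a MapNet-like objective
with absolute terms plus relative (visual-odometry-like) terms between
consecutive frames; the exact weighting and frame pairing in the paper may
differ, and the simplified relative terms below are assumptions of this
sketch:

```python
import torch

def mapnet_style_loss(pred_t, pred_logq, gt_t, gt_logq, beta=1.0, gamma=1.0):
    """Absolute pose loss plus relative pose loss over a time-ordered
    batch of N frames.

    pred_t, gt_t:       (N, 3) camera translations
    pred_logq, gt_logq: (N, 3) rotations as log quaternions
    """
    # Absolute position and orientation terms (L1, as in pose regression).
    abs_loss = ((pred_t - gt_t).abs().mean()
                + beta * (pred_logq - gt_logq).abs().mean())
    # Relative terms between consecutive frames play the role of VO
    # constraints. Differences of log quaternions only approximate the
    # true relative rotation; this is a simplification for the sketch.
    rel_loss = (((pred_t[1:] - pred_t[:-1])
                 - (gt_t[1:] - gt_t[:-1])).abs().mean()
                + beta * ((pred_logq[1:] - pred_logq[:-1])
                          - (gt_logq[1:] - gt_logq[:-1])).abs().mean())
    return abs_loss + gamma * rel_loss
```

At inference time the same relative terms can be minimized over a sliding
window of predictions, which is one way the abstract's "also used during
inference" can be realized.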
High-level environment representations for mobile robots
In most robotic applications we are faced with the problem of building
a digital representation of the environment that allows the robot to
autonomously complete its tasks. This internal representation can be
used by the robot to plan a motion trajectory for its mobile base
and/or end-effector. For most man-made environments we either do not have
a digital representation or have one that is inaccurate. Thus, the robot
must be able to build this representation autonomously, by integrating
incoming sensor measurements into an internal data structure. For this
purpose, a common solution is to solve
the Simultaneous Localization and Mapping (SLAM) problem. The map
obtained by solving a SLAM problem is called "metric", and it
describes the geometric structure of the environment. A metric map is
typically made up of low-level primitives (such as points or
voxels). This means that even though it represents the shape of the
objects in the robot's workspace, it lacks information about which
object a surface belongs to. Having an object-level representation of
the environment has the advantage of augmenting the set of possible
tasks that a robot may accomplish. To this end, in this thesis we
focus on two aspects. We propose a formalism to represent in a uniform
manner 3D scenes consisting of different geometric primitives,
including points, lines and planes. Consequently, we derive a local
registration and a global optimization algorithm that can exploit this
representation for robust estimation. Furthermore, we present a
Semantic Mapping system capable of building an object-based
map that can be used for complex task planning and execution. Our
system exploits effective reconstruction and recognition techniques
that require no a priori information about the environment and can be
used under general conditions.
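To illustrate what a uniform formalism for points, lines and planes might
look like, here is a minimal sketch; the class name, fields, and encoding
are assumptions chosen for exposition, not the thesis's actual formalism:

```python
import numpy as np

class Matchable:
    """Uniform container for a 3D geometric primitive (illustrative).

    A primitive is stored as an origin p, an orientation R whose first
    column is the direction (line) or normal (plane), and a rank that
    distinguishes the type: 0 = point, 1 = line, 2 = plane.
    """
    def __init__(self, p, R, rank):
        self.p = np.asarray(p, dtype=float)   # a point on the primitive
        self.R = np.asarray(R, dtype=float)   # 3x3 orientation matrix
        self.rank = rank                      # 0, 1 or 2

    def transform(self, T):
        """Return the primitive moved by a 4x4 rigid transform T."""
        Rot, t = T[:3, :3], T[:3, 3]
        return Matchable(Rot @ self.p + t, Rot @ self.R, self.rank)
```

Because every primitive exposes the same (p, R, rank) interface, a single
registration error term can be written once and specialized by rank, which
is the kind of uniformity that makes a joint local-registration and
global-optimization pipeline possible.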