From Monocular SLAM to Autonomous Drone Exploration
Micro aerial vehicles (MAVs) are strongly limited in their payload and power
capacity. To enable autonomous navigation, algorithms are therefore desirable
that use sensory equipment that is as small, lightweight, and power-efficient
as possible. In this paper, we propose a method for
autonomous MAV navigation and exploration using a low-cost consumer-grade
quadrocopter equipped with a monocular camera. Our vision-based navigation
system builds on LSD-SLAM which estimates the MAV trajectory and a semi-dense
reconstruction of the environment in real-time. Since LSD-SLAM only determines
depth at high gradient pixels, texture-less areas are not directly observed so
that previous exploration methods that assume dense map information cannot
directly be applied. We propose an obstacle mapping and exploration approach
that takes the properties of our semi-dense monocular SLAM system into account.
In experiments, we demonstrate our vision-based autonomous navigation and
exploration system with a Parrot Bebop MAV.
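The key consequence of semi-dense SLAM for exploration is that texture-less regions are *unobserved*, not free, so the obstacle map must distinguish three states rather than two. A toy sketch of that distinction follows; the three-state grid and the `update_grid` function are illustrative assumptions, not the paper's actual mapping implementation:

```python
import numpy as np

# Three-state occupancy: texture-less areas never leave UNKNOWN, because
# the semi-dense SLAM only produces depth at high-gradient pixels.
UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def update_grid(grid, semi_dense_points, free_rays):
    """Toy 2-D occupancy update for a semi-dense map: only cells actually
    observed (a depth estimate, or a ray traversed on the way to one)
    leave UNKNOWN; the planner must treat UNKNOWN differently from FREE."""
    for (r, c) in free_rays:          # cells a depth ray passed through
        if grid[r, c] != OCCUPIED:
            grid[r, c] = FREE
    for (r, c) in semi_dense_points:  # cells where a depth estimate landed
        grid[r, c] = OCCUPIED
    return grid

grid = np.full((5, 5), UNKNOWN, dtype=np.uint8)
update_grid(grid, semi_dense_points=[(2, 4)], free_rays=[(2, 1), (2, 2), (2, 3)])
```

In such a scheme, a frontier-based explorer would target the boundary between FREE and UNKNOWN cells rather than assuming unobserved space is traversable.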
Keyframe-based monocular SLAM: design, survey, and future directions
Extensive research in the field of monocular SLAM for the past fifteen years
has yielded workable systems that have found their way into various applications in
robotics and augmented reality. Although filter-based monocular SLAM systems
were once common, the more efficient keyframe-based solutions are
becoming the de facto methodology for building a monocular SLAM system. The
objective of this paper is threefold: first, the paper serves as a guideline
for people seeking to design their own monocular SLAM according to specific
environmental constraints. Second, it presents a survey that covers the various
keyframe-based monocular SLAM systems in the literature, detailing the
components of their implementation, and critically assessing the specific
strategies adopted in each proposed solution. Third, the paper provides insight
into the direction of future research in this field, to address the major
limitations still facing monocular SLAM; namely, in the issues of illumination
changes, initialization, highly dynamic motion, poorly textured scenes,
repetitive textures, map maintenance, and failure recovery.
Data-Efficient Decentralized Visual SLAM
Decentralized visual simultaneous localization and mapping (SLAM) is a
powerful tool for multi-robot applications in environments where absolute
positioning systems are not available. Being visual, it relies on cameras,
which are cheap, lightweight, and versatile sensors; being decentralized, it
does not rely on communication to a central ground station. In this work, we integrate
state-of-the-art decentralized SLAM components into a new, complete
decentralized visual SLAM system. To allow for data association and
co-optimization, existing decentralized visual SLAM systems regularly exchange
the full map data between all robots, incurring large data transfers at a
complexity that scales quadratically with the robot count. In contrast, our
method performs efficient data association in two stages: in the first stage a
compact full-image descriptor is deterministically sent to only one robot. In
the second stage, which is only executed if the first stage succeeded, the data
required for relative pose estimation is sent, again to only one robot. Thus,
data association scales linearly with the robot count and uses highly compact
place representations. For optimization, a state-of-the-art decentralized
pose-graph optimization method is used. It exchanges a minimum amount of data
which is linear with trajectory overlap. We characterize the resulting system
and identify bottlenecks in its components. The system is evaluated on publicly
available data, and we provide open access to the code. Comment: 8 pages, submitted to ICRA 201
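The two-stage data association above can be illustrated with a toy sketch: every robot deterministically routes a compact descriptor to a single peer, and only a stage-1 match triggers the heavier geometric transfer. The hash-based routing, class names, and payload sizes below are assumptions for illustration, not the paper's implementation:

```python
import hashlib

def assigned_robot(descriptor: bytes, num_robots: int) -> int:
    # Deterministic routing: hashing the compact full-image descriptor lets
    # every robot agree, without coordination, on the single peer
    # responsible for this place query.
    h = hashlib.sha256(descriptor).digest()
    return int.from_bytes(h[:4], "big") % num_robots

class Robot:
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.place_db = {}       # descriptor -> frame metadata
        self.bytes_received = 0

    def stage_one(self, descriptor):
        # Stage 1: receive only the compact descriptor and answer
        # whether it matches a previously seen place.
        self.bytes_received += len(descriptor)
        return self.place_db.get(descriptor)

    def stage_two(self, geometric_data):
        # Stage 2: runs only after a stage-1 hit; receives the larger
        # payload needed for relative pose estimation.
        self.bytes_received += len(geometric_data)
        return "relative_pose_estimate"

robots = [Robot(i) for i in range(4)]
desc = b"compact-place-descriptor"
robots[assigned_robot(desc, 4)].place_db[desc] = {"frame": 17}

target = robots[assigned_robot(desc, 4)]
match = target.stage_one(desc)
pose = target.stage_two(b"x" * 2048) if match else None
```

Because each query touches exactly one deterministic peer in both stages, per-query traffic is independent of the team size, so total data association cost grows linearly with the robot count instead of quadratically as with full-map broadcast.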
GSLAM: Initialization-robust Monocular Visual SLAM via Global Structure-from-Motion
Many monocular visual SLAM algorithms are derived from incremental
structure-from-motion (SfM) methods. This work proposes a novel monocular SLAM
method which integrates recent advances made in global SfM. In particular, we
present two main contributions to visual SLAM. First, we solve the visual
odometry problem by a novel rank-1 matrix factorization technique which is more
robust to the errors in map initialization. Second, we adopt a recent global
SfM method for the pose-graph optimization, which leads to a multi-stage linear
formulation and enables L1 optimization for better robustness to false loops.
The combination of these two approaches generates more robust reconstruction
and is significantly faster (4X) than recent state-of-the-art SLAM systems. We
also present a new dataset recorded with ground truth camera motion in a Vicon
motion capture room, and compare our method to prior systems on it and
established benchmark datasets. Comment: 3DV 2017. Project Page: https://frobelbest.github.io/gsla
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?