Towards Collaborative Simultaneous Localization and Mapping: a Survey of the Current Research Landscape
Motivated by the tremendous progress we witnessed in recent years, this paper
presents a survey of the scientific literature on the topic of Collaborative
Simultaneous Localization and Mapping (C-SLAM), also known as multi-robot SLAM.
With fleets of self-driving cars on the horizon and the rise of multi-robot
systems in industrial applications, we believe that Collaborative SLAM will
soon become a cornerstone of future robotic applications. In this survey, we
introduce the basic concepts of C-SLAM and present a thorough literature
review. We also outline the major challenges and limitations of C-SLAM in terms
of robustness, communication, and resource management. We conclude by exploring
the area's current trends and promising research avenues. Comment: 44 pages, 3 figures
Data-Efficient Decentralized Visual SLAM
Decentralized visual simultaneous localization and mapping (SLAM) is a
powerful tool for multi-robot applications in environments where absolute
positioning systems are not available. Being visual, it relies on cameras:
cheap, lightweight, and versatile sensors. Being decentralized, it does not
rely on communication to a central ground station. In this work, we integrate
state-of-the-art decentralized SLAM components into a new, complete
decentralized visual SLAM system. To allow for data association and
co-optimization, existing decentralized visual SLAM systems regularly exchange
the full map data between all robots, incurring large data transfers at a
complexity that scales quadratically with the robot count. In contrast, our
method performs efficient data association in two stages: in the first stage a
compact full-image descriptor is deterministically sent to only one robot. In
the second stage, which is only executed if the first stage succeeded, the data
required for relative pose estimation is sent, again to only one robot. Thus,
data association scales linearly with the robot count and uses highly compact
place representations. For optimization, a state-of-the-art decentralized
pose-graph optimization method is used. It exchanges a minimum amount of data
which is linear with trajectory overlap. We characterize the resulting system
and identify bottlenecks in its components. The system is evaluated on publicly
available data and we provide open access to the code. Comment: 8 pages, submitted to ICRA 201
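The two-stage exchange above can be illustrated with a small sketch. The key idea is that every robot applies the same deterministic rule to decide which single peer receives a given place descriptor, so stage-1 traffic scales linearly with robot count. The hash-based rule and names below are illustrative assumptions, not the paper's actual partitioning of the descriptor space.

```python
import hashlib

def responsible_robot(descriptor: bytes, num_robots: int) -> int:
    """Deterministically map a compact place descriptor to one robot.

    Every robot applies the same rule, so the stage-1 query for a given
    descriptor is sent to exactly one peer instead of being broadcast,
    which is what makes data association scale linearly with the robot
    count. (Hash-based routing is an illustrative assumption.)
    """
    digest = hashlib.sha256(descriptor).digest()
    return int.from_bytes(digest[:4], "big") % num_robots

# Stage 1: send only the compact full-image descriptor to this robot.
# Stage 2: only if that robot reports a place match, send the (larger)
#          data needed for relative pose estimation, again to it alone.
```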
Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems
This paper presents Kimera-Multi, the first multi-robot system that (i) is
robust and capable of identifying and rejecting incorrect inter- and intra-robot
loop closures resulting from perceptual aliasing, (ii) is fully distributed and
only relies on local (peer-to-peer) communication to achieve distributed
localization and mapping, and (iii) builds a globally consistent
metric-semantic 3D mesh model of the environment in real-time, where faces of
the mesh are annotated with semantic labels. Kimera-Multi is implemented by a
team of robots equipped with visual-inertial sensors. Each robot builds a local
trajectory estimate and a local mesh using Kimera. When communication is
available, robots initiate a distributed place recognition and robust pose
graph optimization protocol based on a novel distributed graduated
non-convexity algorithm. The proposed protocol allows the robots to improve
their local trajectory estimates by leveraging inter-robot loop closures while
being robust to outliers. Finally, each robot uses its improved trajectory
estimate to correct the local mesh using mesh deformation techniques.
We demonstrate Kimera-Multi in photo-realistic simulations, SLAM benchmarking
datasets, and challenging outdoor datasets collected using ground robots. Both
real and simulated experiments involve long trajectories (e.g., up to 800
meters per robot). The experiments show that Kimera-Multi (i) outperforms the
state of the art in terms of robustness and accuracy, (ii) achieves estimation
errors comparable to a centralized SLAM system while being fully distributed,
(iii) is parsimonious in terms of communication bandwidth, (iv) produces
accurate metric-semantic 3D meshes, and (v) is modular and can be also used for
standard 3D reconstruction (i.e., without semantic labels) or for trajectory
estimation (i.e., without reconstructing a 3D mesh). Comment: Accepted by IEEE Transactions on Robotics (18 pages, 15 figures)
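The distributed graduated non-convexity algorithm is the paper's own contribution; as a hedged illustration of the underlying idea, here is a minimal, centralized, scalar sketch of GNC with a truncated-least-squares cost, estimating a value from measurements that include an outlier. The scalar setting and all names are assumptions for illustration.

```python
def gnc_tls_mean(measurements, c=1.0, mu_update=1.4, iters=30):
    """Estimate a scalar from outlier-contaminated measurements using
    graduated non-convexity (GNC) with a truncated-least-squares cost.
    Starts from a convex surrogate (small mu) and gradually recovers
    the non-convex robust cost, re-solving a weighted mean each step."""
    x = sum(measurements) / len(measurements)        # non-robust init
    r_max2 = max((m - x) ** 2 for m in measurements)
    mu = c ** 2 / max(2 * r_max2 - c ** 2, 1e-9)     # convex start
    w = [1.0] * len(measurements)
    for _ in range(iters):
        # weighted least-squares step (closed form for a scalar mean)
        x = sum(wi * m for wi, m in zip(w, measurements)) / max(sum(w), 1e-9)
        # truncated-least-squares weight update under the current surrogate
        for i, m in enumerate(measurements):
            r2 = (m - x) ** 2
            if r2 <= (mu / (mu + 1)) * c ** 2:       # clear inlier
                w[i] = 1.0
            elif r2 >= ((mu + 1) / mu) * c ** 2:     # clear outlier
                w[i] = 0.0
            else:                                    # transition region
                w[i] = c * (mu * (mu + 1)) ** 0.5 / r2 ** 0.5 - mu
        mu *= mu_update                              # tighten surrogate
    return x
```

With four inliers near 1.0 and one gross outlier at 10.0, the estimate converges to the inlier mean rather than the contaminated average.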
Asynchronous Distributed Smoothing and Mapping via On-Manifold Consensus ADMM
In this paper we present a fully distributed, asynchronous, and general
purpose optimization algorithm for Consensus Simultaneous Localization and
Mapping (CSLAM). Multi-robot teams require that agents have timely and accurate
solutions to their state as well as the states of the other robots in the team.
To optimize this solution we develop a CSLAM back-end based on Consensus ADMM
called MESA (Manifold, Edge-based, Separable ADMM). MESA is fully distributed
to tolerate failures of individual robots, asynchronous to tolerate practical
network conditions, and general purpose to handle any CSLAM problem
formulation. We demonstrate that MESA exhibits superior convergence rates and
accuracy compared to existing state-of-the-art CSLAM back-end optimizers.
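As a toy illustration of the consensus-ADMM structure MESA builds on (not the MESA algorithm itself, which operates on manifold-valued poses), the sketch below runs scalar consensus ADMM: each agent keeps a local copy of a shared variable, solves its own small problem, and the copies are driven to agreement through the consensus and dual updates.

```python
def consensus_admm(local_targets, rho=1.0, iters=100):
    """Minimal consensus ADMM sketch: agent i holds a private cost
    (x - a_i)^2 and a local copy x_i; all copies must agree on z.
    A toy scalar analogue of a CSLAM back-end where the shared
    variable would be a pose on a manifold, not a scalar."""
    n = len(local_targets)
    x = list(local_targets)          # local primal variables
    u = [0.0] * n                    # scaled dual variables
    z = sum(x) / n                   # consensus variable
    for _ in range(iters):
        # local step: each agent minimizes its own augmented cost
        x = [(2 * a + rho * (z - ui)) / (2 + rho)
             for a, ui in zip(local_targets, u)]
        # consensus step: agree on the average (gather/average/scatter)
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # dual step: accumulate and penalize disagreement
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z
```

For quadratic local costs the consensus value converges to the average of the agents' private targets, the unique global minimizer.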
Relative Transformation Estimation Based on Fusion of Odometry and UWB Ranging Data
In this work, the problem of 4-degree-of-freedom (3D position and heading)
robot-to-robot relative frame transformation estimation using onboard odometry
and inter-robot distance measurements is studied. Firstly, we present a
theoretical analysis of the problem, namely the derivation and interpretation
of the Cramér-Rao Lower Bound (CRLB), the Fisher Information Matrix (FIM), and
its determinant. Secondly, we propose optimization-based methods to solve the
problem, including a quadratically constrained quadratic programming (QCQP) and
the corresponding semidefinite programming (SDP) relaxation. Moreover, we
address practical issues that are ignored in previous works, such as accounting
for spatial-temporal offsets between the ultra-wideband (UWB) and odometry
sensors, rejecting UWB outliers and checking for singular configurations before
commencing operation. Lastly, extensive simulations and real-life experiments
with aerial robots show that the proposed QCQP and SDP methods outperform
state-of-the-art methods, especially in geometrically poor or large measurement
noise conditions. In general, the QCQP method provides the best results at the
expense of computational time, while the SDP method runs much faster and is
sufficiently accurate in most cases.
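The abstract's pre-operation check for singular configurations can be illustrated by computing the determinant of the Fisher Information Matrix for a simplified 2-D (x, y, theta) version of the problem; a vanishing determinant means the relative transform is unobservable from the given geometry. The Gaussian range-noise model and all names are assumptions for illustration.

```python
import math

def range_fim(traj_a, traj_b, transform, sigma=0.1):
    """Fisher information matrix for estimating the 2-D relative
    transform (x, y, theta) between two robots from inter-robot range
    measurements, assuming Gaussian range noise with std `sigma`
    (a 2-D sketch of the 4-DoF analysis in the abstract)."""
    x, y, th = transform
    c, s = math.cos(th), math.sin(th)
    fim = [[0.0] * 3 for _ in range(3)]
    for (ax, ay), (bx, by) in zip(traj_a, traj_b):
        qx, qy = c * bx - s * by + x, s * bx + c * by + y
        dx, dy = ax - qx, ay - qy
        rng = math.hypot(dx, dy) or 1e-9
        # Jacobian of the predicted range w.r.t. (x, y, theta)
        J = (-dx / rng, -dy / rng,
             -(dx * (-s * bx - c * by) + dy * (c * bx - s * by)) / rng)
        for i in range(3):
            for j in range(3):
                fim[i][j] += J[i] * J[j] / sigma ** 2
    return fim

def det3(m):
    """Determinant of a 3x3 matrix; (near-)zero flags a singular,
    unobservable configuration before commencing operation."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
```

Two stationary robots yield a rank-deficient FIM (only the range direction is observed), while geometrically diverse trajectories give a well-conditioned one.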
Distributed Simultaneous Localisation and Auto-Calibration using Gaussian Belief Propagation
We present a novel scalable, fully distributed, and online method for
simultaneous localisation and extrinsic calibration for multi-robot setups.
Individual a priori unknown robot poses are probabilistically inferred as
robots sense each other while simultaneously calibrating their sensor and
marker extrinsics using Gaussian Belief Propagation. In the presented
experiments, we show how our method not only yields accurate robot localisation
and auto-calibration but also is able to perform under challenging
circumstances such as highly noisy measurements, significant communication
failures, or limited communication range. Comment: Published in IEEE Robotics and Automation Letters (RA-L) 202
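As a sketch of the inference machinery named in the abstract, here is scalar Gaussian Belief Propagation on a chain in information form; on this tree-structured toy graph the computed means are exact. The multi-robot system infers poses and extrinsics rather than scalars; all names here are illustrative.

```python
def gbp_chain(priors, prior_prec, pair_prec, sweeps=10):
    """Scalar Gaussian belief propagation on a chain with Gaussian
    unary priors and smoothness factors between neighbours, using
    information form (precision Lambda, information eta)."""
    n = len(priors)
    # msg[i][d]: message arriving at variable i from direction d
    # (0 = from the left neighbour, 1 = from the right neighbour)
    msg = [[(0.0, 0.0), (0.0, 0.0)] for _ in range(n)]

    def send(i, exclude):
        # belief at i from its prior plus all messages except the one
        # from the recipient, then passed through the pairwise factor
        lam, eta = prior_prec, prior_prec * priors[i]
        for d in (0, 1):
            if d != exclude:
                lam += msg[i][d][0]
                eta += msg[i][d][1]
        denom = lam + pair_prec
        return (pair_prec * lam / denom, pair_prec * eta / denom)

    for _ in range(sweeps):
        for i in range(n - 1):                 # forward sweep
            msg[i + 1][0] = send(i, exclude=1)
        for i in range(n - 1, 0, -1):          # backward sweep
            msg[i - 1][1] = send(i, exclude=0)

    # posterior means: combine prior and both incoming messages
    return [(prior_prec * priors[i] + msg[i][0][1] + msg[i][1][1]) /
            (prior_prec + msg[i][0][0] + msg[i][1][0]) for i in range(n)]
```

On a 3-variable chain with unit precisions and priors (0, 0, 10), the exact posterior means are (1.25, 2.5, 6.25), which GBP recovers after one forward-backward sweep.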
Collaborative Perception From Data Association To Localization
During the last decade, visual sensors have become ubiquitous. One or more cameras
can be found in devices ranging from smartphones to unmanned aerial vehicles and
autonomous cars. During the same time, we have witnessed the emergence of large
scale networks ranging from sensor networks to robotic swarms.
Assume multiple visual sensors perceive the same scene from different viewpoints. In
order to achieve consistent perception, the problem of correspondences between
observed features must first be solved. Then, it is often necessary to perform distributed
localization, i.e. to estimate the pose of each agent with respect to a global reference
frame. Once everything is expressed in the same coordinate system and has the
same meaning for all agents, coordinated operation of the agents and
interpretation of the jointly observed scene become possible.
The questions we address in this thesis are the following: first, can a group of visual
sensors agree on what they see, in a decentralized fashion? This is the problem of
collaborative data association. Then, based on what they see, can the visual sensors
agree on where they are, in a decentralized fashion as well? This is the problem of
cooperative localization.
The contributions of this work are five-fold. Firstly, we are the first to address
the problem of consistent multiway matching in a decentralized setting. Secondly, we propose
an efficient decentralized dynamical systems approach for computing any number of
smallest eigenvalues and the associated eigenvectors of a weighted graph with global
convergence guarantees with direct applications in group synchronization problems,
e.g., permutation or rotation synchronization. Thirdly, we propose a
state-of-the-art framework for decentralized collaborative localization for
mobile agents in the presence of unknown cross-correlations, solving a minimax
optimization problem to account for the missing information. Fourthly, we are the first to present an
approach to the 3-D rotation localization of a camera sensor network from relative
bearing measurements. Lastly, we focus on the case of a group of three visual sensors.
We propose a novel Riemannian geometric representation of the trifocal tensor which
relates projections of points and lines in three overlapping views. This
representation enables the use of state-of-the-art optimization methods on
Riemannian manifolds and of robust averaging techniques for estimating the
trifocal tensor.
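The cycle-consistency constraint at the heart of consistent multiway matching can be stated compactly: composing the A-to-B and B-to-C correspondences must reproduce the A-to-C ones. The toy check below is centralized and uses dict-based matchings as an illustrative assumption; the thesis enforces this constraint in a decentralized synchronization framework.

```python
def cycle_consistent(p_ab, p_bc, p_ac):
    """Check cycle consistency of pairwise feature matchings.

    Each matching is a dict from feature indices in one view to feature
    indices in another (an illustrative representation). Consistency
    requires p_bc[p_ab[a]] == p_ac[a] wherever all three are defined;
    violations indicate wrong correspondences that multiway matching
    must resolve."""
    for a, b in p_ab.items():
        if b in p_bc and a in p_ac and p_bc[b] != p_ac[a]:
            return False
    return True
```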
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists in the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
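The de-facto standard formulation the survey refers to is maximum-a-posteriori estimation over a factor graph, which under Gaussian noise reduces to a (nonlinear) least-squares problem. A minimal 1-D linear instance, with a prior, two odometry factors, and one slightly inconsistent loop closure, can be solved in closed form; all numbers below are illustrative.

```python
def gauss_solve(m, v):
    """Solve m x = v by Gaussian elimination with partial pivoting."""
    n = len(v)
    a = [row[:] + [vi] for row, vi in zip(m, v)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(a[r][k]))
        a[k], a[p] = a[p], a[k]
        for r in range(k + 1, n):
            f = a[r][k] / a[k][k]
            for c in range(k, n + 1):
                a[r][c] -= f * a[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (a[k][n] - sum(a[k][c] * x[c] for c in range(k + 1, n))) / a[k][k]
    return x

def solve_1d_pose_graph():
    """MAP estimation over a toy 1-D linear pose graph: a prior on x0,
    two odometry factors, and one slightly inconsistent loop closure.
    Each factor contributes a row of A and a measurement; unit
    information (noise) is assumed for every factor."""
    factors = [
        ([1, 0, 0], 0.0),    # prior:        x0      = 0
        ([-1, 1, 0], 1.0),   # odometry:     x1 - x0 = 1
        ([0, -1, 1], 1.0),   # odometry:     x2 - x1 = 1
        ([-1, 0, 1], 2.2),   # loop closure: x2 - x0 = 2.2
    ]
    n = 3
    # normal equations A^T A x = A^T b of the least-squares problem
    ata = [[sum(a[i] * a[j] for a, _ in factors) for j in range(n)]
           for i in range(n)]
    atb = [sum(a[i] * b for a, b in factors) for i in range(n)]
    return gauss_solve(ata, atb)
```

The loop closure (2.2) disagrees with accumulated odometry (2.0), so the MAP estimate spreads the 0.2 discrepancy over the trajectory: x = (0, 16/15, 32/15) rather than the raw odometry (0, 1, 2).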