Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial for SLAM users. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
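The "de-facto standard formulation" the abstract refers to is maximum-a-posteriori estimation over a factor graph, which in the linear case reduces to a sparse weighted least-squares problem. The following is a minimal 1-D sketch of that idea (our illustration, not the paper's code; the pose names, odometry values, and loop-closure measurement are invented for the example):

```python
import numpy as np

# Three robot poses x0, x1, x2 on a line. Each measurement becomes one row
# of a linear system A x = b; MAP estimation = least squares over all rows.
A = np.array([
    [ 1.0,  0.0, 0.0],   # prior factor:        x0      = 0
    [-1.0,  1.0, 0.0],   # odometry factor:     x1 - x0 = 1.0
    [ 0.0, -1.0, 1.0],   # odometry factor:     x2 - x1 = 1.0
    [-1.0,  0.0, 1.0],   # loop-closure factor: x2 - x0 = 2.1
])
b = np.array([0.0, 1.0, 1.0, 2.1])

# Solve min ||A x - b||^2; the loop closure redistributes its residual
# across the odometry chain instead of being absorbed by one pose.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Real SLAM systems solve the same structure nonlinearly (poses on SE(3), iterative Gauss-Newton or Levenberg-Marquardt), but the factor-graph-to-least-squares reduction is the same.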
Efficient Constellation-Based Map-Merging for Semantic SLAM
Data association in SLAM is fundamentally challenging, and handling ambiguity
well is crucial to achieve robust operation in real-world environments. When
ambiguous measurements arise, conservatism often mandates that the measurement
is discarded or a new landmark is initialized rather than risking an incorrect
association. To address the inevitable `duplicate' landmarks that arise, we
present an efficient map-merging framework to detect duplicate constellations
of landmarks, providing a high-confidence loop-closure mechanism well-suited
for object-level SLAM. This approach uses an incrementally-computable
approximation of landmark uncertainty that only depends on local information in
the SLAM graph, avoiding expensive recovery of the full system covariance
matrix. This enables a search based on geometric consistency (GC) (rather than
full joint compatibility (JC)) that inexpensively reduces the search space to a
handful of `best' hypotheses. Furthermore, we reformulate the commonly-used
interpretation tree to allow for more efficient integration of clique-based
pairwise compatibility, accelerating the branch-and-bound max-cardinality
search. Our method is demonstrated to match the performance of full JC methods
at significantly-reduced computational cost, facilitating robust object-based
loop-closure over large SLAM problems.
Comment: Accepted to IEEE International Conference on Robotics and Automation (ICRA) 201
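The geometric-consistency (GC) test described above can be illustrated with a deliberately simplified filter: two candidate landmark associations are pairwise consistent when they preserve the distance between the landmarks they map. The greedy removal loop below is a hypothetical stand-in for the paper's branch-and-bound max-cardinality search; the function name, threshold `tau`, and data layout are all our assumptions:

```python
import numpy as np
from itertools import combinations

def gc_filter(map_a, map_b, matches, tau=0.1):
    """Drop candidate associations until the survivors are mutually
    geometrically consistent.

    map_a, map_b: dicts mapping landmark id -> coordinate tuple.
    matches: list of (id_in_a, id_in_b) candidate associations.
    """
    keep = list(matches)
    while len(keep) > 1:
        # Count, for each match, how many other matches it conflicts with:
        # a conflict means the inter-landmark distance is not preserved.
        bad = {m: 0 for m in keep}
        for m, n in combinations(keep, 2):
            da = np.linalg.norm(np.subtract(map_a[m[0]], map_a[n[0]]))
            db = np.linalg.norm(np.subtract(map_b[m[1]], map_b[n[1]]))
            if abs(da - db) > tau:
                bad[m] += 1
                bad[n] += 1
        worst = max(keep, key=lambda m: bad[m])
        if bad[worst] == 0:
            break          # all remaining matches are pairwise consistent
        keep.remove(worst)  # greedily discard the most conflicted match
    return keep
```

Unlike full joint compatibility, this only needs pairwise landmark geometry, which is why a GC pre-filter can cheaply shrink the hypothesis space before any expensive joint test.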
Localization from semantic observations via the matrix permanent
Most approaches to robot localization rely on low-level geometric features such as points, lines, and planes. In this paper, we use object recognition to obtain semantic information from the robot’s sensors and consider the task of localizing the robot within a prior map of landmarks, which are annotated with semantic labels. As object recognition algorithms miss detections and produce false alarms, correct data association between the detections and the landmarks on the map is central to the semantic localization problem. Instead of the traditional vector-based representation, we propose a sensor model that encodes the semantic observations via random finite sets and enables a unified treatment of missed detections, false alarms, and data association. Our second contribution is to reduce the problem of computing the likelihood of a set-valued observation to the problem of computing a matrix permanent. It is this crucial transformation that allows us to solve the semantic localization problem with a polynomial-time approximation to the set-based Bayes filter. Finally, we address the active semantic localization problem, in which the observer’s trajectory is planned in order to improve the accuracy and efficiency of the localization process. The performance of our approach is demonstrated in simulation and in real environments using deformable-part-model-based object detectors. Robust global localization from semantic observations is demonstrated for a mobile robot, for the Project Tango phone, and on the KITTI visual odometry dataset. Comparisons are made with the traditional lidar-based geometric Monte Carlo localization.
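The reduction to a matrix permanent can be made concrete with Ryser's formula, which evaluates the permanent of an n×n matrix in roughly O(2^n · n^2) time, far cheaper than the naive sum over all n! data associations. The sketch below assumes a likelihood matrix `M` whose entry `M[i][j]` is the likelihood that detection i was generated by landmark j; it illustrates the general technique, not the paper's implementation:

```python
from itertools import combinations

def permanent(M):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula:
    perm(M) = sum over nonempty column subsets S of
              (-1)^(n-|S|) * prod_i sum_{j in S} M[i][j]."""
    n = len(M)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total
```

For a 2×2 matrix this is `M[0][0]*M[1][1] + M[0][1]*M[1][0]`, i.e. the determinant with all signs positive, which is exactly the sum over both possible detection-to-landmark assignments.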
Probabilistic Global Scale Estimation for MonoSLAM Based on Generic Object Detection
This paper proposes a novel method to estimate the global scale of a 3D
reconstructed model within a Kalman filtering-based monocular SLAM algorithm.
Our Bayesian framework integrates height priors over the detected objects
belonging to a set of broad predefined classes, based on recent advances in
fast generic object detection. Each observation is produced from a single
frame, so no data-association process across video frames is required: we
associate the height priors with the sizes of the image regions in which the
projections of map features fall within the object detection regions.
We present very promising results of this approach obtained on several
experiments with different object classes.
Comment: Int. Workshop on Visual Odometry, CVPR (July 2017)
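A class height prior can be turned into a per-detection scale observation: if an object of a known class has prior height h_prior (in metres) and its associated map features span h_map map units in the reconstruction, then s = h_prior / h_map estimates the global scale. Fusing several such single-frame observations as Gaussians gives a running estimate. The sketch below is our illustration, with a simple product-of-Gaussians fusion standing in for the paper's Kalman-filter integration, and all names assumed:

```python
def scale_observation(h_prior, h_map):
    """One single-frame scale observation from one detected object."""
    return h_prior / h_map

def fuse_scale(observations):
    """Fuse Gaussian scale observations [(s_i, var_i), ...] by multiplying
    the Gaussians: information (inverse variance) adds, and the mean is the
    information-weighted average. Returns (mean, variance)."""
    info = sum(1.0 / v for _, v in observations)
    mean = sum(s / v for s, v in observations) / info
    return mean, 1.0 / info
```

Because each detection yields an independent observation, more sightings of known-class objects monotonically shrink the variance of the fused scale estimate.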
Network Uncertainty Informed Semantic Feature Selection for Visual SLAM
In order to facilitate long-term localization using a visual simultaneous
localization and mapping (SLAM) algorithm, careful feature selection can help
ensure that reference points persist over long durations and the runtime and
storage complexity of the algorithm remain consistent. We present SIVO
(Semantically Informed Visual Odometry and Mapping), a novel
information-theoretic feature selection method for visual SLAM which
incorporates semantic segmentation and neural network uncertainty into the
feature selection pipeline. Our algorithm selects the points that yield the
greatest reduction in Shannon entropy between the entropy of the current state
and the joint entropy of the state given the addition of the new feature,
combined with the classification entropy of the feature from a Bayesian neural
network. Each
selected feature significantly reduces the uncertainty of the vehicle state and
has been repeatedly detected, with high confidence, as a static object
(building, traffic sign, etc.). This selection strategy generates a sparse
map which can facilitate long-term localization. The KITTI odometry dataset is
used to evaluate our method, and we also compare our results against ORB_SLAM2.
Overall, SIVO performs comparably to the baseline method while reducing the map
size by almost 70%.
Comment: Published in: 2019 16th Conference on Computer and Robot Vision (CRV)
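The selection criterion described above can be sketched as an information gain (state entropy reduction) penalised by the feature's semantic classification entropy, using the Gaussian entropy H = ½ log((2πe)^n det Σ). Everything below (function names, the thresholding interface, and the exact way the two entropy terms combine) is our assumption for illustration, not SIVO's published code:

```python
import math

def gaussian_entropy(det_sigma, n):
    """Differential entropy of an n-dimensional Gaussian with covariance
    determinant det_sigma: 0.5 * log((2*pi*e)^n * det(Sigma))."""
    return 0.5 * math.log((2 * math.pi * math.e) ** n * det_sigma)

def feature_score(det_prior, det_posterior, n, class_probs):
    # Entropy reduction of the state from adding this feature...
    info_gain = gaussian_entropy(det_prior, n) - gaussian_entropy(det_posterior, n)
    # ...penalised by the Shannon entropy of the feature's semantic class
    # distribution (high when the Bayesian network is uncertain).
    class_entropy = -sum(p * math.log(p) for p in class_probs if p > 0)
    return info_gain - class_entropy

def select_features(candidates, threshold):
    """candidates: list of (id, det_prior, det_posterior, n, class_probs).
    Keep only features whose net score clears the threshold."""
    return [fid for fid, dp, dq, n, cp in candidates
            if feature_score(dp, dq, n, cp) > threshold]
```

A feature that shrinks state uncertainty but is semantically ambiguous (uniform class probabilities) scores low, which is how confidently-classified static structures end up dominating the sparse map.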