Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper serves
both as a position paper and as a tutorial for SLAM users. By looking at
the published research with a critical eye, we delineate open challenges
and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions
that often animate discussions during robotics conferences: Do robots need
SLAM? And is SLAM solved?
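The de-facto standard formulation this survey refers to is maximum a posteriori (MAP) estimation over a factor graph; under Gaussian noise assumptions it reduces to a nonlinear least-squares problem. A sketch with illustrative symbols (X the robot and map variables, z_k the measurements, h_k the measurement models, Omega_k the information matrices):

```latex
X^{\star} = \operatorname*{arg\,max}_{X} \, p(X \mid Z)
          = \operatorname*{arg\,min}_{X} \sum_{k}
            \lVert h_k(X_k) - z_k \rVert^{2}_{\Omega_k}
```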
High-Precision Localization Using Ground Texture
Location-aware applications play an increasingly critical role in everyday
life. However, satellite-based localization (e.g., GPS) has limited accuracy
and can be unusable in dense urban areas and indoors. We introduce an
image-based global localization system that is accurate to a few millimeters
and performs reliable localization both indoors and outdoors. The key idea is to
capture and index distinctive local keypoints in ground textures. This is based
on the observation that ground textures including wood, carpet, tile, concrete,
and asphalt may look random and homogeneous, but all contain cracks, scratches,
or unique arrangements of fibers. These imperfections are persistent, and can
serve as local features. Our system incorporates a downward-facing camera to
capture the fine texture of the ground, together with an image processing
pipeline that locates the captured texture patch in a compact database
constructed offline. We demonstrate the capability of our system to robustly,
accurately, and quickly locate test images on various types of outdoor and
indoor ground surfaces.
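The capture-and-index idea can be sketched with a toy descriptor database: keypoint descriptors extracted offline are coarsely quantized into buckets, and a query patch localizes by letting each of its keypoints vote for the ground locations of its matching bucket. All names (TextureMap, quantize, the bucket step) are illustrative, not the paper's actual system, which uses real image keypoints and a compact index rather than a Python dict.

```python
from collections import Counter, defaultdict


def quantize(desc, step=0.25):
    """Coarse-quantize a descriptor vector so near-identical ground
    features hash to the same bucket (a stand-in for the compact
    descriptor index built offline)."""
    return tuple(round(v / step) for v in desc)


class TextureMap:
    """Toy offline database mapping descriptor buckets to ground locations."""

    def __init__(self):
        self.index = defaultdict(list)

    def add(self, desc, location):
        """Index one keypoint descriptor captured at a known location."""
        self.index[quantize(desc)].append(location)

    def localize(self, query_descs):
        """Each query keypoint votes for the locations of its matches;
        the location with the most votes wins (None if nothing matches)."""
        votes = Counter()
        for desc in query_descs:
            for loc in self.index.get(quantize(desc), []):
                votes[loc] += 1
        return votes.most_common(1)[0][0] if votes else None
```

In the real pipeline the voting would be followed by geometric verification of the matched keypoints, which is what yields millimeter-level accuracy.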
Global Localization in Unstructured Environments using Semantic Object Maps Built from Various Viewpoints
We present a novel framework for global localization and guided
relocalization of a vehicle in an unstructured environment. Compared to
existing methods, our pipeline does not rely on cues from urban fixtures (e.g.,
lane markings, buildings), nor does it make assumptions that require the
vehicle to be navigating on a road network. Instead, we achieve localization in
both urban and non-urban environments by robustly associating and registering
the vehicle's local semantic object map with a compact semantic reference map,
potentially built from other viewpoints, time periods, and/or modalities.
Robustness to noise, outliers, and missing objects is achieved through our
graph-based data association algorithm. Further, the guided relocalization
capability of our pipeline mitigates drift inherent in odometry-based
localization after the initial global localization. We evaluate our pipeline on
two publicly-available, real-world datasets to demonstrate its effectiveness at
global localization in both non-urban and urban environments. The Katwijk Beach
Planetary Rover dataset is used to show our pipeline's ability to perform
accurate global localization in unstructured environments. Demonstrations on
the KITTI dataset achieve an average pose error of 3.8m across all 35
localization events on Sequence 00 when localizing in a reference map created
from aerial images. Compared to existing works, our pipeline is more general
because it can perform global localization in unstructured environments using
maps built from different viewpoints.
Comment: 8 pages, 6 figures, presented at IROS 202
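Once objects in the local map have been associated with objects in the reference map, the remaining step is rigid registration of the two sets of object centroids. A minimal stdlib-only sketch of that step in 2D, using the closed-form rotation estimate from centered correspondences (this illustrates only the registration stage, not the paper's graph-based association algorithm; the function name and planar assumption are ours):

```python
import math


def register_2d(local_pts, ref_pts):
    """Closed-form 2D rigid registration (rotation theta, translation t)
    aligning already-associated local object centroids to reference ones.

    local_pts, ref_pts: equal-length lists of (x, y) pairs, matched by index.
    Returns (theta, (tx, ty)) such that R(theta) * local + t ~= ref.
    """
    n = len(local_pts)
    # Centroids of both point sets.
    lcx = sum(x for x, _ in local_pts) / n
    lcy = sum(y for _, y in local_pts) / n
    rcx = sum(x for x, _ in ref_pts) / n
    rcy = sum(y for _, y in ref_pts) / n
    # Accumulate dot and cross products of centered correspondences;
    # their ratio gives the optimal rotation angle.
    s_cos = s_sin = 0.0
    for (lx, ly), (rx, ry) in zip(local_pts, ref_pts):
        ax, ay = lx - lcx, ly - lcy
        bx, by = rx - rcx, ry - rcy
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    # Translation maps the rotated local centroid onto the reference centroid.
    c, s = math.cos(theta), math.sin(theta)
    tx = rcx - (c * lcx - s * lcy)
    ty = rcy - (s * lcx + c * lcy)
    return theta, (tx, ty)
```

Robustness to outliers and missing objects, as the abstract notes, comes from the data-association stage that precedes this least-squares alignment.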
RANSAC for Robotic Applications: A Survey
Random Sample Consensus, most commonly abbreviated as RANSAC, is a robust method for estimating the parameters of a model from data contaminated by a sizable percentage of outliers. In its simplest form, the process starts by sampling the minimum amount of data needed to perform an estimation, followed by an evaluation of the estimate's adequacy, and repeats these steps until some stopping criterion is met. Multiple variants have been proposed that modify this workflow, typically tweaking one or several of these steps to improve computing time or the quality of the parameter estimates. RANSAC is widely applied in the field of robotics, for example, for finding geometric shapes (planes, cylinders, spheres, etc.) in point clouds or for estimating the best transformation between different camera views. In this paper, we present a review of the current state of the art of RANSAC family methods with a special interest in applications in robotics.
This work has been partially funded by the Basque Government, Spain, under Research Teams Grant number IT1427-22 and under ELKARTEK LANVERSO Grant number KK-2022/00065; the Spanish Ministry of Science (MCIU), the State Research Agency (AEI), the European Regional Development Fund (FEDER), under Grant number PID2021-122402OB-C21 (MCIU/AEI/FEDER, UE); and the Spanish Ministry of Science, Innovation and Universities, under Grant FPU18/04737.
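The sample-evaluate-repeat loop described above can be sketched for the simplest case, fitting a 2D line y = a*x + b: the minimal sample is two points, adequacy is measured by counting inliers within a residual tolerance, and iteration stops after a fixed budget or once enough inliers are found. Parameter names and defaults here are illustrative.

```python
import random


def ransac_line(points, iters=200, inlier_tol=0.1, min_inliers=8, seed=0):
    """Fit y = a*x + b to `points` by RANSAC.

    Each iteration draws the minimal sample (2 points), fits the line
    they define, and scores it by its inlier count; the model with the
    most inliers wins. Returns ((a, b), inliers) or (None, []) if no
    non-degenerate sample was found.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate minimal sample: vertical line
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
        if len(best_inliers) >= min_inliers:
            break  # simple stopping criterion: enough support found
    return best_model, best_inliers
```

The variants surveyed in the paper mostly swap out one of these ingredients: smarter sampling, cheaper or probabilistic model evaluation, or adaptive stopping criteria.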