A Survey on Global LiDAR Localization
Knowledge of its own pose is key to every mobile robot application. Thus
pose estimation is part of the core functionalities of mobile robots. In the
last two decades, LiDAR scanners have become a standard sensor for robot
localization and mapping. This article surveys recent progress and advances in
LiDAR-based global localization. We start with the problem formulation and
explore the application scope. We then present the methodology review covering
various global localization topics, such as maps, descriptor extraction, and
consistency checks. The contents are organized under three themes. The first is
the combination of global place retrieval and local pose estimation. The
second is upgrading single-shot measurements to sequential ones for
sequential global localization. The third is extending single-robot
global localization to cross-robot localization on multi-robot systems. We end
this survey with a discussion of open challenges and promising directions in
global LiDAR localization.
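The first theme pairs a coarse, map-wide retrieval step with a fine registration step. As an illustration only (the survey covers many descriptor and registration choices), the sketch below retrieves the closest map node by nearest-neighbour search over per-scan global descriptors, then refines the pose with a basic point-to-point ICP; the descriptor itself is left abstract and every parameter is an assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def retrieve_place(query_desc, map_descs):
    """Global place retrieval: nearest neighbour in descriptor space."""
    tree = cKDTree(map_descs)
    dist, idx = tree.query(query_desc)
    return idx, dist

def refine_pose_icp(src, dst, iters=30):
    """Local pose refinement: point-to-point ICP with an SVD (Kabsch)
    alignment step; src and dst are (N,3)/(M,3) point clouds."""
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        cur = src @ R.T + t               # apply current estimate
        _, nn = tree.query(cur)           # nearest-neighbour association
        p, q = cur, dst[nn]
        mp, mq = p.mean(0), q.mean(0)
        H = (p - mp).T @ (q - mq)         # cross-covariance of centred sets
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ S @ U.T               # incremental rotation, no reflection
        dt = mq - dR @ mp                 # incremental translation
        R, t = dR @ R, dR @ t + dt        # compose with running estimate
    return R, t
```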
Correspondenceless scan-to-map-scan matching of homoriented 2D scans for mobile robot localisation
The objective of this study is to improve the location estimate of a mobile
robot capable of motion on a plane and mounted with a conventional 2D LIDAR
sensor, given an initial guess for its location on a 2D map of its
surroundings. Documented herein is the theoretical reasoning behind solving a
matching problem between two homoriented 2D scans, one derived from the robot's
physical sensor and one derived by simulating its operation within the map, in
a manner that does not require establishing correspondences between their
constituent rays. Two results are proved and subsequently shown through
experiments. The first is that the true position of the sensor can be recovered
with arbitrary precision when the physical sensor reports faultless
measurements and there is no discrepancy between the environment the robot
operates in and the robot's perception of it. The second is that when
either is affected by disturbance, the location estimate is bounded within a
neighbourhood of the true location whose radius is proportional to the
magnitude of the disturbance.
Comment: 19 pages, 19 figures
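The core scan-to-map-scan idea can be made concrete with a toy sketch: because the two scans are homoriented, ray i of the physical scan is compared directly with ray i of a virtual scan simulated from the map, with no point-pair correspondences ever established. The ray caster and the crude hill-climbing search below are illustrative assumptions, not the paper's closed-form method:

```python
import numpy as np

def simulate_scan(pose_xy, map_points, n_rays=360, max_range=10.0):
    """Hypothetical map-scan simulator: bins map points (a stand-in for a
    real occupancy-grid ray caster) by bearing and keeps the nearest
    return per ray, yielding a virtual scan at pose_xy."""
    ranges = np.full(n_rays, max_range)
    d = map_points - pose_xy
    r = np.linalg.norm(d, axis=1)
    bearing = np.arctan2(d[:, 1], d[:, 0])                 # in [-pi, pi]
    rays = ((bearing + np.pi) / (2 * np.pi) * n_rays).astype(int) % n_rays
    for k, ri in zip(rays, r):
        ranges[k] = min(ranges[k], ri)
    return ranges

def match_scan_to_map(real_scan, guess_xy, map_points, iters=50, step=0.05):
    """Correspondence-free position refinement: minimize the ray-by-ray
    squared range residual between the physical and virtual scans."""
    est = np.asarray(guess_xy, dtype=float)
    cost = lambda p: np.sum((real_scan - simulate_scan(p, map_points)) ** 2)
    for _ in range(iters):
        moves = [est + step * np.array(m)
                 for m in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))]
        est = min(moves, key=cost)        # greedy descent over 2D position
    return est
```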
Advances in Sonar Technology
The demand to explore the largest and also one of the richest parts of our planet, the advances in signal processing promoted by an exponential growth in computation power, and a thorough study of sound propagation in the underwater realm have led to remarkable advances in sonar technology in recent years. The work at hand is a sum of the knowledge of several authors who contributed to various aspects of sonar technology. This book intends to give a broad overview of the advances in sonar technology in recent years that resulted from the research effort of the authors in both sonar systems and their applications. It is intended for scientists and engineers from a variety of backgrounds, and even those who have never had contact with sonar technology before will find an easy introduction to the topics and principles exposed here.
Radar-only ego-motion estimation in difficult settings via graph matching
Radar detects stable, long-range objects under variable weather and lighting
conditions, making it a reliable and versatile sensor well suited for
ego-motion estimation. In this work, we propose a radar-only odometry pipeline
that is highly robust to radar artifacts (e.g., speckle noise and false
positives) and requires only one input parameter. We demonstrate its ability to
adapt across diverse settings, from urban UK to off-road Iceland, achieving a
scan matching accuracy of approximately 5.20 cm and 0.0929 deg when using GPS
as ground truth (compared to visual odometry's 5.77 cm and 0.1032 deg). We
present algorithms for keypoint extraction and data association, framing the
latter as a graph matching optimization problem, and provide an in-depth system
analysis.
Comment: 6 content pages, 1 page of references, 5 figures, 4 tables, 2019 IEEE International Conference on Robotics and Automation (ICRA)
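The abstract frames data association as a graph matching optimization without spelling out the formulation. One common instantiation is spectral matching over a pairwise geometric-consistency graph (rigid motion preserves inter-point distances); the sketch below shows that generic technique, not necessarily the paper's exact method, and sigma is an invented tuning parameter:

```python
import numpy as np

def spectral_association(pts_a, pts_b, pairs, sigma=0.5):
    """Rank candidate matches between radar keypoint sets pts_a (N,2) and
    pts_b (M,2) by pairwise distance consistency, using the principal
    eigenvector of the consistency matrix; pairs is a list of (i, j)
    candidate correspondences."""
    n = len(pairs)
    M = np.zeros((n, n))
    for u, (i, j) in enumerate(pairs):
        for v, (k, l) in enumerate(pairs):
            if u == v:
                continue
            da = np.linalg.norm(pts_a[i] - pts_a[k])   # distance in scan A
            db = np.linalg.norm(pts_b[j] - pts_b[l])   # distance in scan B
            M[u, v] = np.exp(-(da - db) ** 2 / (2 * sigma ** 2))
    _, vecs = np.linalg.eigh(M)
    score = np.abs(vecs[:, -1])           # principal eigenvector as match score
    # greedy one-to-one selection in score order
    used_a, used_b, kept = set(), set(), []
    for u in np.argsort(-score):
        i, j = pairs[u]
        if i not in used_a and j not in used_b and score[u] > 0:
            kept.append((i, j)); used_a.add(i); used_b.add(j)
    return kept
```

Pairwise-consistency formulations like this are attractive for radar precisely because speckle noise and false positives produce matches that are individually plausible but mutually inconsistent.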
Milli-RIO: Ego-Motion Estimation with Low-Cost Millimetre-Wave Radar
Robust indoor ego-motion estimation has attracted significant interest in
recent decades due to the fast-growing demand for location-based services in
indoor environments. Among various solutions, frequency-modulated
continuous-wave (FMCW) radar sensors in millimeter-wave (MMWave) spectrum are
gaining more prominence due to their intrinsic advantages such as penetration
capability and high accuracy. Single-chip low-cost MMWave radar as an emerging
technology provides an alternative and complementary solution for robust
ego-motion estimation, making it feasible in resource-constrained platforms
thanks to low-power consumption and easy system integration. In this paper, we
introduce Milli-RIO, an MMWave radar-based solution making use of a single-chip
low-cost radar and inertial measurement unit sensor to estimate
six-degrees-of-freedom ego-motion of a moving radar. Detailed quantitative and
qualitative evaluations prove that the proposed method achieves precision on
the order of a few centimeters for indoor localization tasks.
Comment: Submitted to IEEE Sensors, 9 pages
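The abstract does not describe the fusion filter itself. As a heavily simplified, hypothetical sketch of radar-inertial fusion, a linear Kalman filter could take IMU acceleration as the control input and a radar-derived velocity as the measurement; a real 6-DoF estimator would also track orientation (e.g. with an error-state EKF or UKF), and the noise parameters below are invented:

```python
import numpy as np

def kf_step(x, P, acc, z_vel, dt, q=0.01, r=0.05):
    """One predict/update cycle fusing IMU acceleration with a radar
    velocity measurement. State x = [px, py, pz, vx, vy, vz]."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                    # p += v * dt
    B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])
    x = F @ x + B @ acc                           # IMU-driven prediction
    P = F @ P @ F.T + q * np.eye(6)
    H = np.hstack([np.zeros((3, 3)), np.eye(3)])  # radar observes velocity
    S = H @ P @ H.T + r * np.eye(3)
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z_vel - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P
```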
Long-term experiments with an adaptive spherical view representation for navigation in changing environments
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrates the persistence performance of the proposed system in real changing environments, including an analysis of its long-term stability.
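As a toy illustration of a multi-store update (not the paper's exact mechanism), reference views might enter a short-term store, be promoted to a long-term store after repeated confirmation, and be forgotten once stale; the thresholds and the per-view bookkeeping below are invented:

```python
class ViewMemory:
    """Toy multi-store update for a map node's reference views."""

    def __init__(self, promote_after=3, forget_after=50):
        self.stm = {}   # view_id -> rehearsal count (short-term memory)
        self.ltm = {}   # view_id -> staleness count (long-term memory)
        self.promote_after, self.forget_after = promote_after, forget_after

    def observe(self, view_id):
        if view_id in self.ltm:
            self.ltm[view_id] = 0             # re-confirmed: reset staleness
        else:
            self.stm[view_id] = self.stm.get(view_id, 0) + 1
            if self.stm[view_id] >= self.promote_after:
                self.ltm[view_id] = 0         # rehearsal -> long-term store
                del self.stm[view_id]

    def tick(self):
        """Age long-term views each localization cycle; drop stale ones
        (the environment has presumably changed)."""
        for v in list(self.ltm):
            self.ltm[v] += 1
            if self.ltm[v] > self.forget_after:
                del self.ltm[v]
```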
LiLO: Lightweight and low-bias LiDAR Odometry method based on spherical range image filtering
In unstructured outdoor environments, robots require accurate and efficient
odometry with low computation time. Existing low-bias LiDAR odometry methods
are often computationally expensive. To address this problem,
we present a lightweight LiDAR odometry method that converts unorganized point
cloud data into a spherical range image (SRI) and filters out surface, edge,
and ground features in the image plane. This substantially reduces computation
time and the required features for odometry estimation in LOAM-based
algorithms. Our odometry estimation method does not rely on global maps or loop
closure algorithms, which further reduces computational costs. Experimental
results show a translation error of 0.86% and a rotation error of 0.0036°/m
on the KITTI dataset, with an average runtime of 78 ms. In addition, we tested
the method with our own data, obtaining an average closed-loop error of 0.8 m
and a runtime of 27 ms over eight loops covering 3.5 km.
Comment: This paper is under review at the journal "Autonomous Robots" (Springer)
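To make the SRI conversion concrete, here is a minimal sketch of projecting an unorganized cloud into a spherical range image; the 64x1024 resolution and the vertical field of view are illustrative assumptions typical of a 64-beam scanner, not values from the paper:

```python
import numpy as np

def to_spherical_range_image(points, h=64, w=1024,
                             fov_up=np.deg2rad(15.0),
                             fov_down=np.deg2rad(-25.0)):
    """Project an unorganized point cloud (N,3) into an (h,w) range image:
    rows bin elevation, columns bin azimuth, pixels hold range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                         # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-9))     # elevation
    u = ((np.pi - yaw) / (2 * np.pi) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int)
    v = np.clip(v, 0, h - 1)
    sri = np.full((h, w), np.inf)
    np.minimum.at(sri, (v, u), r)                  # nearest return per pixel
    return sri
```

Once the cloud is organized this way, surface, edge, and ground features can be extracted with cheap image-plane operators instead of neighbourhood searches in 3D, which is where the runtime savings come from.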
Indoor simultaneous localization and mapping based on fringe projection profilometry
Simultaneous Localization and Mapping (SLAM) plays an important role in
outdoor and indoor applications ranging from autonomous driving to indoor
robotics. Outdoor SLAM has been widely used with the assistance of LiDAR or
GPS. For indoor applications, LiDAR does not satisfy the accuracy
requirements and GPS signals are lost. An accurate and efficient scene
sensing technique is required for indoor SLAM. As the most promising 3D sensing
technique, the opportunities for indoor SLAM with fringe projection
profilometry (FPP) systems are obvious, but methods to date have not fully
leveraged the accuracy and speed of sensing that such systems offer. In this
paper, we propose a novel indoor SLAM method based on the coordinate
transformation relationships of FPP, where 2D-to-3D descriptor-assisted
matching is used for mapping and localization. The correspondences generated by matching
descriptors are used for fast and accurate mapping, and the transform
estimation between the 2D and 3D descriptors is used to localize the sensor.
The experimental results demonstrate that the proposed indoor SLAM method can
achieve localization and mapping accuracy of around one millimeter.
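In generic form, estimating the sensor pose from matched 2D image descriptors and 3D map points is a perspective-n-point problem; a minimal sketch using OpenCV follows, with the FPP-specific coordinate transformation left out and the intrinsics matrix K assumed given:

```python
import numpy as np
import cv2

def localize_from_matches(pts_3d, pts_2d, K):
    """Estimate sensor pose from 2D-3D correspondences via RANSAC PnP.
    pts_3d: (N,3) map points; pts_2d: (N,2) image points; K: 3x3 intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d.astype(np.float32), pts_2d.astype(np.float32),
        K.astype(np.float32), distCoeffs=None)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)            # rotation vector -> matrix
    # solvePnP returns the world->camera transform; invert it to get
    # the sensor pose in the map frame
    return R.T, (-R.T @ tvec).ravel()
```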
Visual Perception For Robotic Spatial Understanding
Humans understand the world through vision without much effort. We perceive the structure, objects, and people in the environment and pay little direct attention to most of it, until it becomes useful. Intelligent systems, especially mobile robots, have no such biologically engineered vision mechanism to take for granted. In contrast, we must devise algorithmic methods of taking raw sensor data and converting it to something useful very quickly. Vision is such a necessary part of building a robot or any intelligent system that is meant to interact with the world that it is somewhat surprising we don't have off-the-shelf libraries for this capability.
Why is this? The simple answer is that the problem is extremely difficult. There has been progress, but the current state of the art is impressive and depressing at the same time. We now have neural networks that can recognize many objects in 2D images, in some cases performing better than a human. Some algorithms can also provide bounding boxes or pixel-level masks to localize the object. We have visual odometry and mapping algorithms that can build reasonably detailed maps over long distances with the right hardware and conditions. On the other hand, we have robots with many sensors and no efficient way to compute their relative extrinsic poses for integrating the data in a single frame. The same networks that produce good object segmentations and labels in a controlled benchmark still miss obvious objects in the real world and have no mechanism for learning on the fly while the robot is exploring. Finally, while we can detect pose for very specific objects, we don't yet have a mechanism that detects pose that generalizes well over categories or that can describe new objects efficiently.
We contribute algorithms in four of the areas mentioned above. First, we describe a practical and effective system for calibrating many sensors on a robot with up to 3 different modalities. Second, we present our approach to visual odometry and mapping that exploits the unique capabilities of RGB-D sensors to efficiently build detailed representations of an environment. Third, we describe a 3-D over-segmentation technique that utilizes the models and ego-motion output of the previous step to generate temporally consistent segmentations under camera motion. Finally, we develop a synthesized dataset of chair objects with part labels and investigate the influence of parts on RGB-D based object pose recognition using a novel network architecture we call PartNet.