Do we need scan-matching in radar odometry?
There is currently a surge in the development of "4D" Doppler-capable radar
and lidar range sensors, which produce 3D point clouds in which every point
also carries the radial velocity relative to the sensor. 4D radars in
particular are interesting for object perception and navigation in
low-visibility conditions (dust, smoke) where lidars and cameras typically
fail. With the advent of high-resolution Doppler-capable radars comes the
possibility of estimating odometry from single point clouds, foregoing the need
for scan registration which is error-prone in feature-sparse field
environments. We compare several odometry estimation methods, from direct
integration of Doppler/IMU data and Kalman filter sensor fusion to 3D
scan-to-scan and scan-to-map registration, on three datasets with data from two
recent 4D radars and two IMUs. Surprisingly, our results show that odometry
from Doppler and IMU data alone gives similar or better results than 3D point
cloud registration. In our experiments, the average position error can be as
low as 0.3% over 1.8 km and 4.5 km trajectories. This allows accurate
estimation of 6DOF ego-motion over long distances, even in feature-sparse
mine environments.
These results are useful not least for applications of navigation with
resource-constrained robot platforms in feature-sparse and low-visibility
conditions such as mining, construction, and search & rescue operations.
Comment: Preprint. Submitted to ICRA 2024. 7 pages, 11 figures
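The single-scan Doppler odometry idea above can be illustrated with a small
least-squares sketch: assuming a static scene, each point's measured radial
velocity is the projection of the (negated) sensor velocity onto that point's
viewing direction, so one scan already constrains the full 3D ego-velocity.
This is a minimal illustration under that static-scene assumption, not the
paper's implementation; the function name is mine.

```python
import numpy as np

def ego_velocity_from_doppler(points, radial_velocities):
    """Estimate the sensor's 3D velocity from one Doppler point cloud.

    For a static scene, each point's radial velocity satisfies
    v_r = -(p / |p|) . v_sensor, so stacking the unit viewing
    directions gives an overdetermined linear system for v_sensor.
    """
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
    v, *_ = np.linalg.lstsq(dirs, -radial_velocities, rcond=None)
    return v
```

With tens of thousands of points per scan, the least-squares fit is heavily
overdetermined, which is what makes single-scan ego-velocity robust even when
geometric features are sparse.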
Radar-Only Off-Road Local Navigation
Off-road robotics have traditionally utilized lidar for local navigation due
to its accuracy and high resolution. However, the limitations of lidar, such as
reduced performance in harsh environmental conditions and limited range, have
prompted the exploration of alternative sensing technologies. This paper
investigates the potential of radar for off-road local navigation, as it offers
the advantages of a longer range and the ability to penetrate dust and light
vegetation. We adapt existing lidar-based methods for radar and evaluate the
performance in comparison to lidar under various off-road conditions. We show
that radar can provide a significant range advantage over lidar while
maintaining accuracy for both ground plane estimation and obstacle detection.
Finally, we demonstrate successful autonomous navigation at a speed of 2.5
m/s over a path length of 350 m, using only radar for ground plane estimation
and obstacle detection.
Comment: 7 pages, 17 figures, ITSC 202
A New Wave in Robotics: Survey on Recent mmWave Radar Applications in Robotics
We survey the current state of millimeter-wave (mmWave) radar applications in
robotics with a focus on unique capabilities, and discuss future opportunities
based on the state of the art. Frequency Modulated Continuous Wave (FMCW)
mmWave radars operating in the 76--81 GHz range are an appealing alternative
to lidars, cameras, and other sensors operating in the near-visual spectrum.
Radar has become more widely available in new packaging classes that are more
convenient for robotics, and its longer wavelength can penetrate visual
clutter such as fog, dust, and smoke. We begin by covering radar principles as
they relate to robotics. We then review the relevant new research across a
broad spectrum of robotics applications beginning with motion estimation,
localization, and mapping. We then cover object detection and classification,
and close with an analysis of current datasets and calibration techniques
that provide entry points into radar research.
Comment: 19 Pages, 11 Figures, 2 Tables, TRO Submission pending
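One of the FMCW radar principles such a survey covers is range recovery: the
received chirp is mixed with the transmitted one, and the resulting beat
frequency is proportional to range, R = c * f_b * T / (2 * B). The numbers in
the usage check below are illustrative, chosen to resemble a 77 GHz
automotive-class radar.

```python
def fmcw_range(beat_freq_hz, chirp_bandwidth_hz, chirp_duration_s, c=3e8):
    """Target range from an FMCW radar's beat frequency.

    A linear chirp of bandwidth B over duration T delayed by the
    round-trip time 2R/c produces a beat frequency f_b = 2RB/(cT),
    which inverts to R = c * f_b * T / (2 * B).
    """
    return c * beat_freq_hz * chirp_duration_s / (2 * chirp_bandwidth_hz)
```

For example, with a 4 GHz chirp over 40 microseconds, a 1 MHz beat frequency
corresponds to a target 1.5 m away.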
Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems
Search and Rescue (SAR) missions in harsh and unstructured Sub-Terranean
(Sub-T) environments in the presence of aerosol particles have recently become
a major focus in the field of robotics. Aerosol particles such as smoke and
dust directly affect the performance of any mobile robotic platform, which
relies on its onboard perception systems for autonomous navigation and
localization in Global Navigation Satellite System (GNSS)-denied
environments. Although obstacle avoidance and object detection algorithms are
robust to noise to some degree, their performance directly relies on the
quality of the data captured by onboard sensors such as Light Detection And
Ranging (LiDAR) and cameras. Thus, this paper proposes a novel, modular,
sensor-agnostic filtration pipeline, based on intensity and spatial
information such as local point density, for removing detected smoke
particles from the point cloud (PCL) before it is used for collision
detection. Furthermore, the efficacy of the proposed framework in the
presence of smoke is investigated during multiple frontier exploration
missions, and experimental results are presented to facilitate comparison
with other methodologies and to quantify their computational impact. This
provides valuable insight to the research community for better use of
filtration schemes based on the available computational resources, while
considering the safe autonomous navigation of mobile robots.
Comment: Accepted in the 49th Annual Conference of the IEEE Industrial
Electronics Society [IECON2023]
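A filter combining return intensity with local point density, as the abstract
describes, can be sketched as follows. This is a minimal illustration of the
idea, not the paper's pipeline; the thresholds, function name, and brute-force
neighbour search are mine.

```python
import numpy as np

def filter_smoke(points, intensities, i_min=0.1, radius=0.5, k_min=5):
    """Drop likely aerosol returns from a point cloud.

    A point is kept only if its return intensity is at least i_min AND
    it has at least k_min neighbours within `radius` -- smoke returns
    tend to be both weak and spatially sparse, while solid surfaces are
    bright and dense. Neighbour search is brute-force O(n^2) for
    clarity; a KD-tree would be used in practice.
    """
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    neighbour_counts = (d2 < radius ** 2).sum(axis=1) - 1  # exclude self
    keep = (intensities >= i_min) & (neighbour_counts >= k_min)
    return points[keep], keep
```

Running the filtered cloud, rather than the raw one, through collision
detection is what prevents smoke from registering as phantom obstacles.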
Lidar-level localization with radar? The CFEAR approach to accurate, fast and robust large-scale radar odometry in diverse environments
This paper presents an accurate, highly efficient, and learning-free method
for large-scale odometry estimation using spinning radar, empirically found to
generalize well across very diverse environments -- outdoors, from urban to
woodland, and indoors in warehouses and mines -- without changing parameters.
Our method integrates motion compensation within a sweep with one-to-many scan
registration that minimizes distances between nearby oriented surface points
and mitigates outliers with a robust loss function. Extending our previous
approach CFEAR, we present an in-depth investigation on a wider range of data
sets, quantifying the importance of filtering, resolution, registration cost
and loss functions, keyframe history, and motion compensation. We present a new
solving strategy and configuration that overcomes previous issues with sparsity
and bias, and improves on our previous state of the art by 38%, thus,
surprisingly,
outperforming radar SLAM and approaching lidar SLAM. The most accurate
configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the
fastest achieves 1.79% error at 160 Hz.
Comment: Accepted for publication in Transactions on Robotics. Edited
2022-11-07: Updated affiliation and citation
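The "robust loss function" mentioned above is the standard way to keep a few
bad correspondences from dominating a registration cost. Below is a generic
sketch of a point-to-plane residual with a Huber loss; it illustrates the
principle, not CFEAR's exact formulation, and the delta value is illustrative.

```python
import numpy as np

def point_to_plane_residuals(src, dst, normals):
    """Signed distance from each source point to the plane through its
    matched destination point with the given unit normal."""
    return np.einsum('ij,ij->i', src - dst, normals)

def huber(r, delta=0.5):
    """Huber robust loss: quadratic for |r| <= delta, linear beyond.

    The linear tail bounds the influence of outlier residuals, so a few
    wrong correspondences cannot dominate the registration cost."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))
```

Minimizing the summed Huber-weighted point-to-plane residuals over a rigid
transform is the core of many one-to-many scan registration schemes.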
Where Am I? SLAM for Mobile Machines on a Smart Working Site
The current optimization approaches for construction machinery are mainly based on internal sensors. However, the choice of a reasonable strategy is determined not only by intrinsic signals but also, very strongly, by environmental information, especially the terrain. Because the construction site changes dynamically and a high-definition map is consequently absent, Simultaneous Localization and Mapping (SLAM) that offers terrain information for construction machines remains challenging. Current SLAM technologies proposed for mobile machines depend strongly on costly or computationally expensive sensors, such as RTK GPS and cameras, so commercial use is rare. In this study, we propose an affordable SLAM method that creates a multi-layer grid map of the construction site so that the machine has environmental information and can be optimized accordingly. Concretely, after the machine passes over a grid cell, we obtain the local information and record it. Combined with positioning technology, we then create a map of the places of interest on the construction site. Based on results gathered in Gazebo, we show that a suitable layout is the combination of one IMU and two differential GPS antennas with an unscented Kalman filter, which keeps the average distance error below 2 m and the mapping error below 1.3% in a harsh environment. As an outlook, our SLAM technology provides the cornerstone for many efficiency-improvement approaches.
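The record-as-you-pass idea in this abstract can be sketched as a sparse
multi-layer grid map keyed by world position. This is a minimal illustration
of the data structure only; the class name, layer names, and resolution are
my assumptions, not the paper's.

```python
import numpy as np

class MultiLayerGridMap:
    """Sparse multi-layer grid map for a work site.

    Each cell the machine has passed over stores one value per layer
    (e.g. elevation); unvisited cells simply do not exist, which keeps
    the map memory proportional to the traversed area.
    """

    def __init__(self, resolution=1.0, layers=("elevation", "traversability")):
        self.res = resolution
        self.layers = layers
        self.cells = {}  # (ix, iy) -> {layer: value}

    def _index(self, x, y):
        """World coordinates -> integer cell index."""
        return (int(np.floor(x / self.res)), int(np.floor(y / self.res)))

    def record(self, x, y, **values):
        """Record local measurements for the cell containing (x, y)."""
        cell = self.cells.setdefault(self._index(x, y), {})
        cell.update({k: v for k, v in values.items() if k in self.layers})

    def query(self, x, y, layer):
        """Return the stored value for a layer, or None if unvisited."""
        return self.cells.get(self._index(x, y), {}).get(layer)
```

Feeding the UKF-fused pose estimate into `record` as the machine drives is
what turns local sensor readings into the site map described above.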
A Comprehensive Review on Autonomous Navigation
The field of autonomous mobile robots has undergone dramatic advancements
over the past decades. Despite achieving important milestones, several
challenges are yet to be addressed. Aggregating the achievements of the
robotics community in survey papers is vital to keep track of the current
state of the art and the challenges that must be tackled in the future. This
paper provides a comprehensive review of autonomous mobile robots, covering
topics such as sensor types, mobile robot platforms, simulation tools, path
planning and following, sensor fusion methods, obstacle avoidance, and SLAM.
The motivation for presenting a survey paper is twofold. First, the
autonomous navigation field evolves fast, so writing survey papers regularly
is crucial to keep the research community well aware of the field's current
status. Second, deep learning methods have revolutionized many fields,
including autonomous navigation. Therefore, it is necessary to give
appropriate treatment to the role of deep learning in autonomous navigation,
which this paper also covers. Future work and research gaps are discussed as
well.
System Development of an Unmanned Ground Vehicle and Implementation of an Autonomous Navigation Module in a Mine Environment
There are numerous benefits to the insights gained from the exploration and exploitation of underground mines. There are also great risks and challenges involved, such as accidents that have claimed many lives. To avoid these accidents, inspections of large mines are carried out by miners, which is not always economically feasible and puts the inspectors' safety at risk. Despite progress in the development of robotic systems and of autonomous navigation, localization, and mapping algorithms, these environments remain particularly demanding for such systems. The successful implementation of an autonomous unmanned system will allow mine workers to determine the structural integrity of the roof and pillars through the generation of high-fidelity 3D maps. These maps will allow miners to respond rapidly to any increasing hazards with proactive measures, such as sending workers to build or rebuild support structures to prevent accidents. The objective of this research is the development, implementation, and testing of a robust unmanned ground vehicle (UGV) that can operate in mine environments for extended periods of time. To achieve this, a custom skid-steer four-wheeled UGV is designed to operate in these challenging underground mine environments. To navigate these environments autonomously, the UGV employs a Light Detection and Ranging (LiDAR) sensor and a tactical-grade inertial measurement unit (IMU) for localization and mapping through a tightly coupled LiDAR-inertial odometry via smoothing and mapping framework (LIO-SAM). The autonomous navigation module was implemented based on fast likelihood-based collision avoidance, with an extension for human-guided navigation, and a terrain traversability analysis framework. To operate successfully and generate high-fidelity 3D maps, the system was rigorously tested in different environments and terrains to verify its robustness.
To assess its capabilities, several localization, mapping, and autonomous navigation missions were carried out in a coal mine environment. These tests enabled the system to be verified and tuned so that it could successfully navigate autonomously and generate high-fidelity maps.