Learning-based Localizability Estimation for Robust LiDAR Localization
LiDAR-based localization and mapping is one of the core components in many
modern robotic systems due to the direct integration of range and geometry,
allowing for precise motion estimation and generation of high quality maps in
real-time. Yet, when a scene provides insufficient environmental constraints, this
dependence on geometry can lead to localization failure, as in self-symmetric
surroundings such as tunnels. This work addresses precisely this issue by
proposing a neural network-based estimation
approach for detecting (non-)localizability during robot operation. Special
attention is given to the localizability of scan-to-scan registration, as it is
a crucial component in many LiDAR odometry estimation pipelines. In contrast to
previous, mostly traditional detection approaches, the proposed method enables
early detection of failure by estimating the localizability on raw sensor
measurements without evaluating the underlying registration optimization.
Moreover, previous approaches remain limited in their ability to generalize
across environments and sensor types, as heuristic-tuning of degeneracy
detection thresholds is required. The proposed approach avoids this problem by
learning from a collection of different environments, allowing the network to
function over various scenarios. Furthermore, the network is trained
exclusively on simulated data, avoiding arduous data collection in challenging
and degenerate, often hard-to-access, environments. The presented method is
tested during field experiments conducted across challenging environments and
on two different sensor types without any modifications. The observed detection
performance is on par with state-of-the-art methods that require
environment-specific threshold tuning. (Comment: 8 pages, 7 figures, 4 tables)
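The core idea of predicting localizability from raw sensor measurements, rather than from the registration optimizer, can be sketched with a toy model. The feature choice (a histogram of surface-normal azimuths) and the tiny randomly initialized MLP below are illustrative assumptions, not the paper's actual architecture or training setup:

```python
import numpy as np

def normal_direction_features(normals, bins=8):
    """Histogram of scan surface-normal azimuths: a crude summary of how
    well the scene constrains motion in each horizontal direction."""
    az = np.arctan2(normals[:, 1], normals[:, 0])  # azimuth in [-pi, pi]
    hist, _ = np.histogram(az, bins=bins, range=(-np.pi, np.pi))
    return hist / max(len(normals), 1)

class TinyLocalizabilityMLP:
    """Minimal 2-layer MLP (random weights here; in the paper's setting it
    would be trained on simulated scans labelled (non-)localizable)."""
    def __init__(self, in_dim=8, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.5, (in_dim, hidden))
        self.w2 = rng.normal(0, 0.5, hidden)

    def predict_proba(self, feats):
        h = np.maximum(feats @ self.w1, 0.0)          # ReLU hidden layer
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))   # P(localizable)

# A tunnel-like scan: all normals point sideways (y-axis), so translation
# along the tunnel axis (x) is unconstrained by the geometry.
tunnel_normals = np.tile([[0.0, 1.0, 0.0]], (500, 1))
feats = normal_direction_features(tunnel_normals)
p = TinyLocalizabilityMLP().predict_proba(feats)
print(p)  # untrained network: value is arbitrary, shown for shape only
```

Because the features come straight from the scan, such a predictor can flag degeneracy before any registration is attempted, which is what enables the early failure detection described above.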
X-ICP: Localizability-Aware LiDAR Registration for Robust Localization in Extreme Environments
Modern robotic systems are required to operate in challenging environments,
which demand reliable localization under adverse conditions. LiDAR-based
localization methods, such as the Iterative Closest Point (ICP) algorithm, can
suffer in geometrically uninformative environments that are known to
deteriorate point cloud registration performance and push optimization toward
divergence along weakly constrained directions. To overcome this issue, this
work proposes i) a robust fine-grained localizability detection module, and ii)
a localizability-aware constrained ICP optimization module, which couples with
the localizability detection module in a unified manner. The proposed
localizability detection is achieved by utilizing the correspondences between
the scan and the map to analyze the alignment strength against the principal
directions of the optimization as part of its fine-grained LiDAR localizability
analysis. In the second part, this localizability analysis is then integrated
into the scan-to-map point cloud registration to generate drift-free pose
updates by enforcing controlled updates or leaving the degenerate directions of
the optimization unchanged. The proposed method is thoroughly evaluated and
compared to state-of-the-art methods in simulated and real-world experiments,
demonstrating the performance and reliability improvement in LiDAR-challenging
environments. In all experiments, the proposed framework demonstrates accurate
and generalizable localizability detection and robust pose estimation without
environment-specific parameter tuning. (Comment: 20 pages, 20 figures.
Submitted to IEEE Transactions on Robotics. Supplementary Video:
https://youtu.be/SviLl7q69aA Project Website:
https://sites.google.com/leggedrobotics.com/x-ic)
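The localizability analysis against the principal directions of the optimization can be illustrated for the translational part of point-to-plane ICP, where the Gauss-Newton information matrix is the sum of outer products of correspondence normals. This is a simplified sketch of the general idea, not X-ICP's actual fine-grained module, and the threshold value is an assumption:

```python
import numpy as np

def translation_localizability(normals, strong_thresh=50.0):
    """Eigen-analysis of the translational information matrix A = sum n n^T
    built from correspondence normals: small eigenvalues mark principal
    directions the scene barely constrains (degenerate directions)."""
    A = normals.T @ normals                      # 3x3 information matrix
    eigvals, eigvecs = np.linalg.eigh(A)         # ascending eigenvalues
    constrained = eigvals >= strong_thresh
    return eigvals, eigvecs, constrained

# Tunnel along x: plane normals only in y and z, so x is degenerate.
normals = np.vstack([
    np.tile([0.0, 1.0, 0.0], (300, 1)),
    np.tile([0.0, 0.0, 1.0], (300, 1)),
])
eigvals, eigvecs, constrained = translation_localizability(normals)
print(constrained)   # [False, True, True]: the tunnel axis is unconstrained
```

In the constrained-optimization step described above, the update along directions flagged as degenerate would then be either held at zero or limited to a controlled magnitude, while well-constrained directions are updated normally.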
A LiDAR-Inertial SLAM Tightly-Coupled with Dropout-Tolerant GNSS Fusion for Autonomous Mine Service Vehicles
Multi-modal sensor integration has become a crucial prerequisite for
real-world navigation systems. Recent studies have reported successful
deployment of such systems in many fields. However, it is still challenging for
navigation tasks in mine scenes due to satellite signal dropouts, degraded
perception, and observation degeneracy. To solve this problem, we propose a
LiDAR-inertial odometry method, utilizing both a Kalman filter and
graph optimization. The front-end consists of multiple LiDAR-inertial
odometries running in parallel, in which the laser points, IMU, and wheel odometer
information are tightly fused in an error-state Kalman filter. Instead of the
commonly used feature points, we employ surface elements for registration. The
back-end constructs a pose graph and jointly optimizes the pose estimation
results from inertial, LiDAR odometry, and global navigation satellite system
(GNSS). Since the vehicle operates inside the tunnel for long periods, the
accumulated drift may not be fully corrected by the GNSS measurements. We
therefore leverage a loop-closure-based re-initialization process to achieve
full alignment. In addition, the system robustness is improved through handling
of data loss, stream consistency, and estimation error. The experimental results
show that our system tolerates long-period degeneracy well through the
cooperation of different LiDARs and surfel registration, achieving meter-level
accuracy even during GNSS dropouts lasting tens of minutes.
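The back-end's joint optimization of odometry and GNSS can be sketched in one dimension: odometry contributes relative constraints between consecutive poses, GNSS contributes sparse absolute fixes, and the weighted least-squares solution distributes the accumulated drift. The weights and drift values below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def fuse_odometry_with_gnss(odom_deltas, gnss, odom_info=1.0, gnss_info=100.0):
    """Minimal 1-D pose-graph sketch: odometry gives relative constraints
    x[i+1] - x[i] = delta_i, GNSS gives absolute fixes x[i] = z_i (passed
    as a dict {index: measurement}). Solve the weighted linear LS jointly."""
    n = len(odom_deltas) + 1
    rows, rhs, w = [], [], []
    for i, d in enumerate(odom_deltas):       # relative (odometry) edges
        r = np.zeros(n); r[i], r[i + 1] = -1.0, 1.0
        rows.append(r); rhs.append(d); w.append(odom_info)
    for i, z in gnss.items():                 # absolute (GNSS) anchors
        r = np.zeros(n); r[i] = 1.0
        rows.append(r); rhs.append(z); w.append(gnss_info)
    A = np.array(rows) * np.sqrt(np.array(w))[:, None]
    b = np.array(rhs) * np.sqrt(np.array(w))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Odometry drifts +0.1 m per step; GNSS is available only at both ends,
# mimicking a dropout inside the tunnel.
deltas = [1.1] * 10                 # true step length is 1.0 m
x = fuse_odometry_with_gnss(deltas, gnss={0: 0.0, 10: 10.0})
print(x[-1])   # pulled back near 10.0 despite 1 m of accumulated drift
```

The same structure scales to the full 6-DoF problem: the loop-closure re-initialization mentioned above simply adds further relative constraints that re-anchor the drifted chain.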
INSTRUCTIONS FOR PREPARATION OF CAMERA-READY MANUSCRIPTS FOR BULLETIN OF GRADUATE SCIENCE AND ENGINEERING, ENGINEERING STUDIES
In the field of autonomous mobile robotics, reliable localization performance is essential. However, there are real environments in which localization fails. In this paper, we propose a method for estimating localizability based on occupancy grid maps; localizability indicates the reliability of localization. Among several possible approaches to estimating localizability, we propose one based on local map correlations: the covariance matrix of a Gaussian distribution fitted to the local map correlations is used to estimate localizability. In this way, we can estimate both the magnitude and the characteristics of the localization error. The experiment confirmed the characteristics of the distribution of correlations for each location on occupancy grid maps, and the localizability of the whole map was estimated using an occupancy grid map of a vast and complex environment. The simulation results showed that the proposed method can estimate the localization error and its characteristics on occupancy grid maps, confirming the method's effectiveness for estimating localizability.
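The correlation-covariance idea can be sketched directly: correlate a local map with shifted copies of itself, treat the normalized correlation surface as a distribution over shifts, and read localizability off its covariance. The window size and the self-correlation shortcut below are simplifying assumptions for illustration:

```python
import numpy as np

def correlation_covariance(grid, window=3):
    """Correlate an occupancy grid with shifted copies of itself, normalize
    the correlation surface over (dx, dy) shifts, and return its covariance.
    A sharp peak (small covariance) means high localizability; an elongated
    ridge reveals the direction of likely localization error."""
    shifts = range(-window, window + 1)
    scores, offsets = [], []
    for dx in shifts:
        for dy in shifts:
            shifted = np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
            scores.append((grid * shifted).sum())
            offsets.append((dx, dy))
    scores = np.array(scores, dtype=float)
    scores /= scores.sum()                       # normalize to a distribution
    offsets = np.array(offsets, dtype=float)
    mean = scores @ offsets
    centered = offsets - mean
    return (centered * scores[:, None]).T @ centered

# A corridor-like map: two occupied walls running along axis 1, so shifts
# along the corridor barely change the correlation -> large variance there.
corridor = np.zeros((20, 20)); corridor[5, :] = 1.0; corridor[15, :] = 1.0
cov = correlation_covariance(corridor)
print(np.diag(cov))   # variance along the corridor >> variance across it
```

The eigenvectors of the resulting covariance matrix give the directions in which localization error is likely to grow, which matches the paper's use of the error's "characteristics" alongside its magnitude.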
Robot Localization in Tunnels: Combining Discrete Features in a Pose Graph Framework
Robot localization inside tunnels is a challenging task due to the special conditions of these environments. The GPS-denied nature of these scenarios, coupled with low visibility, slippery and irregular surfaces, and a lack of distinguishable visual and structural features, makes traditional robotics methods based on cameras, lasers, or wheel encoders unreliable. Fortunately, tunnels provide other types of valuable information that can be used for localization purposes. On the one hand, radio frequency signal propagation in these types of scenarios shows a predictable periodic structure (periodic fadings) under certain settings, and on the other hand, tunnels present structural characteristics (e.g., galleries, emergency shelters) that must comply with safety regulations. The solution presented in this paper consists of detecting both types of features to be introduced as discrete sources of information in an alternative graph-based localization approach. The results obtained from experiments conducted in a real tunnel demonstrate the validity and suitability of the proposed system for inspection applications. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
Reinforcement learning-based autonomous robot navigation and tracking
Autonomous navigation requires determining a collision-free path for a mobile robot
using only partial observations of the environment. This capability is highly needed
for a wide range of applications, such as search and rescue operations, surveillance,
environmental monitoring, and domestic service robots. In many scenarios, an accurate global map is not available beforehand, posing significant challenges for a robot
planning its path. This type of navigation is often referred to as Mapless Navigation,
and such work is not limited to Unmanned Ground Vehicles (UGVs) but extends to
other vehicles such as Unmanned Aerial Vehicles (UAVs). This research
aims to develop Reinforcement Learning (RL)-based methods for autonomous navigation for mobile robots, as well as effective tracking strategies for a UAV to follow
a moving target.
Mapless navigation usually assumes accurate localisation, which is unrealistic.
In the real world, localisation methods, such as simultaneous localisation and mapping (SLAM), are needed. However, the localisation performance could deteriorate
depending on the environment and observation quality. Therefore, to avoid deteriorated localisation, this work introduces an RL-based navigation algorithm to
enable mobile robots to navigate in unknown environments, while incorporating
localisation performance in training the policy. Specifically, a localisation-related
penalty is introduced in the reward space, ensuring localisation safety is taken into
consideration during navigation. Different metrics are formulated to identify when the
localisation performance starts to deteriorate, in order to penalise the robot accordingly. As such, the navigation policy will not only optimise its paths in terms of travel distance and
collision avoidance towards the goal but also avoid venturing into areas that pose
challenges for localisation algorithms.
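The localisation-related penalty described above can be sketched as reward shaping. All weights, terms, and the error-tolerance form below are assumptions for illustration, not the thesis's exact reward design:

```python
def navigation_reward(progress, collided, reached_goal, loc_error,
                      loc_weight=0.5, loc_tolerance=0.3):
    """Illustrative reward for localisation-safe navigation: progress toward
    the goal is rewarded, collisions are strongly penalised, and a
    localisation penalty fires whenever the estimated localisation error
    exceeds a tolerance, steering the policy away from perceptually
    degraded areas."""
    reward = progress                      # distance-to-goal improvement
    if collided:
        reward -= 10.0
    if reached_goal:
        reward += 10.0
    if loc_error > loc_tolerance:          # localisation-safety penalty
        reward -= loc_weight * (loc_error - loc_tolerance)
    return reward

# Identical motion, but one step is taken in a localisation-hostile area.
safe = navigation_reward(progress=0.2, collided=False, reached_goal=False, loc_error=0.1)
risky = navigation_reward(progress=0.2, collided=False, reached_goal=False, loc_error=1.1)
print(safe, risky)   # 0.2 vs 0.2 - 0.5 * 0.8 = -0.2
```

Because the penalty depends only on a scalar localisation-quality metric, any of the deterioration metrics mentioned above (e.g. covariance growth or scan-match residuals) can be plugged in as `loc_error`.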
The localisation-safe algorithm is further extended to UAV navigation, which
uses image-based observations. Instead of deploying an end-to-end control pipeline,
this work establishes a hierarchical control framework that leverages both the capabilities of neural networks for perception and the stability and safety guarantees of
conventional controllers. The high-level controller in this hierarchical framework is a
neural network policy with semantic image inputs, trained using RL algorithms with
localisation-related rewards. The efficacy of the trained policy is demonstrated in
real-world experiments for localisation-safe navigation, and, notably, it exhibits effectiveness without the need for retraining, thanks to the hierarchical control scheme
and semantic inputs. Lastly, a tracking policy is introduced to enable a UAV to track a moving target. This study designs a reward space enabling a vision-based UAV, which utilises
depth images for perception, to follow a target within a safe and visible range. The
objective is to maintain the mobile target at the centre of the drone camera’s image
without being occluded by other objects and to avoid collisions with obstacles. It
is observed that training such a policy from scratch may lead to local minima. To
address this, a state-based teacher policy is trained to perform the tracking task,
with environmental perception relying on direct access to state information, including position coordinates of obstacles, instead of depth images. An RL algorithm is
then constructed to train the vision-based policy, incorporating behavioural guidance from the state-based teacher policy. This approach yields promising tracking
performance.
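The behavioural guidance from the state-based teacher can be sketched as an auxiliary imitation term added to the student's RL objective. The squared-error form and the mixing weight `beta` are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def guided_policy_loss(rl_loss, student_action, teacher_action, beta=0.5):
    """Sketch of teacher-guided training for the vision-based tracking
    policy: the student's RL loss is augmented with an imitation term that
    pulls its action toward the state-based teacher's action for the same
    situation, helping the vision policy escape local minima."""
    diff = np.asarray(student_action) - np.asarray(teacher_action)
    imitation = float(np.sum(diff ** 2))          # squared action error
    return rl_loss + beta * imitation

# Teacher (with privileged state access) commands "yaw right, hold altitude".
loss = guided_policy_loss(rl_loss=1.0,
                          student_action=[0.1, 0.0],
                          teacher_action=[0.4, 0.0])
print(loss)   # 1.0 + 0.5 * 0.09 = 1.045
```

As the vision-based student improves, `beta` would typically be annealed toward zero so the final policy is driven purely by the tracking reward.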