30 research outputs found
LiDAR-guided object search and detection in Subterranean Environments
Detecting objects of interest, such as human survivors, safety equipment, and
structure access points, is critical to any search-and-rescue operation. Robots
deployed for such time-sensitive efforts rely on their onboard sensors to
perform their designated tasks. However, as disaster response operations are
predominantly conducted under perceptually degraded conditions, commonly
used sensors such as visual cameras and LiDARs suffer significant
performance degradation. In response, this work presents a method that
leverages the complementary nature of vision and depth sensors, using
multi-modal information to aid object detection at longer distances. In
particular, depth
and intensity values from sparse LiDAR returns are used to generate proposals
for objects present in the environment. These proposals are then used by a
Pan-Tilt-Zoom (PTZ) camera system to perform a directed search, adjusting its
pose and zoom level to detect and classify objects in difficult environments.
The proposed work has been thoroughly verified using an
ANYmal quadruped robot in underground settings and on datasets collected during
the DARPA Subterranean Challenge finals.
Comment: 6 pages, 5 figures, 2 tables, conference: IEEE International
Symposium on Safety, Security and Rescue Robotics (SSRR-2022), Seville, Spain
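As a rough illustration of the directed-search step described above, a PTZ command can be derived from the centroid of a LiDAR proposal. The geometry below (camera co-located with the LiDAR origin, a fixed assumed target width) is a hypothetical sketch, not the paper's implementation:

```python
import math

def ptz_command(cx, cy, cz, target_width=1.0, hfov_max=math.radians(60)):
    """Point a PTZ camera at a LiDAR proposal centroid (cx, cy, cz),
    assuming the camera sits at the LiDAR frame origin (hypothetical
    geometry). Returns (pan, tilt, hfov) in radians, where the horizontal
    field of view is narrowed so an object of `target_width` roughly
    fills the image."""
    pan = math.atan2(cy, cx)                      # yaw toward the centroid
    dist = math.sqrt(cx ** 2 + cy ** 2)           # horizontal range
    tilt = math.atan2(cz, dist)                   # pitch toward the centroid
    # Zoom in (narrow the FOV) for distant proposals, capped at the widest FOV.
    hfov = min(hfov_max, 2 * math.atan2(target_width / 2, max(dist, 1e-6)))
    return pan, tilt, hfov
```

In practice the proposal centroid would come from clustering sparse LiDAR returns by depth and intensity, and the command would be expressed in the camera's own frame.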
Locomotion Policy Guided Traversability Learning using Volumetric Representations of Complex Environments
Despite the progress in legged robotic locomotion, autonomous navigation in
unknown environments remains an open problem. Ideally, the navigation system
utilizes the full potential of the robots' locomotion capabilities while
operating within safety limits under uncertainty. The robot must sense and
analyze the traversability of the surrounding terrain, which depends on the
hardware, locomotion control, and terrain properties. It may contain
information about the risk, energy, or time consumption needed to traverse the
terrain. To avoid hand-crafted traversability cost functions, we propose to
collect traversability information about the robot and locomotion policy by
simulating the traversal over randomly generated terrains using a physics
simulator. Thousands of robots are simulated in parallel, controlled by the
same locomotion policy used in reality, acquiring the equivalent of 57 years
of real-world locomotion experience. For deployment on the real robot, a
sparse convolutional
network is trained to predict the simulated traversability cost, which is
tailored to the deployed locomotion policy, from an entirely geometric
representation of the environment in the form of a 3D voxel-occupancy map. This
representation avoids the need for commonly used elevation maps, which are
error-prone in the presence of overhanging obstacles and multi-floor or
low-ceiling scenarios. The effectiveness of the proposed traversability
prediction network is demonstrated for path planning for the legged robot
ANYmal in various indoor and natural environments.
Comment: accepted for the 2022 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2022)
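The geometric input described above, a 3D voxel-occupancy map, can be sketched minimally as follows. The grid parameters are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def voxelize(points, voxel_size, grid_shape, origin):
    """Build a binary 3D voxel-occupancy grid from a point cloud.

    points:     (N, 3) xyz coordinates in the world frame.
    voxel_size: edge length of a cubic voxel, in meters.
    grid_shape: (X, Y, Z) number of voxels per axis.
    origin:     world-frame corner of the grid."""
    # Map each point to integer voxel indices.
    idx = np.floor((points - origin) / voxel_size).astype(int)
    # Discard points that fall outside the grid bounds.
    keep = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid = np.zeros(grid_shape, dtype=bool)
    grid[tuple(idx[keep].T)] = True
    return grid
```

Unlike a 2.5D elevation map, such a volumetric grid retains overhangs and multiple floors, which is what allows the sparse network described in the abstract to handle low-ceiling and multi-floor scenes.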
Learning-based Localizability Estimation for Robust LiDAR Localization
LiDAR-based localization and mapping is one of the core components in many
modern robotic systems due to the direct integration of range and geometry,
allowing for precise motion estimation and generation of high quality maps in
real-time. Yet, when the scene provides insufficient environmental
constraints, this dependence on geometry can result in localization failure,
as happens in self-symmetric surroundings such as tunnels. This work
addresses precisely this issue by proposing a neural network-based estimation
approach for detecting (non-)localizability during robot operation. Special
attention is given to the localizability of scan-to-scan registration, as it is
a crucial component in many LiDAR odometry estimation pipelines. In contrast to
previous, mostly traditional detection approaches, the proposed method enables
early detection of failure by estimating the localizability on raw sensor
measurements without evaluating the underlying registration optimization.
Moreover, previous approaches remain limited in their ability to generalize
across environments and sensor types, as heuristic-tuning of degeneracy
detection thresholds is required. The proposed approach avoids this problem by
learning from a collection of different environments, allowing the network to
function over various scenarios. Furthermore, the network is trained
exclusively on simulated data, avoiding arduous data collection in challenging
and degenerate, often hard-to-access, environments. The presented method is
tested during field experiments conducted across challenging environments and
on two different sensor types without any modifications. The observed detection
performance is on par with state-of-the-art methods after environment-specific
threshold tuning.
Comment: 8 pages, 7 figures, 4 tables
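For context, a minimal version of the classical heuristic baseline that the learned approach replaces (thresholding a geometric degeneracy score per scan) might look like the sketch below. The score definition is a common convention, not the paper's network:

```python
import numpy as np

def degeneracy_score(normals):
    """Classical heuristic degeneracy score for a LiDAR scan.

    normals: (N, 3) unit surface normals estimated from the scan.
    Returns the smallest eigenvalue of the normals' 3x3 information
    matrix; a near-zero value indicates a translationally
    under-constrained scene (e.g. a tunnel, where all normals lie
    in one plane)."""
    H = normals.T @ normals  # information matrix of point-to-plane translation
    return float(np.linalg.eigvalsh(H)[0])  # eigvalsh sorts ascending
```

The weakness the abstract points out is visible here: the threshold on this score must be re-tuned per environment and sensor, which is what the learned estimator avoids.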
X-ICP: Localizability-Aware LiDAR Registration for Robust Localization in Extreme Environments
Modern robotic systems are required to operate in challenging environments,
which demand reliable localization under adverse conditions. LiDAR-based
localization methods, such as the Iterative Closest Point (ICP) algorithm, can
suffer in geometrically uninformative environments that are known to
deteriorate point cloud registration performance and push optimization toward
divergence along weakly constrained directions. To overcome this issue, this
work proposes i) a robust fine-grained localizability detection module, and ii)
a localizability-aware constrained ICP optimization module, which couples with
the localizability detection module in a unified manner. The proposed
localizability detection uses the correspondences between the scan and the
map to analyze the alignment strength along the principal directions of the
optimization, yielding a fine-grained LiDAR localizability analysis. In the
second part, this localizability analysis is then integrated
into the scan-to-map point cloud registration to generate drift-free pose
updates by enforcing controlled updates or leaving the degenerate directions of
the optimization unchanged. The proposed method is thoroughly evaluated and
compared to state-of-the-art methods in simulated and real-world experiments,
demonstrating the performance and reliability improvement in LiDAR-challenging
environments. In all experiments, the proposed framework demonstrates accurate
and generalizable localizability detection and robust pose estimation without
environment-specific parameter tuning.
Comment: 20 pages, 20 figures. Submitted to IEEE Transactions on Robotics.
Supplementary video: https://youtu.be/SviLl7q69aA Project website:
https://sites.google.com/leggedrobotics.com/x-ic
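The idea of leaving degenerate directions unchanged during optimization can be sketched as an eigendecomposition of the Gauss-Newton Hessian with the update zeroed along weakly constrained directions. The threshold and structure below are illustrative assumptions, not the actual X-ICP implementation:

```python
import numpy as np

def constrained_update(JtJ, Jtr, eig_threshold=1e-3):
    """Solve for a 6-DoF pose update while leaving degenerate
    directions of the ICP optimization unchanged.

    JtJ: (6, 6) Gauss-Newton approximate Hessian of the ICP cost.
    Jtr: (6,)   gradient term.
    Directions whose eigenvalue falls below `eig_threshold` (an
    illustrative value) are treated as non-localizable and receive
    a zero update."""
    eigvals, eigvecs = np.linalg.eigh(JtJ)
    localizable = eigvals > eig_threshold
    # Invert the Hessian only on the well-constrained subspace;
    # degenerate eigen-directions get a zero inverse eigenvalue.
    inv_vals = np.where(localizable,
                        1.0 / np.where(localizable, eigvals, 1.0),
                        0.0)
    update = eigvecs @ (inv_vals * (eigvecs.T @ Jtr))
    return update, localizable
```

This mirrors the behavior the abstract describes: the registration still corrects the pose along well-constrained directions while the degenerate ones are left to other sources (e.g. odometry) instead of drifting toward divergence.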
Present and Future of SLAM in Extreme Underground Environments
This paper reports on the state of the art in underground SLAM by discussing
different SLAM strategies and results across six teams that participated in the
three-year-long SubT competition. In particular, the paper has four main goals.
First, we review the algorithms, architectures, and systems adopted by the
teams; particular emphasis is put on lidar-centric SLAM solutions (the go-to
approach for virtually all teams in the competition), heterogeneous multi-robot
operation (including both aerial and ground robots), and real-world underground
operation (from the presence of obscurants to the need to handle tight
computational constraints). We do not shy away from discussing the dirty
details behind the different SubT SLAM systems, which are often omitted from
technical papers. Second, we discuss the maturity of the field by highlighting
what is possible with the current SLAM systems and what we believe is within
reach with some good systems engineering. Third, we outline what we believe
are fundamental open problems that are likely to require further research to
break through. Finally, we provide a list of open-source SLAM implementations and
datasets that have been produced during the SubT challenge and related efforts,
and constitute a useful resource for researchers and practitioners.Comment: 21 pages including references. This survey paper is submitted to IEEE
Transactions on Robotics for pre-approva