Reinforcement and Curriculum Learning for Off-Road Navigation of a UGV with a 3D LiDAR
This paper presents the use of deep Reinforcement Learning (RL) for autonomous navigation
of an Unmanned Ground Vehicle (UGV) with an onboard three-dimensional (3D) Light Detection
and Ranging (LiDAR) sensor in off-road environments. For training, both the robotic simulator
Gazebo and the Curriculum Learning paradigm are applied. Furthermore, an Actor–Critic Neural
Network (NN) scheme is chosen with a suitable state and a custom reward function. To employ the
3D LiDAR data as part of the input state of the NNs, a virtual two-dimensional (2D) traversability
scanner is developed. The resulting Actor NN has been successfully tested in both real and simulated
experiments and favorably compared with a previous reactive navigation approach on the same UGV.
Partial funding for open access charge: Universidad de Málaga.
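The virtual 2D traversability scanner described above could, in its simplest form, collapse the onboard 3D point cloud into per-azimuth-sector obstacle ranges. A minimal sketch of that idea follows; the function name, sector count, and height threshold are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def virtual_2d_scan(points, n_sectors=72, max_range=20.0, step_thresh=0.25):
    """Collapse a 3D point cloud (N, 3) into a 2D 'traversability scan':
    for each azimuth sector, the planar range to the nearest point whose
    height exceeds a step threshold (treated as non-traversable)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    az = np.arctan2(y, x)                    # azimuth angle in [-pi, pi]
    rng = np.hypot(x, y)                     # planar range from the sensor
    sector = ((az + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    scan = np.full(n_sectors, max_range)     # free sectors report max range
    obstacle = z > step_thresh               # crude per-point traversability test
    for s, r in zip(sector[obstacle], rng[obstacle]):
        if r < scan[s]:
            scan[s] = r                      # keep nearest obstacle per sector
    return scan
```

The resulting fixed-size vector can then feed the Actor-Critic networks in place of a raw, variable-size point cloud.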
A comprehensive survey of unmanned ground vehicle terrain traversability for unstructured environments and sensor technology insights
This article provides a detailed analysis of terrain traversability assessment for unmanned ground vehicles. The analysis is categorized into terrain classification, terrain mapping, and cost-based traversability, with subcategories of appearance-based, geometry-based, and mixed methods. The article also explores the use of machine learning (ML), deep learning (DL), reinforcement learning (RL), and other end-to-end methods as crucial components of advanced terrain traversability analysis. The investigation indicates that a mixed approach, incorporating both exteroceptive and proprioceptive sensors, is more effective, optimized, and reliable for traversability analysis. Additionally, the article discusses the vehicle platforms and sensor technologies used in traversability analysis, making it a valuable resource for researchers in the field. Overall, this paper contributes significantly to the current understanding of traversability analysis in unstructured environments and provides insights for future sensor-based research on advanced traversability analysis.
Learning Terrain-Aware Kinodynamic Model for Autonomous Off-Road Rally Driving With Model Predictive Path Integral Control
High-speed autonomous driving in off-road environments has immense potential
for various applications, but it also presents challenges due to the complexity
of vehicle-terrain interactions. In such environments, it is crucial for the
vehicle to predict its motion and adjust its controls proactively in response
to environmental changes, such as variations in terrain elevation. To this end,
we propose a method for learning a terrain-aware kinodynamic model that is
conditioned on both proprioceptive and exteroceptive information. The proposed
model generates reliable predictions of 6-degree-of-freedom motion and can even
estimate contact interactions without requiring ground truth force data during
training. This enables the design of a safe and robust model predictive
controller through appropriate cost function design which penalizes sampled
trajectories with unstable motion, unsafe interactions, and high levels of
uncertainty derived from the model. We demonstrate the effectiveness of our
approach through experiments on a simulated off-road track, showing that our
proposed model-controller pair outperforms the baseline and ensures robust
high-speed driving performance without control failure.
Comment: Accepted to IEEE Robotics and Automation Letters (and ICRA 2024). Our video can be found at https://youtu.be/VXf_prNQnJo. Project page: https://sites.google.com/view/terrainawarekinody
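The cost-function design mentioned in this abstract, penalizing unstable motion and high model uncertainty in sampled rollouts, can be sketched alongside the standard MPPI exponential weighting. All weights, state layout, and function names below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def trajectory_cost(states, uncertainty, goal,
                    w_goal=1.0, w_tilt=5.0, w_unc=2.0, tilt_max=0.4):
    """Cost of one sampled rollout.
    states: (T, 6) array of [x, y, z, roll, pitch, yaw] per step;
    uncertainty: (T,) predictive std from the learned dynamics model."""
    pos_err = np.linalg.norm(states[:, :2] - goal[None, :], axis=1)
    tilt = np.maximum(np.abs(states[:, 3]), np.abs(states[:, 4]))
    unstable = (tilt > tilt_max).astype(float)   # hard penalty on rollover risk
    return np.sum(w_goal * pos_err + w_tilt * unstable + w_unc * uncertainty)

def mppi_weights(costs, temperature=1.0):
    """Standard MPPI softmax weighting over sampled trajectory costs."""
    beta = costs.min()                           # shift for numerical stability
    w = np.exp(-(costs - beta) / temperature)
    return w / w.sum()
```

Rollouts that tilt past the threshold or traverse regions where the model is uncertain receive high cost and thus near-zero weight in the control update.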
Context-Conditional Navigation with a Learning-Based Terrain- and Robot-Aware Dynamics Model
In autonomous navigation settings, several quantities can be subject to
variations. Terrain properties such as friction coefficients may vary over time
depending on the location of the robot. Also, the dynamics of the robot may
change due to, e.g., different payloads, changing the system's mass, or wear
and tear, changing actuator gains or joint friction. An autonomous agent should
thus be able to adapt to such variations. In this paper, we develop a novel
probabilistic, terrain- and robot-aware forward dynamics model, termed TRADYN,
which is able to adapt to the above-mentioned variations. It builds on recent
advances in meta-learning forward dynamics models based on Neural Processes. We
evaluate our method in a simulated 2D navigation setting with a unicycle-like
robot and different terrain layouts with spatially varying friction
coefficients. In our experiments, the proposed model exhibits lower prediction
error for the task of long-horizon trajectory prediction, compared to
non-adaptive ablation models. We also evaluate our model on the downstream task
of navigation planning, which demonstrates improved performance in planning
control-efficient paths by taking robot and terrain properties into account.
Comment: © 2023 IEEE. Accepted for publication in European Conference on Mobile Robots (ECMR), 2023. Updated copyright statement.
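The evaluation setting above, a unicycle-like robot whose dynamics depend on spatially varying friction, can be reproduced with a simple analytic stand-in for the learned model. The step function below is an illustrative sketch of such terrain-dependent dynamics, not TRADYN's Neural Process architecture:

```python
import numpy as np

def unicycle_step(state, action, friction, dt=0.1):
    """One Euler step of a unicycle-like robot whose effective acceleration
    is scaled by the local terrain friction coefficient. A terrain-aware
    dynamics model would have to infer `friction` from context observations."""
    x, y, theta, v = state
    a_cmd, omega = action
    v_new = v + friction * a_cmd * dt            # terrain modulates acceleration
    x_new = x + v_new * np.cos(theta) * dt
    y_new = y + v_new * np.sin(theta) * dt
    theta_new = theta + omega * dt
    return np.array([x_new, y_new, theta_new, v_new])
```

A non-adaptive model that assumes a single global friction value accumulates error on such terrain, which is the gap the meta-learned, context-conditioned model is designed to close.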
Safe Robot Planning and Control Using Uncertainty-Aware Deep Learning
In order for robots to autonomously operate in novel environments over extended periods of time, they must learn and adapt to changes in the dynamics of their motion and the environment. Neural networks have been shown to be a versatile and powerful tool for learning dynamics and semantic information. However, there is reluctance to deploy these methods in safety-critical or high-risk applications, since neural networks tend to be black-box function approximators. Therefore, there is a need to investigate how these machine learning methods can be safely leveraged for learning-based control, planning, and traversability analysis. The aim of this thesis is to explore methods both for establishing safety guarantees and for accurately quantifying risk when using deep neural networks for robot planning, especially in high-risk environments. First, we consider uncertainty-aware Bayesian Neural Networks for adaptive control and introduce a method for guaranteeing safety under certain assumptions. Second, we investigate deep quantile regression methods for learning time- and state-varying uncertainties, which we use to perform trajectory optimization with Model Predictive Control. Third, we introduce a complete framework for risk-aware traversability and planning, which we use to enable safe exploration of extreme environments. Fourth, we again leverage deep quantile regression and establish a method for accurately learning the distribution of traversability risks in these environments, which can be used to create safety constraints for planning and control. (Ph.D. thesis)
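The deep quantile regression mentioned in this abstract rests on the pinball (quantile) loss: training a network under this loss at, say, tau = 0.95 drives it to predict the 95th percentile of a risk distribution rather than its mean. A minimal sketch of the loss itself (the function name is our own):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball loss for quantile tau in (0, 1). Underestimating the target
    is penalized with weight tau, overestimating with weight (1 - tau),
    so minimizing it yields the tau-quantile of y_true given the inputs."""
    err = y_true - y_pred
    return np.mean(np.maximum(tau * err, (tau - 1) * err))
```

The asymmetry is the point: with tau = 0.9, underestimating a traversability risk costs nine times more than overestimating it, which is the conservative behavior a safety constraint needs.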
CAR-Net: Clairvoyant Attentive Recurrent Network
We present an interpretable framework for path prediction that leverages
dependencies between agents' behaviors and their spatial navigation
environment. We exploit two sources of information: the past motion trajectory
of the agent of interest and a wide top-view image of the navigation scene. We
propose a Clairvoyant Attentive Recurrent Network (CAR-Net) that learns where
to look in a large image of the scene when solving the path prediction task.
Our method can attend to any area, or combination of areas, within the raw
image (e.g., road intersections) when predicting the trajectory of the agent.
This allows us to visualize fine-grained semantic elements of navigation scenes
that influence the prediction of trajectories. To study the impact of space on
agents' trajectories, we build a new dataset made of top-view images of
hundreds of scenes (Formula One racing tracks) where agents' behaviors are
heavily influenced by known areas in the images (e.g., upcoming turns). CAR-Net
successfully attends to these salient regions. Additionally, CAR-Net reaches
state-of-the-art accuracy on the standard trajectory forecasting benchmark,
Stanford Drone Dataset (SDD). Finally, we show CAR-Net's ability to generalize
to unseen scenes.
Comment: The 2nd and 3rd authors contributed equally.
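The core mechanism behind "learning where to look", attending over a spatial grid of scene features, can be sketched with plain dot-product soft attention. This is an illustrative simplification, not CAR-Net's exact recurrent architecture:

```python
import numpy as np

def soft_attention(features, query):
    """Soft attention over a spatial feature grid (H, W, D): score each
    cell against a query vector, softmax-normalize the scores into an
    attention map, and return the attended context vector. The map itself
    is what makes the prediction interpretable (it can be visualized
    over the top-view image)."""
    H, W, D = features.shape
    flat = features.reshape(-1, D)
    scores = flat @ query                          # (H*W,) relevance scores
    scores -= scores.max()                         # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()   # attention weights sum to 1
    context = attn @ flat                          # (D,) weighted feature sum
    return context, attn.reshape(H, W)
```

In a trajectory forecaster, the query would come from the recurrent state summarizing the agent's past motion, and the context vector would condition the next-position prediction.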