EVORA: Deep Evidential Traversability Learning for Risk-Aware Off-Road Autonomy
Traversing terrain with good traction is crucial for achieving fast off-road
navigation. Instead of manually designing costs based on terrain features,
existing methods learn terrain properties directly from data via
self-supervision, but challenges remain to properly quantify and mitigate risks
due to uncertainties in learned models. This work efficiently quantifies both
aleatoric and epistemic uncertainties by learning discrete traction
distributions and probability densities of the traction predictor's latent
features. Leveraging evidential deep learning, we parameterize Dirichlet
distributions with the network outputs and propose a novel uncertainty-aware
squared Earth Mover's distance loss with a closed-form expression that improves
learning accuracy and navigation performance. The proposed risk-aware planner
simulates state trajectories with the worst-case expected traction to handle
aleatoric uncertainty, and penalizes trajectories moving through terrain with
high epistemic uncertainty. Our approach is extensively validated in simulation
and on wheeled and quadruped robots, showing improved navigation performance
compared to methods that assume no slip, assume the expected traction, or
optimize for the worst-case expected cost.
Comment: Under review. Journal extension of arXiv:2210.00153. Project website: https://xiaoyi-cai.github.io/evora
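For one-dimensional ordinal bins, the squared Earth Mover's distance referenced in the abstract has a simple closed form: the sum of squared differences between the two cumulative distributions. A minimal NumPy sketch of that building block (omitting the paper's Dirichlet parameterization and uncertainty weighting):

```python
import numpy as np

def squared_emd_loss(pred, target):
    """Closed-form squared EMD between two discrete distributions over
    ordered bins: the sum of squared differences of their CDFs."""
    cdf_diff = np.cumsum(pred) - np.cumsum(target)
    return float(np.sum(cdf_diff ** 2))

# Unlike cross-entropy, this loss respects bin ordering: putting mass
# one bin away from the target costs less than putting it two away.
onehot = lambda i: np.eye(3)[i]
near = squared_emd_loss(onehot(0), onehot(1))
far = squared_emd_loss(onehot(0), onehot(2))
assert near == 1.0 and far == 2.0
assert squared_emd_loss(onehot(1), onehot(1)) == 0.0
```

This ordering sensitivity is what makes the loss a natural fit for discrete traction bins, where adjacent bins represent similar physical traction.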
Safe Robot Planning and Control Using Uncertainty-Aware Deep Learning
In order for robots to autonomously operate in novel environments over extended periods of time, they must learn and adapt to changes in the dynamics of their motion and the environment. Neural networks have been shown to be a versatile and powerful tool for learning dynamics and semantic information. However, there is reluctance to deploy these methods on safety-critical or high-risk applications, since neural networks tend to be black-box function approximators. Therefore, there is a need for investigation into how these machine learning methods can be safely leveraged for learning-based controls, planning, and traversability. The aim of this thesis is to explore methods for both establishing safety guarantees as well as accurately quantifying risks when using deep neural networks for robot planning, especially in high-risk environments. First, we consider uncertainty-aware Bayesian Neural Networks for adaptive control, and introduce a method for guaranteeing safety under certain assumptions. Second, we investigate deep quantile regression learning methods for learning time-and-state varying uncertainties, which we use to perform trajectory optimization with Model Predictive Control. Third, we introduce a complete framework for risk-aware traversability and planning, which we use to enable safe exploration of extreme environments. Fourth, we again leverage deep quantile regression and establish a method for accurately learning the distribution of traversability risks in these environments, which can be used to create safety constraints for planning and control.
Ph.D. thesis
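The deep quantile regression mentioned above is typically trained with the pinball (quantile) loss, whose minimizer over a constant prediction is the target quantile. A small illustrative sketch, independent of the thesis's specific models:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: an asymmetric penalty whose minimizer
    over a constant prediction is the tau-quantile of the targets."""
    err = y_true - y_pred
    return float(np.mean(np.maximum(tau * err, (tau - 1.0) * err)))

# Minimizing over a grid of constant predictions recovers the
# empirical 0.9-quantile of the samples.
rng = np.random.default_rng(0)
samples = rng.standard_normal(10_000)
grid = np.linspace(-3.0, 3.0, 601)
best = grid[int(np.argmin([pinball_loss(samples, g, 0.9) for g in grid]))]
assert abs(best - np.quantile(samples, 0.9)) < 0.02
```

Replacing the constant prediction with a neural network output conditioned on state gives the time-and-state varying uncertainty bounds the abstract describes.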
Hybrid Imitative Planning with Geometric and Predictive Costs in Off-road Environments
Geometric methods for solving open-world off-road navigation tasks, by
learning occupancy and metric maps, provide good generalization but can be
brittle in outdoor environments that violate their assumptions (e.g., tall
grass). Learning-based methods can directly learn collision-free behavior from
raw observations, but are difficult to integrate with standard geometry-based
pipelines. This creates an unfortunate conflict -- either use learning and lose
out on well-understood geometric navigational components, or do not use it, in
favor of extensively hand-tuned geometry-based cost maps. In this work, we
reject this dichotomy by designing the learning and non-learning-based
components in a way such that they can be effectively combined in a
self-supervised manner. Both components contribute to a planning criterion: the
learned component contributes predicted traversability as rewards, while the
geometric component contributes obstacle cost information. We instantiate and
comparatively evaluate our system in both in-distribution and
out-of-distribution environments, showing that this approach inherits
complementary gains from the learned and geometric components and significantly
outperforms either of them. Videos of our results are hosted at
https://sites.google.com/view/hybrid-imitative-plannin
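One way to read the planning criterion described above: per map cell, add the geometric obstacle cost and penalize low learned traversability. A toy sketch (the cost form and the grass/rock numbers are illustrative assumptions, not the paper's exact formulation):

```python
def cell_cost(obst, trav, w_learn=1.0, w_geom=1.0):
    """Combined per-cell planning cost: geometric obstacle cost plus
    (1 - learned traversability), the penalty form of the reward."""
    return w_geom * obst + w_learn * (1.0 - trav)

# Tall grass and a rock look alike to the geometric layer (both
# register as near-obstacles), but the learned layer separates them.
grass = {"obst": 0.9, "trav": 0.9}   # geometrically tall, drivable
rock = {"obst": 1.0, "trav": 0.1}    # geometrically tall, lethal
assert abs(grass["obst"] - rock["obst"]) < 0.2       # geometry: ambiguous
assert cell_cost(**grass) + 0.5 < cell_cost(**rock)  # combined: clear gap
```

The weights `w_learn` and `w_geom` are hypothetical knobs for trading off the two components; the paper's contribution is making both components trainable and combinable in a self-supervised manner.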
Learning Agility and Adaptive Legged Locomotion via Curricular Hindsight Reinforcement Learning
Agile and adaptive maneuvers such as fall recovery, high-speed turning, and
sprinting in the wild are challenging for legged systems. We propose a
Curricular Hindsight Reinforcement Learning (CHRL) that learns an end-to-end
tracking controller that achieves powerful agility and adaptation for the
legged robot. The two key components are (i) a novel automatic curriculum
strategy on task difficulty and (ii) a Hindsight Experience Replay strategy
adapted to legged locomotion tasks. We demonstrated successful agile and
adaptive locomotion on a real quadruped robot that performed fall recovery
autonomously, coherent trotting, sustained outdoor speeds up to 3.45 m/s, and
turning speeds up to 3.2 rad/s. This system produces adaptive behaviours
responding to changing situations and unexpected disturbances on natural
terrains like grass and dirt.
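Hindsight Experience Replay, the second component, relabels transitions with goals that were actually achieved later in the episode, so that failed rollouts still carry reward signal. A generic sketch of the standard "future" relabeling strategy (not the paper's locomotion-specific adaptation):

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Hindsight Experience Replay ('future' strategy): in addition to
    the original goal, relabel each transition with up to k goals that
    were actually achieved later in the same episode."""
    out = []
    for t, (state, action, next_state, goal) in enumerate(episode):
        out.append((state, action, next_state, goal,
                    reward_fn(next_state, goal)))
        future = episode[t:]
        for _ in range(min(k, len(future))):
            _, _, achieved, _ = random.choice(future)
            out.append((state, action, next_state, achieved,
                        reward_fn(next_state, achieved)))
    return out

# Toy 1-D episode: the commanded goal (5) is never reached, yet the
# relabeled transitions contain hindsight successes (reward 0).
reward = lambda s, g: 0.0 if s == g else -1.0
episode = [(0, 1, 1, 5), (1, 1, 2, 5), (2, 1, 3, 5)]
out = her_relabel(episode, reward, k=2)
assert any(r == 0.0 for *_, r in out)
assert len(out) == 8  # 3 originals + 2 + 2 + 1 relabels
```

For locomotion, the achieved "goal" would be something like a realized velocity command rather than a position, which is the kind of task-specific adaptation the abstract alludes to.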
Belief-Space Planning for Resourceful Manipulation and Mobility
Robots are increasingly expected to work in partially observable and unstructured environments. They need to select actions that exploit perceptual and motor resourcefulness to manage uncertainty based on the demands of the task and environment. The research in this dissertation makes two primary contributions. First, it develops a new concept in resourceful robot platforms called the UMass uBot and introduces the sixth and seventh in the uBot series. uBot-6 introduces multiple postural configurations that enable different modes of mobility and manipulation to meet the needs of a wide variety of tasks and environmental constraints. uBot-7 extends this with the use of series elastic actuators (SEAs) to improve manipulation capabilities and support safer operation around humans. The resourcefulness of these robots is complemented with a belief-space planning framework that enables task-driven action selection in the context of the partially observable environment. The framework uses a compact but expressive state representation based on object models. We extend an existing affordance-based object model, called an aspect transition graph (ATG), with geometric information. This enables object-centric modeling of features and actions, making the model much more expressive without increasing the complexity. A novel task representation enables the belief-space planner to perform general object-centric tasks ranging from recognition to manipulation of objects. The approach supports the efficient handling of multi-object scenes. The combination of the physical platform and the planning framework are evaluated in two novel, challenging, partially observable planning domains. The ARcube domain provides a large population of objects that are highly ambiguous. Objects can only be differentiated using multi-modal sensor information and manual interactions. 
In the dexterous mobility domain, a robot can employ multiple mobility modes to complete navigation tasks under a variety of possible environment constraints. The performance of the proposed approach is evaluated using experiments in simulation and on a real robot.
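At the core of belief-space planning is a Bayes measurement update over a discrete belief; ambiguous observations leave the belief unchanged, which is precisely what motivates selecting informative actions. A minimal sketch (the two-object setup is an illustrative stand-in for the highly ambiguous ARcube domain):

```python
import numpy as np

def belief_update(belief, likelihoods):
    """Bayes filter measurement update over a discrete belief:
    posterior is proportional to likelihood times prior, normalized."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Two visually ambiguous objects ('A', 'B') that only a side-face
# observation can distinguish -- the reason the planner must act
# (move, reorient) to gather information.
belief = np.array([0.5, 0.5])
front_face = np.array([0.9, 0.9])   # uninformative: both look alike
side_face = np.array([0.9, 0.1])    # discriminative view

belief = belief_update(belief, front_face)   # still [0.5, 0.5]
belief = belief_update(belief, side_face)    # shifts to ~[0.9, 0.1]
assert abs(belief[0] - 0.9) < 1e-9
```

A belief-space planner scores candidate actions by the beliefs they are expected to produce, preferring actions like the side-face view that collapse ambiguity.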
Adaptive Localization and Mapping for Planetary Rovers
Future rovers will be equipped with substantial onboard autonomy as space agencies and industry proceed with missions studies and technology development in preparation for the next planetary exploration missions. Simultaneous Localization and Mapping (SLAM) is a fundamental part of autonomous capabilities and has close connections to robot perception, planning and control. SLAM positively affects rover operations and mission success. The SLAM community has made great progress in the last decade by enabling real world solutions in terrestrial applications and is nowadays addressing important challenges in robust performance, scalability, high-level understanding, resources awareness and domain adaptation. In this thesis, an adaptive SLAM system is proposed in order to improve rover navigation performance and demand. This research presents a novel localization and mapping solution following a bottom-up approach. It starts with an Attitude and Heading Reference System (AHRS), continues with a 3D odometry dead reckoning solution and builds up to a full graph optimization scheme which uses visual odometry and takes into account rover traction performance, bringing scalability to modern SLAM solutions. A design procedure is presented in order to incorporate inertial sensors into the AHRS. The procedure follows three steps: error characterization, model derivation and filter design. A complete kinematics model of the rover locomotion subsystem is developed in order to improve the wheel odometry solution. Consequently, the parametric model predicts delta poses by solving a system of equations with weighed least squares. In addition, an odometry error model is learned using Gaussian processes (GPs) in order to predict non-systematic errors induced by poor traction of the rover with the terrain. The odometry error model complements the parametric solution by adding an estimation of the error. 
The gained information serves to adapt the localization and mapping solution to the current navigation demands (domain adaptation). The adaptivity strategy is designed to adjust the visual odometry computational load (active perception) and to influence the optimization back-end by including highly informative keyframes in the graph (adaptive information gain). Following this strategy, the solution is adapted to the navigation demands, providing an adaptive SLAM system driven by the navigation performance and conditions of the interaction with the terrain. The proposed methodology is experimentally verified on a representative planetary rover under realistic field test scenarios. This thesis introduces a modern SLAM system which adapts the estimated pose and map to the predicted error. The system maintains accuracy with fewer nodes, taking the best of both wheel and visual methods in a consistent graph-based smoothing approach.
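The weighted least squares step described above fuses redundant wheel/joint rate measurements into a single body-velocity (delta-pose) estimate, with weights encoding confidence in each measurement. A toy sketch (the four-wheel setup and the particular weights are illustrative assumptions):

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min ||W^(1/2)(Ax - b)||^2 via the normal equations:
    x = (A^T W A)^(-1) A^T W b."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Four wheels each measure the same forward velocity v; one wheel is
# slipping and reads high, so it should be down-weighted.
A = np.ones((4, 1))
b = np.array([1.0, 1.0, 1.0, 3.0])   # wheel 4 reads high (slip)
uniform = weighted_least_squares(A, b, np.ones(4))
downweighted = weighted_least_squares(A, b, np.array([1.0, 1.0, 1.0, 0.01]))
assert abs(uniform[0] - 1.5) < 1e-9       # slip corrupts the estimate
assert abs(downweighted[0] - 1.0) < 0.02  # down-weighting recovers v
```

The learned Gaussian-process error model plays the role of setting those weights online: when it predicts poor traction, the corresponding wheel measurements count for less.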
How Does It Feel? Self-Supervised Costmap Learning for Off-Road Vehicle Traversability
Estimating terrain traversability in off-road environments requires reasoning
about complex interaction dynamics between the robot and these terrains.
However, it is challenging to build an accurate physics model, or create
informative labels to learn a model in a supervised manner, for these
interactions. We propose a method that learns to predict traversability
costmaps by combining exteroceptive environmental information with
proprioceptive terrain interaction feedback in a self-supervised manner.
Additionally, we propose a novel way of incorporating robot velocity in the
costmap prediction pipeline. We validate our method in multiple short and
large-scale navigation tasks on a large, autonomous all-terrain vehicle (ATV)
on challenging off-road terrains, and demonstrate ease of integration on a
separate large ground robot. Our short-scale navigation results show that using
our learned costmaps leads to overall smoother navigation, and provides the
robot with a more fine-grained understanding of the interactions between the
robot and different terrain types, such as grass and gravel. Our large-scale
navigation trials show that we can reduce the number of interventions by up to
57% compared to an occupancy-based navigation baseline in challenging off-road
courses ranging from 400 m to 3150 m.
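The self-supervised labeling idea can be sketched as projecting a proprioceptive roughness signal onto the map cells the robot actually drove over, so no manual annotation is needed. In this sketch, variance of vertical acceleration stands in for the paper's interaction feedback (an assumption for illustration):

```python
import numpy as np

def self_supervised_labels(positions, imu_z, grid_res=1.0):
    """Generate costmap training labels by assigning proprioceptive
    'bumpiness' (variance of vertical acceleration over a window) to
    the grid cells the robot traversed."""
    labels = {}
    for (x, y), window in zip(positions, imu_z):
        cell = (int(x // grid_res), int(y // grid_res))
        labels.setdefault(cell, []).append(float(np.var(window)))
    return {c: float(np.mean(v)) for c, v in labels.items()}

# Smooth gravel vs. rough grass: the rougher cell receives the
# higher self-supervised cost label.
smooth = np.full(50, 9.81) + 0.01 * np.sin(np.arange(50))
rough = np.full(50, 9.81) + 0.5 * np.sin(np.arange(50))
labels = self_supervised_labels([(0.5, 0.5), (1.5, 0.5)], [smooth, rough])
assert labels[(0, 0)] < labels[(1, 0)]
```

A network trained to regress these labels from exteroceptive inputs (camera, lidar) can then predict cost for terrain the robot has not yet touched, which is the costmap used at planning time.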