Real-Time Navigation for Bipedal Robots in Dynamic Environments
The popularity of mobile robots has been steadily growing, with these robots
being increasingly utilized to execute tasks previously completed by human
workers. For bipedal robots to see this same success, robust autonomous
navigation systems need to be developed that can execute in real-time and
respond to dynamic environments. These systems can be divided into three
stages: perception, planning, and control. A holistic navigation framework for
bipedal robots must successfully integrate all three components of the
autonomous navigation problem to enable robust real-world navigation. In this
paper, we present a real-time navigation framework for bipedal robots in
dynamic environments. The proposed system addresses all components of the
navigation problem: we introduce a depth-based perception system for obstacle detection, mapping, and localization; we develop a two-stage planner that generates collision-free trajectories robust to unknown and dynamic environments; and we execute the planned trajectories on the Digit bipedal robot's walking gait controller. The navigation framework is validated through a series of
simulation and hardware experiments that contain unknown environments and
dynamic obstacles.
Comment: Submitted to 2023 IEEE International Conference on Robotics and Automation (ICRA). For associated experiment recordings see https://www.youtube.com/watch?v=WzHejHx-Kz
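The three-stage decomposition above lends itself to a simple receding-horizon loop. The following is a minimal, self-contained Python sketch of that structure; every function here is a hypothetical placeholder standing in for the paper's actual perception, planning, and control modules, not the framework itself.

import numpy as np

def perceive(depth_image, threshold=1.0):
    """Perception stub: treat cells closer than `threshold` as obstacles."""
    return depth_image < threshold                      # boolean occupancy grid

def plan(occupancy, start, goal, n=20):
    """Two-stage planning stub: stage 1 draws a coarse straight-line path;
    stage 2 keeps only waypoints that fall in free space."""
    waypoints = np.linspace(start, goal, n)             # stage 1: coarse path
    free = [w for w in waypoints
            if not occupancy[int(w[0]), int(w[1])]]
    return np.array(free)                               # stage 2: collision-filtered

def control(path, state, k=0.5):
    """Control stub: proportional velocity command toward the next waypoint,
    which a walking gait controller would turn into footsteps."""
    return k * (path[0] - state) if len(path) else np.zeros(2)

depth = np.random.rand(50, 50) * 3.0                    # fake depth image
cmd = control(plan(perceive(depth), np.array([0.0, 0.0]),
                   np.array([49.0, 49.0])), np.array([0.0, 0.0]))

In a real deployment the loop would rerun at the controller's rate so the local stage can react to obstacles that appear after the coarse path was drawn.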
Contextualized Robot Navigation
In order to improve the interaction between humans and robots, robots need to be able to move about in a way that is appropriate to the complex environments around them. One way to investigate how the robots should move is through the lens of theatre, which provides us with ways to analyze the robot's movements and the motivations for moving in particular ways. In particular, this has proven useful for improving robot navigation. By altering the costmaps used for path planning, robots can navigate around their environment in ways that incorporate additional contexts. Experimental results with user studies have shown altered costmaps to have a significant effect on the interaction, although the costmaps must be carefully tuned to get the desired effect. The new layered costmap algorithm builds on the established open-source navigation platform, creating a robust system that can be extended to handle a wide range of contextual situations.
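The abstract gives no code, but the layered-costmap idea is easy to illustrate. Below is a minimal sketch loosely modelled on the layered costmaps of the open-source ROS navigation stack: each layer writes into a shared master grid, and a contextual layer (here, a hypothetical proxemic layer around a detected person) raises costs in its region. The class names, update rule, and all parameters are illustrative assumptions, not the paper's implementation.

import numpy as np

class StaticLayer:
    """Base layer: costs from a prebuilt static map."""
    def __init__(self, static_map):
        self.static_map = static_map
    def update(self, master):
        np.maximum(master, self.static_map, out=master)  # keep the higher cost

class ProxemicLayer:
    """Contextual layer: inflate cost in a disc around a detected person."""
    def __init__(self, person_xy, radius=5, cost=200):
        self.person_xy, self.radius, self.cost = person_xy, radius, cost
    def update(self, master):
        x, y = np.ogrid[:master.shape[0], :master.shape[1]]
        px, py = self.person_xy
        mask = (x - px) ** 2 + (y - py) ** 2 <= self.radius ** 2
        np.maximum(master, np.where(mask, self.cost, 0).astype(master.dtype),
                   out=master)

def compose(layers, shape):
    """Fold every layer into one master costmap for the path planner."""
    master = np.zeros(shape, dtype=np.int16)
    for layer in layers:
        layer.update(master)
    return master

costmap = compose([StaticLayer(np.zeros((40, 40), np.int16)),
                   ProxemicLayer(person_xy=(20, 20))], (40, 40))

Because each context is isolated in its own layer, new situations can be handled by adding a layer rather than retuning one monolithic cost function, which is the extensibility the abstract emphasizes.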
Human-robot spatial interaction using probabilistic qualitative representations
Current human-aware navigation approaches use a predominantly metric representation
of the interaction which makes them susceptible to changes in the environment. In order
to accomplish reliable navigation in ever-changing human populated environments, the
presented work aims to abstract from the underlying metric representation by using Qualitative
Spatial Relations (QSR), namely the Qualitative Trajectory Calculus (QTC), for
Human-Robot Spatial Interaction (HRSI). So far, this form of representing HRSI has been
used to analyse different types of interactions offline. This work extends this representation
to be able to classify the interaction type online using incrementally updated QTC
state chains, create a belief about the state of the world, and transform this high-level
descriptor into low-level movement commands. By using QSRs the system becomes invariant
to change in the environment, which is essential for any form of long-term deployment
of a robot, but most importantly also allows the transfer of knowledge between similar
encounters in different environments to facilitate interaction learning. To create a robust
qualitative representation of the interaction, the essence of the movement of the human in
relation to the robot and vice-versa is encoded in two new variants of QTC especially designed
for HRSI and evaluated in several user studies. To enable interaction learning and
facilitate reasoning, they are employed in a probabilistic framework using Hidden Markov
Models (HMMs) for online classification and evaluation of their appropriateness for the
task of human-aware navigation.
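For readers unfamiliar with QTC, the basic variant (QTC_B) reduces a dyadic interaction to one qualitative symbol per agent: whether each is moving towards ('-'), away from ('+'), or holding distance to ('0') the other. A minimal sketch of that state computation follows; the dead-band threshold `eps` is an illustrative assumption.

import numpy as np

def qtc_b_state(pos_k, vel_k, pos_l, vel_l, eps=1e-3):
    """Basic QTC (QTC_B) state for two agents k and l.

    Each symbol is '-' (moving towards the other), '0' (holding distance),
    or '+' (moving away), from the sign of the radial speed.
    """
    r = pos_l - pos_k
    u = r / (np.linalg.norm(r) + 1e-9)           # unit vector from k to l

    def symbol(radial_speed):
        if radial_speed > eps:
            return '-'                           # closing in on the other agent
        if radial_speed < -eps:
            return '+'                           # moving away
        return '0'                               # distance roughly constant

    return symbol(np.dot(vel_k, u)), symbol(np.dot(vel_l, -u))

# Example: the human (l) walks towards a stationary robot (k) -> ('0', '-')
print(qtc_b_state(np.zeros(2), np.zeros(2),
                  np.array([2.0, 0.0]), np.array([-1.0, 0.0])))

Chaining such states over time yields the incrementally updated QTC state chains the abstract refers to.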
In order to create a system for an autonomous robot, a perception pipeline for the
detection and tracking of humans in the vicinity of the robot is described which serves
as an enabling technology to create incrementally updated QTC state chains in real-time
using the robot's sensors. Using this framework, the abstraction and generalisability of the
QTC-based framework is tested by using data from a different study for the classification
of automatically generated state chains, which shows the benefits of using such a high-level
description language. The detriment of using qualitative states to encode interaction
is the severe loss of information that would be necessary to generate behaviour from it.
To overcome this issue, so-called Velocity Costmaps are introduced which restrict the
sampling space of a reactive local planner to only allow the generation of trajectories
that correspond to the desired QTC state. This results in a flexible and agile behaviour
generation that is able to produce inherently safe paths. In order to classify the current
interaction type online and predict the current state for action selection, the HMMs are
evolved into a particle filter especially designed to work with QSRs of any kind. This
online belief generation is the basis for a flexible action selection process that is based on
data acquired using Learning from Demonstration (LfD) to encode human judgement into
the used model. Thereby, the generated behaviour is not only sociable but also legible
and ensures a high experienced comfort as shown in the experiments conducted. LfD
itself is a rather underused approach when it comes to human-aware navigation but is
facilitated by the qualitative model and allows exploitation of expert knowledge for model
generation. Hence, the presented work bridges the gap between the speed and flexibility of a sampling-based reactive approach, by using the particle filter and fast action selection, and the legibility of deliberative planners, by using high-level information based on expert knowledge about the unfolding of an interaction.
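The Velocity Costmap idea, restricting a sampling-based local planner to velocities that realise a desired QTC symbol, can be sketched as a simple filter over candidate velocities. The interface and thresholds below are illustrative assumptions, not the thesis implementation.

import numpy as np

def filter_velocity_samples(samples, robot_pos, human_pos,
                            desired_symbol, eps=0.05):
    """Keep only velocity samples consistent with a desired QTC symbol.

    `samples` is an (N, 2) array of candidate (vx, vy) velocities from a
    sampling-based local planner (e.g., a DWA-style sampler).
    """
    u = human_pos - robot_pos
    u = u / (np.linalg.norm(u) + 1e-9)
    radial = samples @ u                      # signed speed towards the human
    if desired_symbol == '-':                 # approach the human
        mask = radial > eps
    elif desired_symbol == '+':               # retreat from the human
        mask = radial < -eps
    else:                                     # '0': hold the current distance
        mask = np.abs(radial) <= eps
    return samples[mask]

# Example: sample random velocities, keep only those that retreat.
samples = np.random.uniform(-1.0, 1.0, size=(100, 2))
safe = filter_velocity_samples(samples, np.zeros(2), np.array([2.0, 0.0]), '+')

Because every surviving sample already realises the desired qualitative state, any trajectory the planner scores is consistent with the interaction model, which is what makes the resulting paths inherently safe.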
TerrainNet: Visual Modeling of Complex Terrain for High-speed, Off-road Navigation
Effective use of camera-based vision systems is essential for robust
performance in autonomous off-road driving, particularly in the high-speed
regime. Despite success in structured, on-road settings, current end-to-end
approaches for scene prediction have yet to be successfully adapted for complex
outdoor terrain. To this end, we present TerrainNet, a vision-based terrain
perception system for semantic and geometric terrain prediction for aggressive,
off-road navigation. The approach relies on several key insights and practical
considerations for achieving reliable terrain modeling. The network includes a
multi-headed output representation to capture fine- and coarse-grained terrain
features necessary for estimating traversability. Accurate depth estimation is
achieved using self-supervised depth completion with multi-view RGB and stereo
inputs. Requirements for real-time performance and fast inference speeds are
met using efficient, learned image feature projections. Furthermore, the model
is trained on a large-scale, real-world off-road dataset collected across a
variety of diverse outdoor environments. We show how TerrainNet can also be
used for costmap prediction and provide a detailed framework for integration
into a planning module. We demonstrate the performance of TerrainNet through
extensive comparison to current state-of-the-art baselines for camera-only
scene prediction. Finally, we showcase the effectiveness of integrating
TerrainNet within a complete autonomous-driving stack by conducting a
real-world vehicle test in a challenging off-road scenario.
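The multi-headed output mentioned above can be pictured as several lightweight decoders sharing one bird's-eye-view feature map. The PyTorch sketch below is an assumption-laden illustration: the head set (semantics, elevation, roughness), channel counts, and class count are ours, not TerrainNet's.

import torch
import torch.nn as nn

class MultiHeadTerrainDecoder(nn.Module):
    """Illustrative multi-headed decoder over a shared BEV feature map."""
    def __init__(self, in_ch=64, num_classes=8):
        super().__init__()
        def head(out_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(in_ch, out_ch, 1))
        self.semantics = head(num_classes)   # coarse per-cell terrain class logits
        self.elevation = head(1)             # per-cell ground height
        self.roughness = head(1)             # fine-grained geometric detail

    def forward(self, bev_feats):            # bev_feats: (B, in_ch, H, W)
        return {"semantics": self.semantics(bev_feats),
                "elevation": self.elevation(bev_feats),
                "roughness": self.roughness(bev_feats)}

# Example: one 128x128 BEV grid of 64-channel features.
outputs = MultiHeadTerrainDecoder()(torch.randn(1, 64, 128, 128))

Separate heads let coarse semantics and fine geometry be supervised with different losses while sharing one expensive feature extractor, which matches the paper's stated need for both fine- and coarse-grained terrain features at real-time rates.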
EVORA: Deep Evidential Traversability Learning for Risk-Aware Off-Road Autonomy
Traversing terrain with good traction is crucial for achieving fast off-road
navigation. Instead of manually designing costs based on terrain features,
existing methods learn terrain properties directly from data via
self-supervision, but challenges remain to properly quantify and mitigate risks
due to uncertainties in learned models. This work efficiently quantifies both
aleatoric and epistemic uncertainties by learning discrete traction
distributions and probability densities of the traction predictor's latent
features. Leveraging evidential deep learning, we parameterize Dirichlet
distributions with the network outputs and propose a novel uncertainty-aware
squared Earth Mover's distance loss with a closed-form expression that improves
learning accuracy and navigation performance. The proposed risk-aware planner
simulates state trajectories with the worst-case expected traction to handle
aleatoric uncertainty, and penalizes trajectories moving through terrain with
high epistemic uncertainty. Our approach is extensively validated in simulation
and on wheeled and quadruped robots, showing improved navigation performance
compared to methods that assume no slip, assume the expected traction, or
optimize for the worst-case expected cost.
Comment: Under review. Journal extension for arXiv:2210.00153. Project website: https://xiaoyi-cai.github.io/evora
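A closed form for an expected squared 1-D Earth Mover's distance under a Dirichlet can be derived from the Dirichlet aggregation property: the partial sum of the first i category probabilities is Beta-distributed, so each expected squared CDF gap decomposes into variance plus squared bias. The PyTorch sketch below illustrates that decomposition; EVORA's exact loss and any regularisation terms may differ.

import torch

def expected_squared_emd(alpha, target_pmf):
    """Expected squared 1-D Earth Mover's distance under a Dirichlet.

    alpha: (B, K) Dirichlet concentrations predicted by the network.
    target_pmf: (B, K) empirical traction distribution (rows sum to 1).
    With F_i the predicted CDF and G_i the target CDF,
    E[(F_i - G_i)^2] = Var[F_i] + (E[F_i] - G_i)^2, where the partial sum
    of the first i categories is Beta(beta_i, alpha0 - beta_i).
    """
    alpha0 = alpha.sum(-1, keepdim=True)            # total concentration
    beta = torch.cumsum(alpha, dim=-1)              # partial concentrations
    mean_cdf = beta / alpha0                        # E[F_i]
    var_cdf = beta * (alpha0 - beta) / (alpha0.pow(2) * (alpha0 + 1.0))
    target_cdf = torch.cumsum(target_pmf, dim=-1)   # G_i
    per_bin = var_cdf + (mean_cdf - target_cdf).pow(2)
    return per_bin.sum(-1).mean()                   # average over the batch

# Example with K = 4 discrete traction bins.
loss = expected_squared_emd(torch.rand(8, 4) + 0.1,
                            torch.softmax(torch.randn(8, 4), dim=-1))

The variance term penalises diffuse (low-evidence) Dirichlets as well as mislocated means, which is one way a loss of this form can improve both learning accuracy and downstream risk-aware planning.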