Human-robot spatial interaction using probabilistic qualitative representations
Current human-aware navigation approaches use a predominantly metric representation
of the interaction, which makes them susceptible to changes in the environment. In order
to accomplish reliable navigation in ever-changing human populated environments, the
presented work aims to abstract from the underlying metric representation by using Qualitative
Spatial Relations (QSR), namely the Qualitative Trajectory Calculus (QTC), for
Human-Robot Spatial Interaction (HRSI). So far, this form of representing HRSI has been
used to analyse different types of interactions online. This work extends this representation
to be able to classify the interaction type online using incrementally updated QTC
state chains, create a belief about the state of the world, and transform this high-level
descriptor into low-level movement commands. By using QSRs the system becomes invariant
to change in the environment, which is essential for any form of long-term deployment
of a robot, but most importantly also allows the transfer of knowledge between similar
encounters in different environments to facilitate interaction learning. To create a robust
qualitative representation of the interaction, the essence of the movement of the human in
relation to the robot and vice-versa is encoded in two new variants of QTC especially designed
for HRSI and evaluated in several user studies. To enable interaction learning and
facilitate reasoning, they are employed in a probabilistic framework using Hidden Markov
Models (HMMs) for online classification and evaluation of their appropriateness for the
task of human-aware navigation.
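The qualitative encoding at the heart of this representation can be illustrated with a minimal sketch of the basic QTC variant: each agent's movement relative to the other is reduced to a symbol '-' (approaching), '+' (receding) or '0' (stable). The function name and the `eps` threshold below are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def qtc_b_state(k_prev, k_now, l_prev, l_now, eps=1e-3):
    """Derive a QTC-Basic state (two symbols) for agents k and l.

    Each symbol is '-' (moving towards the other), '+' (moving away)
    or '0' (stable), based on the change in distance to the other
    agent's position at the previous time step.
    """
    def symbol(prev, now, other):
        d_prev = np.linalg.norm(prev - other)
        d_now = np.linalg.norm(now - other)
        if d_now < d_prev - eps:
            return '-'
        if d_now > d_prev + eps:
            return '+'
        return '0'

    k_prev, k_now = np.asarray(k_prev, float), np.asarray(k_now, float)
    l_prev, l_now = np.asarray(l_prev, float), np.asarray(l_now, float)
    return (symbol(k_prev, k_now, l_prev), symbol(l_prev, l_now, k_prev))

# Two agents approaching each other head-on yield the state ('-', '-')
state = qtc_b_state([0, 0], [0.1, 0], [2, 0], [1.9, 0])
```

A chain of such states over successive time steps is what the classification machinery described above operates on.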
In order to create a system for an autonomous robot, a perception pipeline for the
detection and tracking of humans in the vicinity of the robot is described which serves
as an enabling technology to create incrementally updated QTC state chains in real-time
using the robot's sensors. Using this framework, the abstraction and generalisability of the
QTC-based framework is tested using data from a different study for the classification
of automatically generated state chains, which shows the benefits of using such a high-level
description language. The detriment of using qualitative states to encode interaction
is the severe loss of information that would be necessary to generate behaviour from it.
To overcome this issue, so-called Velocity Costmaps are introduced which restrict the
sampling space of a reactive local planner to only allow the generation of trajectories
that correspond to the desired QTC state. This results in a flexible and agile behaviour
generation that is able to produce inherently safe paths. In order to classify the current
interaction type online and predict the current state for action selection, the HMMs are
evolved into a particle filter especially designed to work with QSRs of any kind. This
online belief generation is the basis for a flexible action selection process that is based on
data acquired using Learning from Demonstration (LfD) to encode human judgement into
the used model. Thereby, the generated behaviour is not only sociable but also legible
and ensures a high level of experienced comfort, as shown in the conducted experiments. LfD
itself is a rather underused approach when it comes to human-aware navigation but is
facilitated by the qualitative model and allows exploitation of expert knowledge for model
generation. Hence, the presented work bridges the gap between the speed and flexibility
of a sampling based reactive approach by using the particle filter and fast action selection,
and the legibility of deliberative planners by using high-level information based on expert
knowledge about the unfolding of an interaction.
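The Velocity Costmap idea of restricting a sampling-based local planner can be sketched minimally: candidate velocity samples are kept only if a short rollout produces the distance change demanded by the desired QTC symbol for the robot. The function name, rollout horizon `dt` and threshold `eps` are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def filter_velocity_samples(samples, robot_pos, human_pos,
                            desired_symbol, dt=0.5, eps=1e-3):
    """Keep only (vx, vy) samples whose short rollout matches the desired
    QTC symbol for the robot: '-' approach, '+' retreat, '0' hold distance."""
    robot_pos = np.asarray(robot_pos, float)
    human_pos = np.asarray(human_pos, float)
    d_now = np.linalg.norm(robot_pos - human_pos)
    kept = []
    for v in samples:
        nxt = robot_pos + dt * np.asarray(v, float)  # one-step rollout
        d_next = np.linalg.norm(nxt - human_pos)
        if desired_symbol == '-' and d_next < d_now - eps:
            kept.append(v)
        elif desired_symbol == '+' and d_next > d_now + eps:
            kept.append(v)
        elif desired_symbol == '0' and abs(d_next - d_now) <= eps:
            kept.append(v)
    return kept

samples = [(0.5, 0.0), (-0.5, 0.0), (0.0, 0.5)]
kept = filter_velocity_samples(samples, [0, 0], [2, 0], '-')
```

A reactive planner such as DWA would then score and execute only the surviving samples, so every generated trajectory is consistent with the commanded QTC state.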
Contextualized Robot Navigation
In order to improve the interaction between humans and robots, robots need to be able to move about in a way that is appropriate to the complex environments around them. One way to investigate how robots should move is through the lens of theatre, which provides us with ways to analyze the robot's movements and the motivations for moving in particular ways. In particular, this has proven useful for improving robot navigation. By altering the costmaps used for path planning, robots can navigate around their environment in ways that incorporate additional contexts. Experimental results with user studies have shown altered costmaps to have a significant effect on the interaction, although the costmaps must be carefully tuned to get the desired effect. The new layered costmap algorithm builds on the established open-source navigation platform, creating a robust system that can be extended to handle a wide range of contextual situations.
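The layered costmap mechanism referred to above can be sketched minimally: each contextual layer writes costs into a master grid, here combined with a per-cell maximum, one common update policy. The layer names and cost values below are illustrative assumptions.

```python
import numpy as np

def combine_layers(shape, layers):
    """Combine costmap layers into a master grid by taking the per-cell
    maximum, so the most restrictive layer dominates each cell."""
    master = np.zeros(shape, dtype=np.uint8)
    for layer in layers:
        master = np.maximum(master, layer)
    return master

# Hypothetical layers: a static obstacle map and a social-proxemics layer
static = np.zeros((4, 4), np.uint8)
static[1, 1] = 254                      # lethal obstacle cost
proxemic = np.zeros((4, 4), np.uint8)
proxemic[1:3, 1:3] = 100                # soft cost around a person
master = combine_layers((4, 4), [static, proxemic])
```

Adding a new context then only means adding a new layer; the planner keeps consuming a single master grid, which is what makes the approach extensible.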
Intention prediction for interactive navigation in distributed robotic systems
Modern applications of mobile robots require them to have the ability to safely and
effectively navigate in human environments. New challenges arise when these
robots must plan their motion in a human-aware fashion. Current methods
addressing this problem have focused mainly on the activity forecasting aspect,
aiming at improving predictions without considering the active nature of the
interaction, i.e. the robot’s effect on the environment and consequent issues such as
reciprocity. Furthermore, many methods rely on computationally expensive offline
training of predictive models that may not be well suited to rapidly evolving
dynamic environments.
This thesis presents a novel approach for enabling autonomous robots to navigate
socially in environments with humans. Following formulations of the inverse
planning problem, agents reason about the intentions of other agents and make
predictions about their future interactive motion. A technique is proposed to
implement counterfactual reasoning over a parametrised set of light-weight
reciprocal motion models, thus making it more tractable to maintain beliefs over the
future trajectories of other agents towards plausible goals. The speed of inference
and the effectiveness of the algorithms is demonstrated via physical robot
experiments, where computationally constrained robots navigate amongst humans
in a distributed multi-sensor setup, able to infer other agents’ intentions as fast as
100ms after the first observation.
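The inverse-planning intuition described above, inferring intentions from observed motion, can be sketched as a Bayesian update over candidate goals: steps that make progress toward a goal raise that goal's posterior under a softmax-rationality likelihood. The `beta` rationality parameter and function name are assumptions for illustration, not the thesis's exact model.

```python
import numpy as np

def update_goal_belief(belief, agent_prev, agent_now, goals, beta=2.0):
    """One Bayesian update of a belief over goals: the likelihood of the
    observed step grows exponentially with the progress it makes toward
    each candidate goal (softmax-rational agent model)."""
    agent_prev = np.asarray(agent_prev, float)
    agent_now = np.asarray(agent_now, float)
    goals = np.asarray(goals, float)
    # Progress = reduction in distance to each goal over this step
    progress = np.array([np.linalg.norm(agent_prev - g)
                         - np.linalg.norm(agent_now - g) for g in goals])
    likelihood = np.exp(beta * progress)
    posterior = belief * likelihood
    return posterior / posterior.sum()

goals = [[5.0, 0.0], [-5.0, 0.0]]
belief = np.array([0.5, 0.5])
# The agent steps toward the first goal, so its posterior rises
belief = update_goal_belief(belief, [0, 0], [0.5, 0], goals)
```

Because each update is a closed-form reweighting rather than an expensive trajectory optimisation, such light-weight models are consistent with the fast inference times reported above.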
While intention inference is a key aspect of successful human-robot interaction,
executing any task requires planning that takes into account the predicted goals and
trajectories of other agents, e.g., pedestrians. It is well known that robots
demonstrate unwanted behaviours, such as freezing or becoming sluggishly
responsive, when placed in dynamic and cluttered environments, because safety
margins derived from simple heuristics end up covering the entire feasible
space of motion. The presented approach makes more refined predictions
about future movement, which enables robots to find collision-free paths quickly
and efficiently.
This thesis describes a novel technique for generating "interactive costmaps", a
representation of the planner’s costs and rewards across time and space, providing
an autonomous robot with the information required to navigate socially given the
estimate of other agents' intentions. This multi-layered costmap deters the robot from
obstructing other agents while encouraging social navigation respectful of their activity.
Results show that this approach minimises collisions and near-collisions, minimises
travel times for agents, and importantly offers the same computational cost as the
most common costmap alternatives for navigation.
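The notion of a cost defined across time as well as space can be sketched as follows: a cell's social cost at time t depends on how close it lies to a pedestrian's predicted position at that time. The linear falloff, radius and peak value below are illustrative assumptions, not the thesis's exact cost model.

```python
import numpy as np

def interactive_cost(cell, t, predicted_traj, radius=1.0, peak=200.0):
    """Time-indexed social cost: a cell is expensive when it lies near the
    position a tracked pedestrian is predicted to occupy at time t, and the
    cost falls off linearly to zero at `radius` metres."""
    pred = np.asarray(predicted_traj[t], float)
    d = np.linalg.norm(np.asarray(cell, float) - pred)
    return peak * max(0.0, 1.0 - d / radius)

# Hypothetical predicted pedestrian positions at discrete time steps
traj = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
c_now = interactive_cost((1.0, 0.0), 1, traj)    # on the prediction at t=1
c_later = interactive_cost((1.0, 0.0), 2, traj)  # pedestrian has moved on
```

The same cell is thus expensive only while the pedestrian is predicted to occupy it, which lets the planner cut behind a person rather than detouring around their entire future path.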
A key part of the practical deployment of such technologies is their ease of
implementation and configuration. Since every use case and environment is
different and distinct, the presented methods use online adaptation to learn
parameters of the navigating agents during runtime. Furthermore, this thesis
includes a novel technique for allocating tasks in distributed robotics systems,
where a tool is provided to maximise the performance on any distributed setup by
automatic parameter tuning. All of these methods are implemented in ROS and
distributed as open-source. The ultimate aim is to provide an accessible and efficient
framework that may be seamlessly deployed on modern robots, enabling
widespread use of intention prediction for interactive navigation in distributed
robotic systems.
EVORA: Deep Evidential Traversability Learning for Risk-Aware Off-Road Autonomy
Traversing terrain with good traction is crucial for achieving fast off-road
navigation. Instead of manually designing costs based on terrain features,
existing methods learn terrain properties directly from data via
self-supervision, but challenges remain to properly quantify and mitigate risks
due to uncertainties in learned models. This work efficiently quantifies both
aleatoric and epistemic uncertainties by learning discrete traction
distributions and probability densities of the traction predictor's latent
features. Leveraging evidential deep learning, we parameterize Dirichlet
distributions with the network outputs and propose a novel uncertainty-aware
squared Earth Mover's distance loss with a closed-form expression that improves
learning accuracy and navigation performance. The proposed risk-aware planner
simulates state trajectories with the worst-case expected traction to handle
aleatoric uncertainty, and penalizes trajectories moving through terrain with
high epistemic uncertainty. Our approach is extensively validated in simulation
and on wheeled and quadruped robots, showing improved navigation performance
compared to methods that assume no slip, assume the expected traction, or
optimize for the worst-case expected cost.
Comment: Under review; journal extension of arXiv:2210.00153. Project website: https://xiaoyi-cai.github.io/evora
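The squared Earth Mover's distance at the core of the proposed loss has a simple closed form for ordered 1-D bins: it equals the sum of squared differences between the cumulative distributions. The sketch below applies it only to the Dirichlet mean distribution; the paper's full uncertainty-aware loss contains additional terms not shown here, and the `softplus`-free parameterisation is an assumption.

```python
import numpy as np

def squared_emd_loss(alpha, target):
    """Squared Earth Mover's distance between the mean of a Dirichlet
    distribution (alpha / alpha.sum()) and a target histogram. For 1-D
    ordered bins this reduces to the sum of squared CDF differences."""
    alpha = np.asarray(alpha, float)
    p = alpha / alpha.sum()               # Dirichlet mean distribution
    diff = np.cumsum(p) - np.cumsum(np.asarray(target, float))
    return float(np.sum(diff ** 2))

# Predicted concentrations favour the high-traction bin; target is one-hot
loss = squared_emd_loss([1.0, 1.0, 8.0], [0.0, 0.0, 1.0])
```

Unlike a cross-entropy loss, this distance penalises mass placed in far-away traction bins more than mass in adjacent bins, which is why it suits ordered traction values.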
The ILIAD Safety Stack: Human-Aware Infrastructure-Free Navigation of Industrial Mobile Robots
Safe yet efficient operation of professional service robots within logistics or production in human-robot shared environments requires a flexible human-aware navigation stack. In this manuscript, we propose the ILIAD safety stack, comprising software and hardware designed to achieve safe and efficient motion specifically for industrial vehicles with nontrivial kinematics. The stack integrates five interconnected layers for autonomous motion planning and control to enable short- and long-term reasoning. The use-case scenario tested requires an autonomous industrial forklift to safely navigate among pick-and-place locations during normal daily activities involving human workers. Our real-world test-bed consists of a three-day experiment in a food distribution warehouse. The evaluation is extended in simulation with an ablation study of the impact of different layers to show both the practical and the performance-related impact. The experimental results show a safer and more legible robot when humans are nearby, with a trade-off in task efficiency, and that not all layers have the same degree of impact on the system.
Towards Safer Robot Motion: Using a Qualitative Motion Model to Classify Human-Robot Spatial Interaction
For adoption of Autonomous Mobile Robots (AMR) across a breadth of industries, they must navigate around humans in a way which is safe and which humans perceive as safe, without greatly compromising efficiency. This work aims to classify the Human-Robot Spatial Interaction (HRSI) situation of an interacting human and robot, to be applied in Human-Aware Navigation (HAN) to account for situational context. We develop qualitative probabilistic models of relative human and robot movements in various HRSI situations to classify situations, and explain our plan to develop per-situation probabilistic models of socially legible HRSI to predict human and robot movement. In future work we aim to use these predictions to generate qualitative constraints in the form of metric cost-maps for local robot motion planners, enforcing more efficient and socially legible trajectories which are both physically safe and perceived as safe.
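The planned per-situation classification can be sketched with toy HMMs: an observed QTC state chain is scored under each situation's model with the forward algorithm, and the best-scoring situation wins. The situation names and all model parameters below are illustrative assumptions, not learned from data.

```python
import numpy as np

def chain_log_likelihood(chain, start, trans, emit):
    """Log-likelihood of an observed QTC symbol chain under one
    situation's HMM, computed with the scaled forward algorithm."""
    alpha = start * emit[:, chain[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for obs in chain[1:]:
        alpha = (alpha @ trans) * emit[:, obs]
        log_p += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_p

def classify_situation(chain, models):
    """Return the situation whose HMM explains the chain best."""
    return max(models, key=lambda name: chain_log_likelihood(chain, *models[name]))

# Two toy situation models over 2 hidden states and 2 observed QTC symbols
models = {
    "pass_by": (np.array([0.5, 0.5]),
                np.array([[0.9, 0.1], [0.1, 0.9]]),   # sticky transitions
                np.array([[0.9, 0.1], [0.1, 0.9]])),  # emissions
    "cross":   (np.array([0.5, 0.5]),
                np.array([[0.5, 0.5], [0.5, 0.5]]),
                np.array([[0.1, 0.9], [0.9, 0.1]])),
}
label = classify_situation([0, 0, 0, 0], models)
```

Because the forward recursion is incremental, the classification can be refreshed after every new QTC state, which suits the online setting described above.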
Safe Human-Robot Interaction in Agriculture
Robots in agricultural contexts are finding a growing number of applications in (partial) automation for increased productivity. However, this presents complex technical problems to be overcome, which are magnified when these robots are intended to work side-by-side with human workers. In this contribution we present an exploratory pilot study to characterise interactions between a robot performing an in-field transportation task and human fruit pickers. Partly an effort to inform the development of a fully autonomous system, the emphasis is on involving the key stakeholders (i.e. the pickers themselves) in the process so as to maximise the potential impact of such an application.