Counterfactual Reasoning about Intent for Interactive Navigation in Dynamic Environments
Many modern robotics applications require robots to function autonomously in
dynamic environments that include other decision-making agents, such as people
or other robots. This calls for fast, scalable interactive motion planning,
which in turn requires models that account for the other agents' intended
actions in one's own planning. We present a real-time motion planning framework
that brings together several key components, including intention inference by
reasoning counterfactually about the potential motion of other agents as they
work towards different goals. By using a lightweight motion model, we achieve
efficient iterative planning for fluid motion when avoiding pedestrians, in
parallel with goal inference for longer range movement prediction. This
inference framework is coupled with a novel distributed visual tracking method
that provides reliable and robust models for the current belief-state of the
monitored environment. This combined approach represents a computationally
efficient alternative to previously studied policy learning methods that often
require significant offline training or calibration and do not yet scale to
densely populated environments. We validate this framework with experiments
involving multi-robot and human-robot navigation. We further validate the
tracker component separately on much larger scale unconstrained pedestrian data
sets.
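The counterfactual goal inference described above can be illustrated as a recursive Bayesian update: each observed step of an agent is compared against the counterfactual step it would have taken if heading straight for each candidate goal. This is a minimal sketch under assumptions, not the paper's implementation; the function name, the straight-line motion model, and the concentration parameter `kappa` are all illustrative.

```python
import numpy as np

def goal_posterior(positions, goals, prior=None, kappa=4.0):
    """Infer a belief over candidate goals from an observed trajectory.

    The counterfactual assumption: an agent heading for goal g moves
    straight towards it. Each observed step is scored by how well its
    direction matches the direction to g (von Mises-style weighting
    with concentration kappa), and the belief over goals is updated
    recursively with Bayes' rule.
    """
    goals = np.asarray(goals, dtype=float)
    if prior is None:
        belief = np.full(len(goals), 1.0 / len(goals))
    else:
        belief = np.asarray(prior, dtype=float)
    for p0, p1 in zip(positions[:-1], positions[1:]):
        step = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
        if np.linalg.norm(step) < 1e-9:
            continue  # agent stood still: no directional evidence
        step_dir = step / np.linalg.norm(step)
        to_goal = goals - np.asarray(p0, dtype=float)
        to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True)
        likelihood = np.exp(kappa * to_goal @ step_dir)  # higher when aligned
        belief = belief * likelihood
        belief /= belief.sum()  # renormalise the posterior
    return belief
```

For an agent observed moving right, the belief quickly concentrates on a goal lying to the right rather than one lying above, which is the behaviour the abstract's "as fast as 100 ms" claim relies on: a few observations already discriminate between goals.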
Intention prediction for interactive navigation in distributed robotic systems
Modern applications of mobile robots require the ability to navigate safely and
effectively in human environments. New challenges arise when these
robots must plan their motion in a human-aware fashion. Current methods
addressing this problem have focused mainly on the activity forecasting aspect,
aiming at improving predictions without considering the active nature of the
interaction, i.e. the robot’s effect on the environment and consequent issues such as
reciprocity. Furthermore, many methods rely on computationally expensive offline
training of predictive models that may not be well suited to rapidly evolving
dynamic environments.
This thesis presents a novel approach for enabling autonomous robots to navigate
socially in environments with humans. Following formulations of the inverse
planning problem, agents reason about the intentions of other agents and make
predictions about their future interactive motion. A technique is proposed to
implement counterfactual reasoning over a parametrised set of lightweight
reciprocal motion models, thus making it more tractable to maintain beliefs over the
future trajectories of other agents towards plausible goals. The speed of inference
and the effectiveness of the algorithms are demonstrated via physical robot
experiments, where computationally constrained robots navigate amongst humans
in a distributed multi-sensor setup, inferring other agents’ intentions as
little as 100 ms after the first observation.
While intention inference is a key aspect of successful human-robot interaction,
executing any task requires planning that takes into account the predicted goals and
trajectories of other agents, e.g., pedestrians. It is well known that robots
demonstrate unwanted behaviours, such as freezing or becoming sluggishly
responsive, when placed in dynamic and cluttered environments, because safety
margins derived from simple heuristics end up covering the entire feasible
space of motion. The presented approach makes more refined predictions
about future movement, which enables robots to find collision-free paths quickly
and efficiently.
This thesis describes a novel technique for generating "interactive costmaps", a
representation of the planner’s costs and rewards across time and space, providing
an autonomous robot with the information required to navigate socially given the
estimate of other agents’ intentions. This multi-layered costmap deters the robot from
obstructing other agents while encouraging social navigation respectful of their activity.
Results show that this approach minimises collisions and near-collisions,
reduces travel times for agents, and importantly incurs the same computational
cost as the most common costmap alternatives for navigation.
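The layered structure described above can be sketched as a static obstacle layer combined with a social layer that penalises cells along other agents' predicted paths, discounting predictions further into the future. This is an illustrative simplification, not the thesis's implementation; the function name and the `social_cost`, `decay`, and `radius` parameters are assumptions.

```python
import numpy as np

def interactive_costmap(shape, static_obstacles, predicted_paths,
                        social_cost=50.0, decay=0.8, radius=2):
    """Build a layered costmap on a grid.

    static_obstacles: list of (row, col) cells that are lethal.
    predicted_paths: one list of (row, col) waypoints per tracked agent,
    ordered by prediction step; later steps are less certain, so their
    cost is geometrically discounted by `decay`.
    """
    H, W = shape
    static_layer = np.zeros(shape)
    for (r, c) in static_obstacles:
        static_layer[r, c] = np.inf  # lethal: never traverse
    social_layer = np.zeros(shape)
    for path in predicted_paths:           # one path per tracked agent
        for t, (r, c) in enumerate(path):  # t = prediction step
            w = social_cost * decay ** t
            r0, r1 = max(0, r - radius), min(H, r + radius + 1)
            c0, c1 = max(0, c - radius), min(W, c + radius + 1)
            # keep the maximum penalty where footprints overlap
            social_layer[r0:r1, c0:c1] = np.maximum(
                social_layer[r0:r1, c0:c1], w)
    return static_layer + social_layer
```

Because the social layer is just another additive grid, a standard grid planner can consume the result unchanged, which matches the abstract's claim of equal computational cost to common costmap alternatives.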
A key part of the practical deployment of such technologies is their ease of
implementation and configuration. Since every use case and environment is
different, the presented methods use online adaptation to learn
parameters of the navigating agents during runtime. Furthermore, this thesis
includes a novel technique for allocating tasks in distributed robotics systems,
where a tool is provided to maximise the performance on any distributed setup by
automatic parameter tuning. All of these methods are implemented in ROS and
distributed as open-source. The ultimate aim is to provide an accessible and efficient
framework that may be seamlessly deployed on modern robots, enabling
widespread use of intention prediction for interactive navigation in distributed
robotic systems.
Value Propagation Networks
We present Value Propagation (VProp), a set of parameter-efficient
differentiable planning modules built on Value Iteration which can be
trained with reinforcement learning to solve unseen tasks, generalize to
larger map sizes, and learn to navigate in dynamic environments. We show
that the modules enable learning to plan when the
environment also includes stochastic elements, providing a cost-efficient
learning system to build low-level size-invariant planners for a variety of
interactive navigation problems. We evaluate on static and dynamic
configurations of MazeBase grid-worlds, with randomly generated environments of
several different sizes, and on a StarCraft navigation scenario, with more
complex dynamics, and pixels as input.
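VProp builds on the classic Value Iteration recurrence, V(s) ← r(s) + γ·max over neighbours of V(s'), which its modules unroll as a differentiable network. A plain, non-differentiable version of that recurrence on a 4-connected grid can be sketched as follows; this is the underlying dynamic programme, not the VProp module itself, and the function name and parameters are assumptions.

```python
import numpy as np

def grid_value_iteration(reward, obstacles, gamma=0.95, iters=50):
    """Classic value iteration on a 4-connected grid.

    reward: (H, W) array of per-cell rewards.
    obstacles: (H, W) boolean mask of untraversable cells.
    Each sweep applies V(s) <- r(s) + gamma * max_neighbour V(s').
    """
    H, W = reward.shape
    V = np.zeros_like(reward, dtype=float)
    for _ in range(iters):
        # best neighbour value via shifted copies (-inf past the walls)
        shifted = np.full((4, H, W), -np.inf)
        shifted[0, 1:, :] = V[:-1, :]   # value of the cell to the north
        shifted[1, :-1, :] = V[1:, :]   # value of the cell to the south
        shifted[2, :, 1:] = V[:, :-1]   # value of the cell to the west
        shifted[3, :, :-1] = V[:, 1:]   # value of the cell to the east
        V = reward + gamma * shifted.max(axis=0)
        V[obstacles] = -np.inf          # obstacles never accumulate value
    return V
```

A VProp-style module replaces the fixed reward and transition structure here with learned, convolution-like operators, so the same sweep pattern applies to any map size, which is where the size-invariance claimed in the abstract comes from.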
Providing and assessing intelligible explanations in autonomous driving
Intelligent vehicles with automated driving functionalities provide many benefits, but also instigate serious concerns around human safety and trust. While the automotive industry has devoted enormous resources to realising vehicle autonomy, it remains uncertain whether the technology will be widely adopted by society. Autonomous vehicles (AVs) are complex systems, and in challenging driving scenarios, they are likely to make decisions that could be confusing to end-users. The provision of explanations is generally put forward as a way to bridge the gap between this technology and end-users. While explanations are considered helpful, this thesis argues that explanations must also be intelligible (as obligated by GDPR Article 12) to the intended stakeholders, and should make causal attributions in order to foster confidence and trust in end-users. Moreover, the methods for generating these explanations should be transparent for easy audit. To substantiate this argument, the thesis proceeds in four steps: First, we adopted a mixed-method approach (in a user study) to elicit passengers' requirements for effective explainability in diverse autonomous driving scenarios. Second, we explored different representations, data structures and driving data annotation schemes to facilitate intelligible explanation generation and general explainability research in autonomous driving. Third, we developed transparent algorithms for post-hoc explanation generation. These algorithms were tested within a collision risk assessment case study and an AV navigation case study, using the Lyft Level5 dataset and our new SAX dataset, a dataset that we have introduced for AV explainability research.
Fourth, we deployed these algorithms in an immersive physical simulation environment and assessed (in a lab study) the impact of the generated explanations on passengers' perceived safety while varying the prediction accuracy of an AV's perception system and the specificity of the explanations. The thesis concludes by providing recommendations needed for the realisation of more effective explainable autonomous driving, and proposes a future research agenda.
Assisting Designers in the Anticipation of Future Product Use
In this paper, we present theories from past decades describing interactions between designers and users, and a state of the art of methods and tools to support these interactions in user-centred design. We discuss related methodological issues as a first step toward the introduction of new methods to assist user-centred design, to avoid uses of the product which might have undesirable consequences, while leaving margins allowing users to adapt to the situation and potentially introduce further innovations within the product. Lastly, we discuss the concept of unforeseen use and introduce creativity methods to help designers anticipate these uses.
Preliminary Recommendations for the Collection, Storage, and Analysis of UAS Safety Data
Although the use of UASs in military and public service operations is proliferating, civilian use of UASs remains limited in the United States today. With efforts underway to accommodate and integrate UASs into the NAS, a proactive understanding of safety issues, i.e., the unique hazards and the corresponding risks that UASs pose not only through their operations for commercial purposes, but also to existing operations in the NAS, is especially important so as to (a) support the development of a sound regulatory basis, (b) regulate, design and properly equip UASs, and (c) effectively mitigate the risks posed. Data, especially about system and component failures, incidents, and accidents, provides valuable insight into how performance and operational capabilities/limitations contribute to hazards. Since the majority of UAS operations today take place in a context that is significantly different from the norm in civil aviation, i.e., with different operational goals and standards, identifying what constitutes useful and sufficient data on UASs and their operations is a substantial research challenge.
EXPLICIT RULE LEARNING: A COGNITIVE TUTORIAL METHOD TO TRAIN USERS OF ARTIFICIAL INTELLIGENCE/MACHINE LEARNING SYSTEMS
Today’s intelligent software systems, such as Artificial Intelligence/Machine Learning systems, are sophisticated, complicated, sometimes complex systems. In order to effectively interact with these systems, novice users need to have a certain level of understanding. An awareness of a system’s underlying principles, rationale, logic, and goals can enhance the synergistic human-machine interaction. It also benefits the user to know when they can trust the systems’ output, and to discern boundary conditions that might change the output. The purpose of this research is to empirically test the viability of a Cognitive Tutorial approach, called Explicit Rule Learning. Several approaches have been used to train humans in intelligent software systems; one of them is exemplar-based training. Although there has been some success, depending on the structure of the system, there are limitations to exemplars, which oftentimes are post hoc and case-based. Explicit Rule Learning is a global and rule-based training method that incorporates exemplars, but goes beyond specific cases. It provides learners with rich, robust mental models and the ability to transfer the learned skills to novel, previously unencountered situations. Learners are given verbalizable, probabilistic if...then statements, supplemented with exemplars. This is followed up with a series of practice problems, to which learners respond and receive immediate feedback on their correctness. The expectation is that this method will result in a refined representation of the system’s underlying principles, and a richer and more robust mental model that will enable the learner to simulate future states. Preliminary research helped to evaluate and refine Explicit Rule Learning. The final study in this research applied Explicit Rule Learning to a more real-world system, autonomous driving. The mixed-method within-subject study used a more naturalistic environment. 
Participants were given training material using the Explicit Rule Learning method and were subsequently tested on their ability to predict the autonomous vehicle’s actions. The results indicate that the participants trained with the Explicit Rule Learning method were more proficient at predicting the autonomous vehicle’s actions. These results, together with the results of preceding studies, indicate that Explicit Rule Learning is an effective method to accelerate the proficiency of learners of intelligent software systems. Explicit Rule Learning is a low-cost training intervention that can be adapted to many intelligent software systems, including the many types of AI/ML systems in today’s world.
Taxonomy of Trust-Relevant Failures and Mitigation Strategies
We develop a taxonomy that categorizes HRI failure types and their impact on trust, in order to structure the broad range of knowledge contributions. We further identify research gaps to support fellow researchers in the development of trustworthy robots. Trust repair in HRI has only recently received wider attention, and we propose a taxonomy of potential trust violations and suitable repair strategies to support researchers during the development of interaction scenarios. The taxonomy distinguishes four failure types: Design, System, Expectation, and User failures, and outlines potential mitigation strategies. Based on these failures, strategies for autonomous failure detection and repair are presented, employing explanation, verification and validation techniques. Finally, a research agenda for HRI is outlined, discussing identified gaps related to the relation between failures and HR-trust.
Solving the Task Variant Allocation Problem in Distributed Robotics
We consider the problem of assigning software processes (or tasks) to hardware processors in distributed robotics environments. We introduce the notion of a task variant, which supports the adaptation of software to specific hardware configurations. Task variants facilitate the trade-off of functional quality versus the requisite capacity and type of target execution processors. We formalise the problem of assigning task variants to processors as a mathematical model that incorporates typical constraints found in robotics applications; the model is a constrained form of a multi-objective, multi-dimensional, multiple-choice knapsack problem. We propose and evaluate three different solution methods to the problem: constraint programming, a constructive greedy heuristic and a local search metaheuristic. Furthermore, we demonstrate the use of task variants in a real instance of a distributed interactive multi-agent navigation system, showing that our best solution method (constraint programming) improves the system’s quality of service, as compared to the local search metaheuristic, the greedy heuristic and a randomised solution, by an average of 16%, 31% and 56%, respectively.
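The constructive greedy heuristic mentioned above can be illustrated with a simplified multiple-choice knapsack model: each task offers several variants trading functional quality against CPU demand, and the heuristic places the highest-quality variant that still fits on an eligible processor. This is a minimal sketch; the data layout, the ordering rule, and the function name are assumptions, not the paper's actual formulation.

```python
def greedy_allocate(tasks, capacity):
    """Constructive greedy heuristic for task variant allocation.

    tasks: {task: [(variant, quality, cpu_demand, eligible_procs)]}
    capacity: {proc: available_cpu}
    Returns {task: (variant, proc)}, or None if some task cannot be
    placed under this heuristic (which, unlike constraint programming,
    gives no optimality or completeness guarantee).
    """
    remaining = dict(capacity)
    allocation = {}
    # place the most demanding tasks first to reduce fragmentation
    order = sorted(tasks, key=lambda t: -min(v[2] for v in tasks[t]))
    for task in order:
        # try variants from highest quality downwards
        for variant, quality, demand, eligible in sorted(
                tasks[task], key=lambda v: -v[1]):
            proc = next((p for p in eligible if remaining[p] >= demand), None)
            if proc is not None:
                allocation[task] = (variant, proc)
                remaining[proc] -= demand
                break
        else:
            return None  # no variant of this task fits anywhere
    return allocation
```

When capacity is tight, the heuristic degrades a task to a cheaper, lower-quality variant rather than failing outright, which is exactly the quality-versus-capacity trade-off that task variants are introduced to express.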