
    Towards explainable AI: directed inference of linear temporal logic constraints

    Many systems in robotics and beyond may be classified as mixed logical-dynamical (MLD) systems. These systems are subject to both logical constraints, which govern their safe operation and goals; and dynamical constraints, which describe their physical behavior. These time-dependent constraints can be described with linear temporal logic (LTL). In the case where the constraints are not known, their inference offers a type of explanation for their behavior. Previous work has attempted to infer constraints for MLD systems by Bayesian methods, searching for optimally contrastive rules between "good" and "bad" system runs. However, due to a reliance on an unknown prior distribution, as well as a limited search space, these efforts are unable to recover all desired constraints. We propose an alternative inference method called directed hypothesis space generation (DHSG). DHSG compares each system run and constructs a full hypothesis space of all conjunctions and disjunctions of the desired LTL formula types. In simulation, DHSG recovered a full hypothesis space for each test case. However, due to a comparatively high computational demand, it also exhibited run times which increased significantly with state space complexity. The computational load was lightened by limiting the length of inferred formulas, at the cost of hypothesis space completeness. However, the adjustable computation time of the Bayesian approach means that it retains an advantage under some use cases. Finally, for scenarios in which neither the LTL rules are known, nor the state-space regions they govern, DHSG has potential to construct the unknown regions. This approach would give a basis on which to perform further inference. Region construction would apply to lesser-understood systems and presents a topic for future work.
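    The contrastive filtering idea described above can be illustrated with a minimal sketch: enumerate a hypothesis space of simple LTL templates over the system's propositions and keep the formulas that every "good" run satisfies and at least one "bad" run violates. The templates, proposition names, and toy traces below are illustrative assumptions, not the paper's DHSG implementation.

```python
# Minimal sketch (not the authors' DHSG code): enumerate simple LTL templates
# over finite traces and keep those consistent with "good" runs and contrastive
# against "bad" runs. Propositions and runs are illustrative toy data.
from itertools import combinations

def holds_globally(trace, p):
    """G p over a finite trace: p holds in every state."""
    return all(p in state for state in trace)

def holds_eventually(trace, p):
    """F p over a finite trace: p holds in some state."""
    return any(p in state for state in trace)

def holds_response(trace, p, q):
    """G(p -> F q): every occurrence of p is followed by q (same step or later)."""
    for i, state in enumerate(trace):
        if p in state and not any(q in later for later in trace[i:]):
            return False
    return True

def generate_hypotheses(props):
    """Build (name, check) pairs for a few LTL formula templates."""
    hyps = []
    for p in props:
        hyps.append((f"G {p}", lambda t, p=p: holds_globally(t, p)))
        hyps.append((f"F {p}", lambda t, p=p: holds_eventually(t, p)))
    for p, q in combinations(props, 2):
        hyps.append((f"G({p} -> F {q})", lambda t, p=p, q=q: holds_response(t, p, q)))
        hyps.append((f"G({q} -> F {p})", lambda t, p=p, q=q: holds_response(t, q, p)))
    return hyps

def infer_constraints(good_runs, bad_runs, props):
    """Keep hypotheses satisfied by all good runs and violated by some bad run."""
    return [name for name, check in generate_hypotheses(props)
            if all(check(run) for run in good_runs)
            and any(not check(run) for run in bad_runs)]

if __name__ == "__main__":
    # Each run is a list of states; each state is the set of true propositions.
    good = [[{"safe"}, {"safe", "goal"}], [{"safe"}, {"safe"}, {"safe", "goal"}]]
    bad = [[{"safe"}, set()], [{"safe"}, {"safe"}]]
    print(infer_constraints(good, bad, ["safe", "goal"]))
    # -> ['G safe', 'F goal', 'G(safe -> F goal)']
```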

    Why Did I Fail? A Causal-Based Method to Find Explanations for Robot Failures

    Robot failures in human-centered environments are inevitable. Therefore, the ability of robots to explain such failures is paramount for interacting with humans to increase trust and transparency. To achieve this skill, the main challenges addressed in this paper are I) acquiring enough data to learn a cause-effect model of the environment and II) generating causal explanations based on the obtained model. We address I) by learning a causal Bayesian network from simulation data. Concerning II), we propose a novel method that enables robots to generate contrastive explanations upon task failures. The explanation is based on setting the failure state in contrast with the closest state that would have allowed for successful execution. This state is found through breadth-first search and is based on success predictions from the learned causal model. We assessed our method in two different scenarios: I) stacking cubes and II) dropping spheres into a container. The obtained causal models reach a sim2real accuracy of 70% and 72%, respectively. We finally show that our novel method scales over multiple tasks and allows real robots to give failure explanations like "the upper cube was stacked too high and too far to the right of the lower cube."
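    A minimal sketch of the search described above, assuming a discretized two-variable state and a hand-written stand-in for the learned causal Bayesian network: breadth-first search expands outward from the failure state until the success model predicts success, and the differing variables form the contrastive explanation. All variable names, thresholds, and step sizes are illustrative assumptions, not the paper's code.

```python
# Minimal sketch: contrastive failure explanation via BFS to the closest
# predicted-success state. predicts_success() is a placeholder for the learned
# causal Bayesian network; the state variables and limits are assumed values.
from collections import deque

STEP = 0.01  # assumed discretization step (m) for the search grid

def predicts_success(state):
    """Placeholder for P(success | state) from the learned causal model."""
    x_offset, drop_height = state
    return abs(x_offset) <= 0.02 and drop_height <= 0.05

def neighbors(state):
    """One-step changes of each state variable on the discretized grid."""
    x_offset, drop_height = state
    for dx in (-STEP, 0.0, STEP):
        for dh in (-STEP, 0.0, STEP):
            if dx or dh:
                yield (round(x_offset + dx, 3), round(drop_height + dh, 3))

def closest_success_state(failure_state, max_expansions=10000):
    """BFS outward from the failure state until a predicted-success state is found."""
    queue, visited = deque([failure_state]), {failure_state}
    while queue and max_expansions:
        max_expansions -= 1
        state = queue.popleft()
        if predicts_success(state):
            return state
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return None

def contrastive_explanation(failure_state, names):
    success_state = closest_success_state(failure_state)
    if success_state is None:
        return "no nearby successful state found"
    parts = []
    for name, actual, needed in zip(names, failure_state, success_state):
        if actual > needed:
            parts.append(f"{name} was too high ({actual:.2f} instead of {needed:.2f})")
        elif actual < needed:
            parts.append(f"{name} was too low ({actual:.2f} instead of {needed:.2f})")
    return " and ".join(parts)

if __name__ == "__main__":
    # Failure: object placed 5 cm to the right and released from 9 cm above.
    print(contrastive_explanation((0.05, 0.09), ("x offset", "drop height")))
```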

    Explainable and Interpretable Decision-Making for Robotic Tasks

    Get PDF
    Future generations of robots, such as service robots that support humans with household tasks, will be a pervasive part of our daily lives. The human's ability to understand the decision-making process of robots is thereby considered to be crucial for establishing trust-based and efficient interactions between humans and robots. In this thesis, we present several interpretable and explainable decision-making methods that aim to improve the human's understanding of a robot's actions, with a particular focus on the explanation of why robot failures were committed.

    In this thesis, we consider different types of failures, such as task recognition errors and task execution failures. Our first goal is an interpretable approach to learning from human demonstrations (LfD), which is essential for robots to learn new tasks without the time-consuming trial-and-error learning process. Our proposed method deals with the challenge of transferring human demonstrations to robots by an automated generation of symbolic planning operators based on interpretable decision trees. Our second goal is the prediction, explanation, and prevention of robot task execution failures based on causal models of the environment. Our contribution towards the second goal is a causal-based method that finds contrastive explanations for robot execution failures, which enables robots to predict, explain, and prevent even temporally shifted action failures (e.g., the current action was successful but will negatively affect the success of future actions). Since learning causal models is data-intensive, our final goal is to improve data efficiency by utilizing prior experience. This investigation aims to help robots learn causal models faster, enabling them to provide failure explanations at the cost of fewer action execution experiments.

    In the future, we will work on scaling up the presented methods to generalize to more complex, human-centered applications.
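    As one illustration of the first goal, a decision tree fit to demonstration outcomes can be read back as symbolic precondition rules for a planning operator. The sketch below uses scikit-learn's public tree attributes on a toy data set; the feature names, data, and "stack" operator are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch: extracting symbolic precondition candidates from an
# interpretable decision tree learned on demonstration outcomes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["x_offset_cm", "drop_height_cm"]

# Toy demonstration outcomes: state features and whether stacking succeeded.
X = np.array([[0.5, 2.0], [1.0, 3.0], [4.0, 2.0], [0.5, 8.0], [3.5, 7.0]])
y = np.array([1, 1, 0, 0, 0])  # 1 = success, 0 = failure

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

def success_preconditions(model, feature_names):
    """Collect the conjunction of threshold tests along each path to a 'success' leaf."""
    t = model.tree_
    rules = []

    def walk(node, conditions):
        if t.children_left[node] == t.children_right[node]:  # leaf node
            if np.argmax(t.value[node][0]) == 1:  # majority class is "success"
                rules.append(" AND ".join(conditions) or "TRUE")
            return
        name = feature_names[t.feature[node]]
        thr = t.threshold[node]
        walk(t.children_left[node], conditions + [f"{name} <= {thr:.2f}"])
        walk(t.children_right[node], conditions + [f"{name} > {thr:.2f}"])

    walk(0, [])
    return rules

# Each extracted rule is a candidate precondition for a symbolic operator.
for rule in success_preconditions(clf, FEATURES):
    print("stack(cube) precondition candidate:", rule)
```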

    Grounding Complex Natural Language Commands for Temporal Tasks in Unseen Environments

    Grounding navigational commands to linear temporal logic (LTL) leverages its unambiguous semantics for reasoning about long-horizon tasks and verifying the satisfaction of temporal constraints. Existing approaches require training data from the specific environment and landmarks that will be used in natural language to understand commands in those environments. We propose Lang2LTL, a modular system and a software package that leverages large language models (LLMs) to ground temporal navigational commands to LTL specifications in environments without prior language data. We comprehensively evaluate Lang2LTL for five well-defined generalization behaviors. Lang2LTL demonstrates the state-of-the-art ability of a single model to ground navigational commands to diverse temporal specifications in 21 city-scaled environments. Finally, we demonstrate that a physical robot using Lang2LTL can follow 52 semantically diverse navigational commands in two indoor environments.
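    A simplified sketch of such a modular pipeline, with canned outputs standing in for the LLM-based modules: extract referring expressions, resolve them to landmarks of the target environment, translate the lifted command to an LTL template, then substitute the grounded propositions. Every name, the landmark database, and the example command are illustrative assumptions, not the released Lang2LTL package.

```python
# Minimal sketch of a modular language-to-LTL grounding pipeline.
# The canned return values stand in for LLM calls and an embedding-based
# landmark resolver; all identifiers here are assumptions for illustration.

def extract_referring_expressions(command: str) -> list[str]:
    """Module 1: an LLM would list the landmark phrases mentioned in the command."""
    return ["the red building", "the bank"]          # canned output for the demo command

def ground_landmarks(expressions: list[str], landmark_db: dict[str, str]) -> dict[str, str]:
    """Module 2: map each phrase to a known landmark symbol (e.g. by embedding similarity)."""
    return {expr: landmark_db[expr] for expr in expressions}

def lift_and_translate(command: str) -> str:
    """Module 3: an LLM would translate the lifted command into an LTL template."""
    # "Go to A, but not before visiting B."  ->  eventually A, and not A until B
    return "F(A) & (!A U B)"                         # canned output for the demo command

def ground_specification(command: str, landmark_db: dict[str, str]) -> str:
    """Run the three modules and substitute grounded propositions into the template."""
    expressions = extract_referring_expressions(command)
    grounding = ground_landmarks(expressions, landmark_db)
    spec = lift_and_translate(command)
    for placeholder, expr in zip(("A", "B"), expressions):
        spec = spec.replace(placeholder, grounding[expr])
    return spec

if __name__ == "__main__":
    db = {"the red building": "visit_red_building", "the bank": "visit_bank"}
    cmd = "Go to the red building, but not before visiting the bank."
    print(ground_specification(cmd, db))
    # -> F(visit_red_building) & (!visit_red_building U visit_bank)
```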