2 research outputs found

    A causal-based approach to explain, predict and prevent failures in robotic tasks

    Robots working in human environments need to adapt to unexpected changes to avoid failures. This is an open and complex challenge that requires robots to predict and identify the causes of failures in time to prevent them. In this paper, we present a causal-based method that enables robots to predict when errors are likely to occur and to prevent them by executing a corrective action. Our method predicts both immediate failures and failures that will occur in the future. The latter type is particularly challenging; we call them timely-shifted action failures (e.g., the current action was successful but will negatively affect the success of future actions). First, our method detects the cause–effect relationships between task executions and their consequences by learning a causal Bayesian network (BN). The learned model is transferred from simulated data to real scenarios to demonstrate its robustness and generalization. Based on the causal BN, the robot can predict whether and why the executed action will succeed in its current state. Then, if the current action is predicted to fail, a novel method finds the closest success state through a contrastive breadth-first search. We evaluate our approach on the problem of stacking cubes in two cases: (a) single stacks (stacking one cube) and (b) multiple stacks (stacking three cubes). In the single-stack case, our method reduced the error rate by 97%. We also show that our approach scales to capture several actions in one model, allowing us to measure the impact of an imprecise stack of the first cube on the stacking success of the third cube. In these complex situations, our model prevented around 95% of the stacking errors. This demonstrates that our method is able to explain, predict, and prevent execution failures, and that it scales to complex scenarios that require an understanding of how the action history impacts future actions.
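The abstract pairs two components that can be sketched compactly: a learned causal model that scores the success probability of an action in the current state, and a contrastive breadth-first search over neighboring states when failure is predicted. The following is a minimal, hypothetical Python sketch of that combination; the probability table, the discretized state variables (offset bin, drop-height bin), the 0.8 threshold, and the `neighbors` function are invented stand-ins for the paper's learned causal Bayesian network, not its actual implementation.

```python
from collections import deque

# Hypothetical stand-in for the learned causal BN: P(success | state),
# indexed by discretized state variables (offset bin, drop-height bin).
# In the paper, this distribution is learned from simulated executions.
P_SUCCESS = {
    (0, 0): 0.95, (0, 1): 0.85, (0, 2): 0.40,
    (1, 0): 0.80, (1, 1): 0.55, (1, 2): 0.20,
    (2, 0): 0.30, (2, 1): 0.15, (2, 2): 0.05,
}
THRESHOLD = 0.8  # predict failure below this success probability

def neighbors(state):
    """States reachable by correcting one variable by one bin."""
    offset, height = state
    for d_off, d_h in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        cand = (offset + d_off, height + d_h)
        if cand in P_SUCCESS:
            yield cand

def closest_success_state(state):
    """Contrastive BFS: nearest state predicted to succeed, or None."""
    if P_SUCCESS[state] >= THRESHOLD:
        return state  # no correction needed
    queue, visited = deque([state]), {state}
    while queue:
        current = queue.popleft()
        for cand in neighbors(current):
            if cand in visited:
                continue
            if P_SUCCESS[cand] >= THRESHOLD:
                return cand  # first hit = fewest corrective steps
            visited.add(cand)
            queue.append(cand)
    return None

state = (2, 1)  # large offset, medium drop height: predicted to fail
print(P_SUCCESS[state], "->", closest_success_state(state))
```

Because BFS expands states in order of distance, the first success state found is one reachable with the fewest single-variable corrections, which is what makes the result contrastive: it names the minimal change that flips the predicted outcome.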

    Explainable and Interpretable Decision-Making for Robotic Tasks

    Future generations of robots, such as service robots that support humans with household tasks, will be a pervasive part of our daily lives. The human's ability to understand the decision-making process of robots is considered crucial for establishing trust-based and efficient interactions between humans and robots. In this thesis, we present several interpretable and explainable decision-making methods that aim to improve the human's understanding of a robot's actions, with a particular focus on explaining why robot failures occurred. We consider different types of failures, such as task recognition errors and task execution failures. Our first goal is an interpretable approach to learning from human demonstrations (LfD), which is essential for robots to learn new tasks without a time-consuming trial-and-error learning process. Our proposed method addresses the challenge of transferring human demonstrations to robots through the automated generation of symbolic planning operators based on interpretable decision trees. Our second goal is the prediction, explanation, and prevention of robot task execution failures based on causal models of the environment. Our contribution towards this goal is a causal-based method that finds contrastive explanations for robot execution failures, which enables robots to predict, explain, and prevent even timely-shifted action failures (e.g., the current action was successful but will negatively affect the success of future actions). Since learning causal models is data-intensive, our final goal is to improve data efficiency by utilizing prior experience. This investigation aims to help robots learn causal models faster, enabling them to provide failure explanations at the cost of fewer action execution experiments. In the future, we will work on scaling up the presented methods to generalize to more complex, human-centered applications.
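The thesis's first goal, generating symbolic planning operators from interpretable decision trees, can be illustrated with a small sketch. The version below is a hypothetical toy, not the thesis's pipeline: the boolean state features, the six-sample demonstration dataset, and the PDDL-style output format are all invented for illustration; only the core idea (reading success preconditions off decision-tree paths) comes from the abstract.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical demonstration data: two boolean state features observed
# before a "stack" action, and whether the action succeeded. Real LfD
# data would come from human demonstrations.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 1]])
y = np.array([1, 0, 0, 0, 1, 0])  # 1 = success
features = ["gripper_empty", "target_clear"]

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)

def success_preconditions(tree, node=0, conds=()):
    """Collect root-to-leaf conditions for leaves predicting success."""
    t = tree.tree_
    if t.children_left[node] == -1:  # leaf node
        if np.argmax(t.value[node]) == 1:  # majority class = success
            yield conds
        return
    name = features[t.feature[node]]
    # Boolean features split at 0.5: left branch = False, right = True
    yield from success_preconditions(tree, t.children_left[node],
                                     conds + (f"not {name}",))
    yield from success_preconditions(tree, t.children_right[node],
                                     conds + (name,))

for pre in success_preconditions(clf):
    print(f"(:action stack :precondition (and {' '.join(pre)}))")
```

Each extracted path reads as a human-checkable precondition, which is the interpretability property the thesis targets; a full pipeline would emit complete planning operators (preconditions plus effects) rather than just the precondition clause printed here.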