
    Why did I fail? A Causal-based Method to Find Explanations for Robot Failures

    Robot failures in human-centered environments are inevitable. Therefore, the ability of robots to explain such failures is paramount for interacting with humans to increase trust and transparency. To achieve this skill, the main challenges addressed in this paper are I) acquiring enough data to learn a cause-effect model of the environment and II) generating causal explanations based on that model. We address I) by learning a causal Bayesian network from simulation data. Concerning II), we propose a novel method that enables robots to generate contrastive explanations upon task failures. The explanation is based on setting the failure state in contrast with the closest state that would have allowed for successful execution, which is found through breadth-first search and is based on success predictions from the learned causal model. We assess the sim2real transferability of the causal model on a cube stacking scenario. Based on real-world experiments with two differently embodied robots, we achieve a sim2real accuracy of 70% without any adaptation or retraining. Our method thus allowed real robots to give failure explanations like, 'the upper cube was dropped too high and too far to the right of the lower cube.' (Comment: submitted to IEEE Robotics and Automation Letters, February 2022.)
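    The method has two learnable pieces. Below is a minimal sketch of step I), fitting a causal Bayesian network to simulated trials, using pgmpy; the variable names (x_offset, drop_height, success), the discretisation, and the hand-specified causal edges are illustrative assumptions, not the paper's actual model.

    import pandas as pd
    from pgmpy.models import BayesianNetwork
    from pgmpy.estimators import MaximumLikelihoodEstimator
    from pgmpy.inference import VariableElimination

    # Simulated trials: each row is one (discretised) stacking attempt.
    # Variable names and values are hypothetical stand-ins.
    data = pd.DataFrame({
        "x_offset":    ["low", "high", "low", "high", "low", "high"],
        "drop_height": ["low", "low", "high", "high", "low", "high"],
        "success":     [1, 0, 0, 0, 1, 0],
    })

    # Assumed causal structure: action parameters cause the task outcome.
    model = BayesianNetwork([("x_offset", "success"),
                             ("drop_height", "success")])
    model.fit(data, estimator=MaximumLikelihoodEstimator)

    # Query the success probability of a candidate parameterisation.
    infer = VariableElimination(model)
    print(infer.query(["success"],
                      evidence={"x_offset": "low", "drop_height": "low"}))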

    Why Did I Fail? A Causal-Based Method to Find Explanations for Robot Failures

    Robot failures in human-centered environments are inevitable. Therefore, the ability of robots to explain such failures is paramount for interacting with humans to increase trust and transparency. To achieve this skill, the main challenges addressed in this paper are I) acquiring enough data to learn a cause-effect model of the environment and II) generating causal explanations based on the obtained model. We address I) by learning a causal Bayesian network from simulation data. Concerning II), we propose a novel method that enables robots to generate contrastive explanations upon task failures. The explanation is based on setting the failure state in contrast with the closest state that would have allowed for successful execution. This state is found through breadth-first search and is based on success predictions from the learned causal model. We assessed our method in two different scenarios: I) stacking cubes and II) dropping spheres into a container. The obtained causal models reach a sim2real accuracy of 70% and 72%, respectively. We finally show that our novel method scales over multiple tasks and allows real robots to give failure explanations like “the upper cube was stacked too high and too far to the right of the lower cube.”
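    To make the search step concrete, here is a minimal sketch of step II): a breadth-first search from the observed failure state to the closest state the model classifies as a success, followed by a contrastive sentence built from the differing variables. The discretisation, the stand-in success predictor, and the wording template are illustrative assumptions; in the paper, the predictor would be the learned causal Bayesian network.

    from collections import deque

    # Hypothetical ordered bins per state variable.
    BINS = {
        "x_offset":    ["far_left", "centred", "far_right"],
        "drop_height": ["low", "medium", "high"],
    }

    def neighbours(state):
        # All states reachable by shifting one variable by one bin.
        for var, bins in BINS.items():
            i = bins.index(state[var])
            for j in (i - 1, i + 1):
                if 0 <= j < len(bins):
                    yield {**state, var: bins[j]}

    def closest_success(failure, predict_success):
        # BFS: nearest state for which the model predicts success.
        queue, seen = deque([failure]), {tuple(failure.items())}
        while queue:
            state = queue.popleft()
            if predict_success(state):
                return state
            for nxt in neighbours(state):
                key = tuple(nxt.items())
                if key not in seen:
                    seen.add(key)
                    queue.append(nxt)
        return None

    def explain(failure, success):
        # Contrast the failure with the closest successful state.
        diffs = [f"{v} should have been {success[v]} instead of {failure[v]}"
                 for v in failure if failure[v] != success[v]]
        return "The task failed because " + " and ".join(diffs) + "."

    # Stand-in for a thresholded success prediction (assumption).
    predict = lambda s: s == {"x_offset": "centred", "drop_height": "low"}
    failure = {"x_offset": "far_right", "drop_height": "high"}
    print(explain(failure, closest_success(failure, predict)))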

    Knowledge-based vision and simple visual machines

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong.

    Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps

    With advances in reinforcement learning (RL), agents are now being developed in high-stakes application domains such as healthcare and transportation. Explaining the behavior of these agents is challenging, as the environments in which they act have large state spaces, and their decision-making can be affected by delayed rewards, making it difficult to analyze their behavior. To address this problem, several approaches have been developed. Some approaches attempt to convey the global behavior of the agent, describing the actions it takes in different states. Other approaches devised local explanations which provide information regarding the agent's decision-making in a particular state. In this paper, we combine global and local explanation methods, and evaluate their joint and separate contributions, providing (to the best of our knowledge) the first user study of combined local and global explanations for RL agents. Specifically, we augment strategy summaries that extract important trajectories of states from simulations of the agent with saliency maps which show what information the agent attends to. Our results show that the choice of what states to include in the summary (global information) strongly affects people's understanding of agents: participants shown summaries that included important states significantly outperformed participants who were presented with agent behavior in a randomly chosen set of world-states. We find mixed results with respect to augmenting demonstrations with saliency maps (local information), as the addition of saliency maps did not significantly improve performance in most cases. However, we do find some evidence that saliency maps can help users better understand what information the agent relies on in its decision making, suggesting avenues for future work that can further improve explanations of RL agents.
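    As a concrete illustration of the global component, the sketch below ranks states from simulated episodes by an importance score and keeps the top-k as the summary, in the spirit of HIGHLIGHTS-style strategy summaries. The max-minus-min Q-value importance measure, the toy Q-table, and k are illustrative assumptions rather than this paper's exact procedure.

    import heapq

    def importance(q_values):
        # A state is important when the choice of action matters.
        return max(q_values) - min(q_values)

    def summarise(episodes, q_table, k=5):
        # Pick the k most important states seen across simulations.
        scored = ((importance(q_table[s]), s)
                  for episode in episodes for s in episode)
        return heapq.nlargest(k, scored)

    # Toy example: four states with per-action Q-values (assumed given).
    q_table = {"a": [0.1, 0.9], "b": [0.5, 0.5],
               "c": [0.0, 2.0], "d": [1.0, 1.2]}
    episodes = [["a", "b", "c"], ["d", "b", "a"]]
    print(summarise(episodes, q_table, k=2))  # the two highest-impact states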

    Exploring a New ExpAce: The Complementarities between Experimental Economics and Agent-based Computational Economics

    What is the relationship, if any, between Experimental Economics and Agent-based Computational Economics? Experimental Economics (EXP) investigates individual behaviour (and the emergence of aggregate regularities) by means of human subject experiments. Agent-based Computational Economics (ACE), on the other hand, studies the relationships between the micro and the macro level with the aid of artificial experiments. Note that the way ACE makes use of experiments to formulate theories is indeed similar to the way EXP does. The question we want to address is whether they can complement and integrate with each other. What can Agent-based Computational Economics give to, and take from, Experimental Economics? Can they help and sustain each other, and ultimately gain space out of their restricted respective niches of practitioners? We believe that the answer to all these questions is yes: there can be and there should be profitable “contaminations” in both directions, of which we provide a first comprehensive discussion.
    Keywords: Experimental Economics, Agent-based Computational Economics, Agent-Based Models, Simulation.

    Support of the collaborative inquiry learning process: influence of support on task and team regulation

    Regulation of the learning process is an important condition for efficient and effective learning. In collaborative learning, students have to regulate their collaborative activities (team regulation) in addition to regulating their own learning process focused on the task at hand (task regulation). In this study, we investigate how support of collaborative inquiry learning can influence the use of regulative activities by students. Furthermore, we explore the possible relations between task regulation, team regulation and learning results. This study involves tenth-grade students who worked in pairs in a collaborative inquiry learning environment that was based on a computer simulation, Collisions, developed in the program SimQuest. Students of the same team worked on two different computers and communicated through chat. Chat logs of students from three different conditions are compared. Students in the first condition did not receive any support at all (Control condition). In the second condition, students received an instruction in effective communication, the RIDE rules (RIDE condition). In the third condition, students were, in addition to receiving the RIDE rules instruction, supported by the Collaborative Hypothesis Tool (CHT), which helped the students with formulating hypotheses together (CHT condition). The results show that students overall used more team regulation than task regulation. In the RIDE condition and the CHT condition, students regulated their team activities most often. Moreover, in the CHT condition the regulation of team activities was positively related to the learning results. We can conclude that different measures of support can enhance the use of team regulative activities, which in turn can lead to better learning results.