19 research outputs found

    CAR-DESPOT: causally-informed online POMDP planning for robots in confounded environments

    Robots operating in real-world environments must reason about possible outcomes of stochastic actions and make decisions based on partial observations of the true world state. A major challenge for making accurate and robust action predictions is the problem of confounding, which, if left untreated, can lead to prediction errors. The partially observable Markov decision process (POMDP) is a widely-used framework to model these stochastic and partially-observable decision-making problems. However, due to a lack of explicit causal semantics, POMDP planning methods are prone to confounding bias and thus, in the presence of unobserved confounders, may produce underperforming policies. This paper presents a novel causally-informed extension of "anytime regularized determinized sparse partially observable tree" (AR-DESPOT), a modern anytime online POMDP planner, using causal modelling and inference to eliminate errors caused by unmeasured confounder variables. We further propose a method to learn offline, from ground-truth model data, the partial parameterisation of the causal model used for planning. We evaluate our methods on a toy problem with an unobserved confounder and show that the learned causal model is highly accurate, while our planning method is more robust to confounding and produces overall higher-performing policies than AR-DESPOT. (8 pages, 3 figures; submitted to the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS.)
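
    The confounding bias the abstract warns about can be illustrated with a toy numeric sketch (hypothetical setting, not the paper's model or benchmark): when a latent variable influences both the logged action and the outcome, the naive observational success rate overstates what intervening on the action would achieve, and a back-door adjustment over the confounder recovers the interventional quantity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: a binary confounder u influences both the
# logged action a and the outcome s (all numbers are illustrative).
n = 200_000
u = rng.random(n) < 0.5                    # unobserved confounder
a = rng.random(n) < np.where(u, 0.9, 0.1)  # behaviour policy depends on u
s = rng.random(n) < np.where(u, 0.8, 0.3)  # outcome also depends on u

# Naive observational estimate P(s | a=1): biased, because taking a=1
# is correlated with the favourable value of u.
naive = s[a].mean()

# Back-door adjustment P(s | do(a=1)) = sum_u P(s | a=1, u) P(u).
# This is only computable here because we simulate u; a planner must
# instead learn or model this quantity, which is the role a causal
# model plays in a causally-informed planner.
adjusted = sum(s[a & (u == v)].mean() * (u == v).mean() for v in (True, False))

print(round(naive, 2), round(adjusted, 2))  # naive is noticeably larger
```

    With these illustrative numbers the observational estimate is about 0.75 while the interventional value is about 0.55, so a planner trusting the logged data would systematically overrate the action.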

    Towards a causal probabilistic framework for prediction, action-selection & explanations for robot block-stacking tasks

    Uncertainties in the real world mean that it is impossible for system designers to anticipate and explicitly design for all scenarios that a robot might encounter. Thus, robots designed this way are fragile and fail outside of highly-controlled environments. Causal models provide a principled framework to encode formal knowledge of the causal relationships that govern the robot's interaction with its environment, in addition to probabilistic representations of the noise and uncertainty typically encountered by real-world robots. Combined with causal inference, these models permit an autonomous agent to understand, reason about, and explain its environment. In this work, we focus on a robot block-stacking task because of the fundamental perception and manipulation capabilities it demonstrates, which are required by many applications, including warehouse logistics and domestic human-support robotics. We propose a novel causal probabilistic framework that embeds a physics-simulation capability into a structural causal model to permit robots to perceive and assess the current state of a block-stacking task, reason about the next-best action from placement candidates, and generate post-hoc counterfactual explanations. We provide exemplar next-best-action selection results and outline planned experimentation in simulated and real-world robot block-stacking tasks. (4 pages, 4 figures; camera-ready manuscript, accepted to the "Causality for Robotics: Answering the Question of Why" workshop at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS.)
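
    The idea of scoring placement candidates through a simulation-like mechanism embedded in a structural causal model can be sketched minimally as follows. Everything here is an illustrative assumption: the "physics" is a one-line overlap heuristic standing in for a real simulator, and the function names, noise model, and candidate offsets are hypothetical, not the paper's implementation.

```python
import random

def physics_mechanism(offset, noise):
    """Stand-in structural equation: the tower stays stable iff the
    (noisy) horizontal offset of the new block is under half its width."""
    return abs(offset + noise) < 0.5

def predict_stability(offset, n_samples=2000, seed=0):
    """Monte-Carlo estimate of P(stable | do(place at offset)),
    obtained by pushing placement noise through the mechanism."""
    rng = random.Random(seed)
    hits = sum(physics_mechanism(offset, rng.gauss(0, 0.1))
               for _ in range(n_samples))
    return hits / n_samples

# Hypothetical candidate horizontal offsets for the next block.
candidates = [0.0, 0.3, 0.45, 0.6]
scores = {c: predict_stability(c) for c in candidates}

# Next-best action: the candidate with the highest predicted stability.
best = max(scores, key=scores.get)
print(best, scores[best])
```

    Replacing the heuristic with calls into a physics engine, while keeping the same query structure, is the substitution the framework's design suggests.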

    Towards Probabilistic Causal Discovery, Inference & Explanations for Autonomous Drones in Mine Surveying Tasks

    Causal modelling offers great potential to provide autonomous agents with the ability to understand the data-generation process that governs their interactions with the world. Such models capture formal knowledge as well as probabilistic representations of the noise and uncertainty typically encountered by autonomous robots in real-world environments. Thus, causality can aid autonomous agents in making decisions and explaining outcomes, but deploying causality in this manner introduces new challenges. Here we identify challenges relating to causality in the context of a drone system operating in a salt mine. Such environments are challenging for autonomous agents because of the presence of confounders, non-stationarity, and the difficulty of building complete causal models ahead of time. To address these issues, we propose a probabilistic causal framework consisting of causally-informed POMDP planning, online SCM adaptation, and post-hoc counterfactual explanations. Further, we outline planned experimentation to evaluate the framework integrated with a drone system in simulated mine environments and on a real-world mine dataset. (3 pages, 1 figure; to be published in the proceedings of the "Causality for Robotics: Answering the Question of Why" workshop at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS; adjusted initial submission version.)

    From ABC to KPZ

    We study the equilibrium fluctuations of an interacting particle system evolving on the discrete ring with three species of particles, which we name A, B and C, such that each site carries exactly one particle. We prove that proper choices of the density fluctuation fields (matching those from nonlinear fluctuating hydrodynamics theory) associated to the conserved quantities converge, in the limit N → ∞, to a system of stochastic partial differential equations, which can be either the Ornstein-Uhlenbeck equation or the stochastic Burgers equation. (41 pages.)
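
    For orientation, the two limiting equations named above are commonly written in the fluctuating-hydrodynamics literature in the following generic forms; the coefficients and notation here are placeholders, not taken from the paper, whose precise coupled system will differ.

```latex
% Generic forms of the two limiting SPDEs for a fluctuation field Y
% (\nu, \lambda, D are model-dependent placeholder coefficients,
%  \dot{W} denotes space-time white noise):
\begin{align}
  \partial_t Y &= \nu\,\Delta Y + \sqrt{2D}\,\nabla \dot{W}
    && \text{(Ornstein--Uhlenbeck)} \\
  \partial_t Y &= \nu\,\Delta Y + \lambda\,\nabla\!\big(Y^2\big)
    + \sqrt{2D}\,\nabla \dot{W}
    && \text{(stochastic Burgers)}
\end{align}
```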

    Aloft: Self-Adaptive Drone Controller Testbed

    Aerial drones are increasingly being considered as a valuable tool for inspection in safety-critical contexts. Nowhere is this more true than in mining operations, which present a dynamic and dangerous environment for human operators. Drones can be deployed in a number of contexts, including efficient surveying as well as search-and-rescue missions. Operating in these dynamic contexts is challenging, however, and requires the drone's control software to detect and adapt to conditions at run-time. To help in the development of such systems, we present Aloft, a simulation-supported testbed for investigating self-adaptive controllers for drones in mines. Aloft utilises the Robot Operating System (ROS) and a Gazebo model environment to provide physics-based testing. The simulation environment is constructed from a 3D point cloud collected in a physical mock-up of a mine and contains features expected to be found in real-world contexts. Aloft allows members of the research community to deploy their own self-adaptive controllers into the control loop of the drone, to evaluate their effectiveness and robustness in a challenging environment. To demonstrate our system, we provide a self-adaptive drone controller and operating scenario as an exemplar. The provided controller utilises a two-layered architecture with a MAPE-K feedback loop. The scenario is an inspection task during which we inject a communications failure; the aim of the controller is to detect this loss of communication and autonomously perform a return-home behaviour. Limited battery life constrains the mission, so the drone should complete it as fast as possible. Humans, however, might also be present within the environment; this poses a safety risk, and the drone must be able to avoid collisions during autonomous flight.
    In this paper we describe the controller framework and the simulation environment, and provide information on how a user might construct and evaluate their own controllers in the presence of run-time disruptions.
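
    The two-layered MAPE-K idea described above can be sketched as a managing loop that Monitors telemetry, Analyses it against a Knowledge base, Plans an adaptation, and Executes it, for example switching to a return-home behaviour on communication loss. All names, thresholds, and telemetry fields below are illustrative assumptions, not the Aloft interface.

```python
# Shared Knowledge: thresholds the analyser consults (illustrative values).
KNOWLEDGE = {"comms_timeout_s": 5.0, "min_battery_pct": 20.0}

def monitor(telemetry):
    """Monitor: collect the symptoms the analyser cares about."""
    return {
        "comms_silence_s": telemetry["time_since_last_packet_s"],
        "battery_pct": telemetry["battery_pct"],
    }

def analyse(symptoms):
    """Analyse: diagnose an issue from the symptoms, or None."""
    if symptoms["comms_silence_s"] > KNOWLEDGE["comms_timeout_s"]:
        return "comms_lost"
    if symptoms["battery_pct"] < KNOWLEDGE["min_battery_pct"]:
        return "battery_low"
    return None

def plan(issue):
    """Plan: map a diagnosed issue to an adaptation action."""
    return {"comms_lost": "return_home",
            "battery_low": "land_now"}.get(issue, "continue_mission")

def execute(action):
    """Execute: in a real system this would command the managed
    ROS control loop; here it just reports the chosen action."""
    return action

def mape_k_step(telemetry):
    """One pass of the managing layer's feedback loop."""
    return execute(plan(analyse(monitor(telemetry))))

# Injected communications failure: 12 s of silence triggers return-home.
print(mape_k_step({"time_since_last_packet_s": 12.0, "battery_pct": 80.0}))
```

    Separating the managed control loop from this managing loop is what lets a testbed swap controllers in and out without touching the flight stack.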