
    Narrative based Postdictive Reasoning for Cognitive Robotics

    Making sense of incomplete and conflicting narrative knowledge in the presence of abnormalities, unobservable processes, and other real-world considerations is both a challenge and a crucial requirement for cognitive robotics systems. An added challenge, even when suitably specialised action languages and reasoning systems exist, is practical integration and application within large-scale robot control frameworks. Against the backdrop of an autonomous wheelchair robot control task, we report on application-driven work to realise postdiction-triggered abnormality detection and re-planning for real-time robot control: (a) narrative-based knowledge about the environment is obtained via a larger smart environment framework; and (b) abnormalities are postdicted from stable models of an answer-set program corresponding to the robot's epistemic model. The overall reasoning is performed in the context of a planner based on an approximate epistemic action theory, implemented via a translation to answer-set programming.
    Comment: Commonsense Reasoning Symposium, Ayia Napa, Cyprus, 201
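
    As a minimal illustration of the abnormality-detection idea described above (a Python sketch with invented names, not the authors' ASP implementation): an abnormality is postdicted when the observed effects of an executed action contradict its expected effects.

        # Illustrative sketch only: the Action class, the helper, and the wheelchair
        # example are assumptions, not the paper's code or encoding.
        from dataclasses import dataclass

        @dataclass
        class Action:
            name: str
            expected_effects: dict  # fluent -> expected truth value after execution

        def postdict_abnormality(action: Action, observation: dict) -> set:
            """Fluents whose observed value contradicts the expected effect.

            A non-empty result is taken as retrospective (postdictive) evidence
            that an abnormality occurred during the action's execution."""
            return {f for f, v in action.expected_effects.items()
                    if f in observation and observation[f] != v}

        # Usage: the wheelchair robot drives to the kitchen, but a later sensing
        # action reports it is still elsewhere -> postdict an abnormality, re-plan.
        drive = Action("drive_to_kitchen", {"at_kitchen": True})
        sensed = {"at_kitchen": False}
        if postdict_abnormality(drive, sensed):
            print("abnormality postdicted: re-planning required")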

    Efficient Multi-agent Epistemic Planning: Teaching Planners About Nested Belief

    Many AI applications involve the interaction of multiple autonomous agents, requiring those agents to reason about their own beliefs as well as those of other agents. However, planning involving nested beliefs is known to be computationally challenging. In this work, we address the task of synthesizing plans that necessitate reasoning about the beliefs of other agents. We plan from the perspective of a single agent, with the potential for goals and actions that involve nested beliefs, non-homogeneous agents, co-present observations, and the ability for one agent to reason as if it were another. We formally characterize our notion of planning with nested belief, and subsequently demonstrate how to automatically convert such problems into problems that can be solved efficiently by classical planning technology. Our approach represents an important step towards applying the well-established field of automated planning to the challenging task of planning involving nested beliefs of multiple agents.
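
    A small sketch of the kind of compilation hinted at above (an assumed encoding in Python, not the paper's actual translation): a nested-belief literal such as "agent a believes that agent b believes p" can be flattened into a single proposition that an off-the-shelf classical planner treats as an ordinary fluent.

        # Illustrative only: the naming scheme B_<agent>__ is an assumption.
        def flatten_belief(agents, fluent):
            """['a', 'b'], 'p'  ->  'B_a__B_b__p'."""
            prop = fluent
            for agent in reversed(agents):
                prop = f"B_{agent}__{prop}"
            return prop

        print(flatten_belief(["a", "b"], "p"))  # B_a__B_b__p
        print(flatten_belief([], "p"))          # p  (no nesting: an ordinary fluent)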

    Design of a solver for multi-agent epistemic planning

    As the interest in Artificial Intelligence continues to grow, it is becoming more and more important to investigate formalizations and tools that allow us to exploit logic to reason about the world. In particular, given the increasing number of multi-agent systems that could benefit from techniques of automated reasoning, exploring new ways to define not only the state of the world but also the agents' information is constantly growing in importance. This type of reasoning, i.e., about an agent's perception of the world and about her knowledge of her own and others' knowledge, is referred to as epistemic reasoning. In our work we will try to formalize this concept, expressed through epistemic logic, for dynamic domains. In particular, we will attempt to define a new action-based language for multi-agent epistemic planning and to implement an epistemic planner based on it. This solver should provide a tool flexible enough to reason about different domains, e.g., economics, security, justice and politics, where reasoning about others' beliefs could lead to winning strategies or help in changing a group of agents' view of the world.
    Comment: In Proceedings ICLP 2019, arXiv:1909.07646. arXiv admin note: text overlap with arXiv:1511.01960 by other authors

    Postdictive Reasoning in Epistemic Action Theory

    If an agent executes an action, this will not only change the world physically, but also the agent's knowledge about the world. Therefore, the occurrence of an action can be modeled as an epistemic state transition which maps the knowledge state of an agent to a successor knowledge state. For example, consider that an agent in a state s_0 executes an action a. This causes a transition to a state s_1. Subsequently, the agent executes a sensing action a_s, which produces knowledge and causes a transition to a state s_2. With the information gained by sensing, the agent can not only extend its knowledge about s_2, but also infer additional knowledge about the initial state s_0. That is, the agent uses knowledge about the present to retrospectively acquire additional information about the past. We refer to this temporal form of epistemic inference as postdiction. Existing action theories are not capable of efficiently performing postdictive reasoning because they require an exponential number of state variables to represent an agent's knowledge state. The contribution of this thesis is an approximate epistemic action theory which is capable of postdictive reasoning while requiring only a linear number of state variables to represent an agent's knowledge state. In addition, the theory is able to perform a more general temporal form of postdiction, which most existing approaches do not support. We call the theory the h-approximation (HPX) because it explicitly represents historical knowledge about past world states. In addition to the operational semantics of HPX, we present its formalization in terms of Answer Set Programming (ASP) and provide corresponding soundness results. The ASP implementation allows us to apply HPX in real robotic applications by using off-the-shelf ASP solvers. Specifically, we integrate HPX into an online planning framework for Cognitive Robotics where planning, plan execution and abductive explanation tasks are interleaved. As a proof of concept, we provide a case study which demonstrates the application of HPX for high-level robot control in a Smart Home. The case study emphasizes the usefulness of postdiction for abnormality detection in robotics: actions which are performed by robots are often not successful due to unforeseen practical problems. A solution is to verify action success by observing the effects of the action. If the desired effects do not hold after action execution, then one can postdict the existence of an abnormality.
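
    To make the temporal flavour of postdiction concrete, here is a toy Python sketch (illustrative names and domain, not the HPX semantics): knowledge is stored per fluent and time point, so it stays linear in the number of state variables, and a sensed value is propagated backwards over time points at which no action affected the fluent.

        # Illustrative sketch only; the toy domain and all names are assumptions.
        UNKNOWN = None

        def postdict(knowledge, fluent, sensed_at, value, affected_at):
            """Record a sensing result and push it back over inert time points.

            knowledge:   dict (fluent, t) -> True / False / UNKNOWN
            affected_at: time points at which some action may have changed
                         the fluent; backward propagation stops there."""
            t = sensed_at
            while t >= 0:
                knowledge[(fluent, t)] = value
                if t in affected_at:   # an action may have caused the value here
                    break
                t -= 1                 # fluent was inert here: postdict one step back
            return knowledge

        # Usage: 'door_open' is sensed true at t=2 and no action touched it, so the
        # agent retrospectively learns that the door was already open at t=0.
        k = {("door_open", t): UNKNOWN for t in range(3)}
        postdict(k, "door_open", sensed_at=2, value=True, affected_at=set())
        print(k[("door_open", 0)])  # True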

    Cognitive Science of Religion, Atheism, and Theism


    Robot task planning and explanation in open and uncertain worlds

    A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.
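
    The distinction between epistemic effects and assumptions lends itself to a short sketch (Python, with invented names; an assumed structure rather than the paper's architecture): knowledge effects are tagged by their origin, and on task failure the assumed, never-verified beliefs are the first candidates for explanation.

        # Illustrative only: class, field, and fluent names are assumptions.
        from dataclasses import dataclass

        @dataclass
        class KnowledgeEffect:
            fluent: str
            value: bool
            kind: str  # "epistemic" (backed by observation) or "assumption"

        def candidate_explanations(effects):
            """On failure, question the assumed (unverified) beliefs first."""
            return [e for e in effects if e.kind == "assumption"]

        effects = [
            KnowledgeEffect("object_on_table", True, "epistemic"),   # seen by the camera
            KnowledgeEffect("door_unlocked",   True, "assumption"),  # never checked
        ]
        print([e.fluent for e in candidate_explanations(effects)])   # ['door_unlocked']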

    Mind the Uncertainty: Risk-Aware and Actively Exploring Model-Based Reinforcement Learning

    We introduce a simple but effective method for managing risk in model-based reinforcement learning with trajectory sampling. The method combines probabilistic safety constraints with a balance of optimism in the face of epistemic uncertainty and pessimism in the face of aleatoric uncertainty, both estimated from an ensemble of stochastic neural networks. Various experiments indicate that the separation of uncertainties is essential to performing well with data-driven MPC approaches in uncertain and safety-critical control environments.
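
    The separation of uncertainties mentioned above follows a standard ensemble decomposition; the sketch below (Python/NumPy, assumed to be close in spirit but not taken from the paper) treats the averaged predicted variance of the ensemble members as aleatoric uncertainty and the disagreement between their means as epistemic uncertainty.

        # Illustrative only: shapes and numbers are made up for the example.
        import numpy as np

        def split_uncertainty(means: np.ndarray, variances: np.ndarray):
            """means, variances: arrays of shape (ensemble_size, state_dim)."""
            aleatoric = variances.mean(axis=0)  # average within-member noise
            epistemic = means.var(axis=0)       # disagreement across members
            return epistemic, aleatoric

        means = np.array([[0.9], [1.1], [1.0]])        # 3 ensemble members, 1-D state
        variances = np.array([[0.04], [0.05], [0.06]])
        epi, ale = split_uncertainty(means, variances)
        print(epi, ale)  # ~[0.0067] [0.05]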