    Improving Automated Driving through Planning with Human Internal States

    This work examines the hypothesis that partially observable Markov decision process (POMDP) planning with human driver internal states can significantly improve both safety and efficiency in autonomous freeway driving. We evaluate this hypothesis in a simulated scenario where an autonomous car must safely perform three lane changes in rapid succession. Approximate POMDP solutions are obtained through the partially observable Monte Carlo planning with observation widening (POMCPOW) algorithm. This approach outperforms over-confident and conservative MDP baselines and matches or outperforms QMDP. Relative to the MDP baselines, POMCPOW typically cuts the rate of unsafe situations in half or increases the success rate by 50%.
    Comment: Preprint before submission to IEEE Transactions on Intelligent Transportation Systems. arXiv admin note: text overlap with arXiv:1702.0085
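
    The following is a minimal illustrative sketch (in Python) of the kind of latent-state reasoning this abstract refers to: a Bayesian belief update over a hypothetical hidden driver trait such as aggressiveness. The state space, observation model, and all numbers are invented for illustration; this is not the authors' simulator or the POMCPOW algorithm itself.

        import math

        AGGRESSIVENESS_LEVELS = [0.2, 0.5, 0.8]   # hypothetical hidden driver states

        def observation_likelihood(observed_gap_change, aggressiveness):
            # P(observation | hidden state): more aggressive drivers close gaps faster.
            expected = -1.0 * aggressiveness       # metres per step; made-up model
            sigma = 0.5
            return math.exp(-0.5 * ((observed_gap_change - expected) / sigma) ** 2)

        def update_belief(belief, observed_gap_change):
            # Bayes update of a categorical belief over the hidden aggressiveness level.
            posterior = {a: p * observation_likelihood(observed_gap_change, a)
                         for a, p in belief.items()}
            z = sum(posterior.values()) or 1e-12
            return {a: p / z for a, p in posterior.items()}

        belief = {a: 1.0 / len(AGGRESSIVENESS_LEVELS) for a in AGGRESSIVENESS_LEVELS}
        for gap_change in [-0.9, -0.7, -0.8]:      # neighbouring car keeps closing the gap
            belief = update_belief(belief, gap_change)
        print(belief)                              # mass shifts toward the aggressive hypothesis

    In a full POMDP planner such as POMCPOW, belief updates of this kind are carried out with particles inside a Monte Carlo tree search over candidate manoeuvres rather than in closed form.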

    Narrative based Postdictive Reasoning for Cognitive Robotics

    Making sense of incomplete and conflicting narrative knowledge in the presence of abnormalities, unobservable processes, and other real-world considerations is a challenging and crucial requirement for cognitive robotics systems. An added challenge, even when suitably specialised action languages and reasoning systems exist, is practical integration and application within large-scale robot control frameworks. Against the backdrop of an autonomous wheelchair robot control task, we report on application-driven work to realise postdiction-triggered abnormality detection and re-planning for real-time robot control: (a) narrative-based knowledge about the environment is obtained via a larger smart environment framework; and (b) abnormalities are postdicted from stable models of an answer-set program corresponding to the robot's epistemic model. The overall reasoning is performed in the context of a planner based on an approximate epistemic action theory, implemented via a translation to answer-set programming.
    Comment: Commonsense Reasoning Symposium, Ayia Napa, Cyprus, 201
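
    As a rough illustration of postdiction (not the paper's answer-set-programming encoding), the sketch below hypothesizes a minimal set of abnormalities that makes a forward simulation of the action history consistent with what was actually observed. The wheelchair-style domain, the candidate abnormalities, and the transition function are all made up.

        from itertools import combinations

        CANDIDATE_ABNORMALITIES = ["door_blocked", "wheel_slip"]   # hypothetical

        def predict(initial_pos, actions, abnormalities):
            # Forward-simulate the narrative: 'move' advances the robot by one cell
            # unless some abnormality is assumed to hold (deliberately crude model).
            pos = initial_pos
            for act in actions:
                if act == "move" and not abnormalities:
                    pos += 1
            return pos

        def postdict(initial_pos, actions, observed_pos):
            # Return a minimal set of abnormalities that explains the observation.
            for size in range(len(CANDIDATE_ABNORMALITIES) + 1):
                for hypothesis in combinations(CANDIDATE_ABNORMALITIES, size):
                    if predict(initial_pos, actions, set(hypothesis)) == observed_pos:
                        return set(hypothesis)
            return None                            # the narrative cannot be explained

        # The robot issued two 'move' actions but is observed to still be at cell 0:
        print(postdict(0, ["move", "move"], observed_pos=0))   # -> {'door_blocked'}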

    ECHO: A hierarchical combination of classical and multi-agent epistemic planning problems

    The continuous interest in Artificial Intelligence (AI) has brought, among other things, the development of several scenarios where multiple artificial entities interact with each other. As with all other autonomous settings, these multi-agent systems require orchestration. This is generally achieved through techniques derived from the vast field of Automated Planning. Notably, arbitration in multi-agent domains is not only tasked with regulating how the agents act, but must also consider the interactions between the agents' information flows and must, therefore, reason on an epistemic level. This brings a substantial overhead that often diminishes the reasoning process's usability in real-world situations. To address this problem, we present ECHO, a hierarchical framework that embeds classical and multi-agent epistemic (epistemic, for brevity) planners in a single architecture. The idea is to combine (i) classical and (ii) epistemic solvers to efficiently model the agents' interactions with (i) the 'physical world' and (ii) information flows, respectively. In particular, the presented architecture starts by planning on the 'epistemic level', with a high level of abstraction, focusing only on the information flows. It then refines the planning process, by means of the classical planner, to fully characterize the interactions with the 'physical' world. To further optimize the solving process, we introduced the concept of macros in epistemic planning and enriched the 'classical' part of the domain with goal-networks. Finally, we evaluated our approach in an actual robotic environment, showing that our architecture indeed reduces the overall computational time.
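
    The sketch below illustrates only the hierarchical refinement idea described above, not ECHO's actual interfaces: a coarse plan over information flows is computed first, and each abstract step is then expanded into concrete 'physical' actions by a classical-level solver. Both planners here are hypothetical stubs, and the domain (announcing a box location) is invented.

        def epistemic_plan(goal):
            # Stub for the epistemic solver: plan only over information flows
            # (who has to come to know what), ignoring 'physical' details.
            return [("announce", "agent_b", "box_location"),
                    ("announce", "agent_c", "box_location")]

        def classical_refine(abstract_step):
            # Stub for the classical solver: refine one abstract step into
            # concrete actions in the 'physical' world.
            _, listener, fact = abstract_step
            return [("goto", listener), ("tell", listener, fact)]

        def solve(goal):
            plan = []
            for step in epistemic_plan(goal):        # coarse plan on the epistemic level
                plan.extend(classical_refine(step))  # refined by the classical planner
            return plan

        print(solve("everyone_knows(box_location)"))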

    Design of a solver for multi-agent epistemic planning

    As the interest in Artificial Intelligence continues to grow, it is becoming more and more important to investigate formalizations and tools that allow us to exploit logic to reason about the world. In particular, given the increasing number of multi-agent systems that could benefit from techniques of automated reasoning, exploring new ways to define not only the state of the world but also the agents' information is constantly growing in importance. This type of reasoning, i.e., about the agents' perception of the world and about each agent's knowledge of her own and others' knowledge, is referred to as epistemic reasoning. In our work we will try to formalize this concept, expressed through epistemic logic, for dynamic domains. In particular, we will attempt to define a new action-based language for multi-agent epistemic planning and to implement an epistemic planner based on it. This solver should provide a tool flexible enough to reason about different domains, e.g., economy, security, justice, and politics, where reasoning about others' beliefs could lead to winning strategies or help in changing a group of agents' view of the world.
    Comment: In Proceedings ICLP 2019, arXiv:1909.07646. arXiv admin note: text overlap with arXiv:1511.01960 by other author

    Efficient Multi-agent Epistemic Planning: Teaching Planners About Nested Belief

    Many AI applications involve the interaction of multiple autonomous agents, requiring those agents to reason about their own beliefs, as well as those of other agents. However, planning involving nested beliefs is known to be computationally challenging. In this work, we address the task of synthesizing plans that necessitate reasoning about the beliefs of other agents. We plan from the perspective of a single agent with the potential for goals and actions that involve nested beliefs, non-homogeneous agents, co-present observations, and the ability for one agent to reason as if it were another. We formally characterize our notion of planning with nested belief, and subsequently demonstrate how to automatically convert such problems into problems that appeal to classical planning technology for solving efficiently. Our approach represents an important step towards applying the well-established field of automated planning to the challenging task of planning involving nested beliefs of multiple agents.
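
    A rough sketch of the general compilation idea (not the paper's specific encoding): nested belief atoms up to a bounded depth can be enumerated as ordinary propositional fluents, so that a classical planner can manipulate them like any other literal. The agents, the base fluent, and the naming scheme below are assumptions for illustration.

        from itertools import product

        AGENTS = ["a", "b"]                        # hypothetical agents
        BASE_FLUENTS = ["at(box,room1)"]           # hypothetical ontic fluent

        def belief_fluents(max_depth):
            # Enumerate belief atoms B(a1, B(a2, ... p ...)) up to the given depth,
            # as plain strings a classical planner can treat as ordinary propositions.
            fluents = list(BASE_FLUENTS)
            for depth in range(1, max_depth + 1):
                for agents in product(AGENTS, repeat=depth):
                    for p in BASE_FLUENTS:
                        atom = p
                        for ag in reversed(agents):
                            atom = "B({},{})".format(ag, atom)
                        fluents.append(atom)
            return fluents

        print(belief_fluents(2))
        # ['at(box,room1)', 'B(a,at(box,room1))', ..., 'B(b,B(b,at(box,room1)))']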

    Planning while Believing to Know

    Over the last few years, the concept of Artificial Intelligence (AI) has become essential in our daily life and in several working scenarios. Among the various branches of AI, automated planning and the study of multi-agent systems are central research fields. This thesis focuses on a combination of these two areas: a specialized kind of planning known as Multi-agent Epistemic Planning. This field of research is concerned with all those scenarios where agents, reasoning in the space of knowledge/beliefs, try to find a plan to reach a desirable state from a starting one. This requires agents able to reason about their own and others' knowledge/beliefs and, therefore, capable of performing epistemic reasoning. Being aware of the information flows and of others' states of mind is, in fact, a key aspect in several planning situations. That is why developing autonomous agents that can reason considering the perspectives of their peers is paramount to model a variety of real-world domains. The objective of our work is to formalize an environment where a complete characterization of the agents' knowledge/belief interactions and updates is possible. In particular, we achieved this goal by defining a new action-based language for Multi-agent Epistemic Planning and implementing epistemic planners based on it. These solvers, flexible enough to reason about various domains and different nuances of knowledge/belief update, can provide a solid base for further research on epistemic reasoning or for real-world applications. This dissertation also proposes the design of a more general epistemic planning architecture. This architecture, following well-known cognitive theories, tries to emulate some characteristics of the human decision-making process. In particular, we envisioned a system composed of several solving processes, each with its own trade-off between efficiency and correctness, arbitrated by a meta-cognitive module.

    Expectation-Aware Planning: A Unifying Framework for Synthesizing and Executing Self-Explaining Plans for Human-Aware Planning

    In this work, we present a new planning formalism called Expectation-Aware planning for decision making with humans in the loop, where the human's expectations about an agent may differ from the agent's own model. We show how this formulation allows agents not only to leverage existing strategies for handling model differences but also to exhibit novel behaviors generated through the combination of these different strategies. Our formulation also reveals a deep connection to existing approaches in epistemic planning. Specifically, we show how we can leverage classical planning compilations for epistemic planning to solve Expectation-Aware planning problems. To the best of our knowledge, the proposed formulation is the first complete solution to decision-making in the presence of diverging user expectations that is amenable to a classical planning compilation while successfully combining previous works on explanation and explicability. We empirically show how our approach provides a computational advantage over existing approximate approaches that unnecessarily search in the space of models while also failing to facilitate the full gamut of behaviors enabled by our framework.
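
    A minimal sketch of the underlying idea, under assumed representations rather than the paper's formalism: if both the agent's model and the human's expected model are kept as explicit action models, the object of interest is their difference, which the agent can either communicate (explanation) or plan around (explicable behavior). The actions and preconditions below are invented.

        # Hypothetical action models: action name -> set of precondition fluents.
        AGENT_MODEL = {"unlock_door": {"has_key"}, "open_door": {"door_unlocked"}}
        HUMAN_MODEL = {"open_door": {"door_unlocked", "has_permission"}}   # extra expectation

        def model_difference(agent_model, human_model):
            # Per-action difference in preconditions between the two models.
            diff = {}
            for action in set(agent_model) | set(human_model):
                a_pre = agent_model.get(action, set())
                h_pre = human_model.get(action, set())
                if a_pre != h_pre:
                    diff[action] = {"agent_only": a_pre - h_pre,
                                    "human_only": h_pre - a_pre}
            return diff

        print(model_difference(AGENT_MODEL, HUMAN_MODEL))
        # Explanation would communicate parts of this difference before acting;
        # explicable behaviour would instead plan using HUMAN_MODEL only.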

    A Gentle Introduction to Epistemic Planning: The DEL Approach

    Epistemic planning can be used for decision making in multi-agent situations with distributed knowledge and capabilities. Dynamic Epistemic Logic (DEL) has been shown to provide a very natural and expressive framework for epistemic planning. In this paper, we aim to give an accessible introduction to DEL-based epistemic planning. The paper starts with the most classical framework for planning, STRIPS, and then moves towards epistemic planning in a number of smaller steps, where each step is motivated by the need to be able to model more complex planning scenarios.
    Comment: In Proceedings M4M9 2017, arXiv:1703.0173
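
    Since DEL-based planning revolves around the product update of an epistemic model with an event model, the sketch below spells out that standard operation on a tiny made-up example (a public announcement over a two-world model). Representing models as plain Python dictionaries and the example itself are choices made here for illustration, not taken from the paper.

        def product_update(worlds, relation, valuation, events, ev_relation, precond):
            # Worlds of the updated model are pairs (w, e) with w satisfying e's
            # precondition; agent i relates (w, e) to (w2, e2) iff i related w to w2
            # and e to e2. Valuations are inherited from the static world.
            new_worlds = [(w, e) for w in worlds for e in events if precond[e](valuation[w])]
            new_relation = {ag: {((w, e), (w2, e2))
                                 for (w, e) in new_worlds for (w2, e2) in new_worlds
                                 if (w, w2) in relation[ag] and (e, e2) in ev_relation[ag]}
                            for ag in relation}
            new_valuation = {(w, e): valuation[w] for (w, e) in new_worlds}
            return new_worlds, new_relation, new_valuation

        # Two worlds: p holds in w1 but not in w2; agent 'a' cannot tell them apart.
        worlds = ["w1", "w2"]
        valuation = {"w1": {"p"}, "w2": set()}
        relation = {"a": {(u, v) for u in worlds for v in worlds}}
        # Public announcement of p: one event whose precondition is that p holds.
        events = ["announce_p"]
        ev_relation = {"a": {("announce_p", "announce_p")}}
        precond = {"announce_p": lambda val: "p" in val}
        print(product_update(worlds, relation, valuation, events, ev_relation, precond))
        # Only (w1, announce_p) survives: after the announcement agent a knows p.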