
    Cooperative Epistemic Multi-Agent Planning for Implicit Coordination

    Epistemic planning can be used for decision making in multi-agent situations with distributed knowledge and capabilities. Recently, Dynamic Epistemic Logic (DEL) has been shown to provide a very natural and expressive framework for epistemic planning. We extend the DEL-based epistemic planning framework to include perspective shifts, allowing us to define new notions of sequential and conditional planning with implicit coordination. With these, it is possible to solve planning tasks with joint goals in a decentralized manner without the agents having to negotiate about and commit to a joint policy at plan time. First we define the central planning notions and sketch the implementation of a planning system built on those notions. Afterwards we provide some case studies in order to evaluate the planner empirically and to show that the concept is useful for multi-agent systems in practice.
    Comment: In Proceedings M4M9 2017, arXiv:1703.0173
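
    For orientation, the DEL-based framework this abstract builds on casts a planning task roughly as follows; the notation is the standard one from the DEL planning literature and is not quoted from the paper itself.

        % Sketch of a DEL epistemic planning task (standard formulation, illustrative only).
        \[
          \Pi = \langle s_0, \mathcal{A}, \varphi_g \rangle
        \]
        % s_0: initial epistemic state (a pointed Kripke model),
        % \mathcal{A}: finite set of event models (epistemic actions),
        % \varphi_g: goal formula.
        % A sequential solution is a sequence a_1, \dots, a_n \in \mathcal{A} with
        \[
          s_0 \otimes a_1 \otimes \dots \otimes a_n \models \varphi_g,
        \]
        % where \otimes denotes the DEL product update.

    The perspective shifts described in the abstract let each agent evaluate such a task from another agent's epistemic state, which is what makes coordination implicit rather than negotiated.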

    A Gentle Introduction to Epistemic Planning: The DEL Approach

    Epistemic planning can be used for decision making in multi-agent situations with distributed knowledge and capabilities. Dynamic Epistemic Logic (DEL) has been shown to provide a very natural and expressive framework for epistemic planning. In this paper, we aim to give an accessible introduction to DEL-based epistemic planning. The paper starts with the most classical framework for planning, STRIPS, and then moves towards epistemic planning in a number of smaller steps, where each step is motivated by the need to be able to model more complex planning scenarios.
    Comment: In Proceedings M4M9 2017, arXiv:1703.0173
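
    As a reminder of that classical starting point, a STRIPS task can be written as below; this is generic textbook notation, not taken from the paper.

        % Generic STRIPS formulation (illustrative). A task is
        % \Pi = \langle P, A, s_0, \gamma \rangle with propositions P, actions A,
        % initial state s_0 \subseteq P and goal \gamma \subseteq P.
        % Each action a \in A is given by precondition, add and delete lists:
        \[
          a = \langle \mathit{pre}(a), \mathit{add}(a), \mathit{del}(a) \rangle,
          \qquad
          s \xrightarrow{a} \bigl(s \setminus \mathit{del}(a)\bigr) \cup \mathit{add}(a)
          \quad \text{if } \mathit{pre}(a) \subseteq s.
        \]

    The paper's incremental steps then replace the flat state s with an epistemic state and the add/delete actions with DEL event models.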

    The Hanabi Challenge: A New Frontier for AI Research

    From the early days of computing, games have been important testbeds for studying how well machines can do sophisticated decision making. In recent years, machine learning has made dramatic advances with artificial agents reaching superhuman performance in challenge domains like Go, Atari, and some variants of poker. As with their predecessors of chess, checkers, and backgammon, these game domains have driven research by providing sophisticated yet well-defined challenges for artificial intelligence practitioners. We continue this tradition by proposing the game of Hanabi as a new challenge domain with novel problems that arise from its combination of purely cooperative gameplay with two to five players and imperfect information. In particular, we argue that Hanabi elevates reasoning about the beliefs and intentions of other agents to the foreground. We believe developing novel techniques for such theory of mind reasoning will not only be crucial for success in Hanabi, but also in broader collaborative efforts, especially those with human partners. To facilitate future research, we introduce the open-source Hanabi Learning Environment, propose an experimental framework for the research community to evaluate algorithmic advances, and assess the performance of current state-of-the-art techniques.
    Comment: 32 pages, 5 figures, In Press (Artificial Intelligence)
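
    To make the testbed concrete: the open-source Hanabi Learning Environment ships a Gym-style Python wrapper, and a minimal random-play loop looks roughly like the sketch below. Names follow the rl_env module of the public repository and may differ across releases; this is an assumption-laden sketch, not code from the paper.

        # Minimal random-play loop for the Hanabi Learning Environment.
        # Assumes the public hanabi_learning_environment package and its rl_env wrapper;
        # exact names and signatures may vary between versions.
        import random

        from hanabi_learning_environment import rl_env

        env = rl_env.make(environment_name="Hanabi-Full", num_players=2)
        observations = env.reset()

        done = False
        while not done:
            current = observations["current_player"]
            # Each player only sees its own imperfect-information observation.
            legal_moves = observations["player_observations"][current]["legal_moves"]
            action = random.choice(legal_moves)  # e.g. {'action_type': 'PLAY', 'card_index': 0}
            observations, reward, done, _ = env.step(action)

        print("final reward:", reward)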

    Adaptation and Communication in Human-Robot Teaming to Handle Discrepancies in Agents' Beliefs about Plans

    When agents collaborate on a task, it is important that they have some shared mental model of the task routines -- the set of feasible plans towards achieving the goals. However, in reality, situations often arise in which such a shared mental model cannot be guaranteed, such as in ad-hoc teams where agents may follow different conventions or when contingent constraints arise that only some agents are aware of. Previous work on human-robot teaming has assumed that the team has a set of shared routines, which breaks down in these situations. In this work, we leverage epistemic logic to enable agents to understand the discrepancy in each other's beliefs about feasible plans and dynamically plan their actions to adapt or communicate to resolve the discrepancy. We propose a formalism that extends conditional doxastic logic to describe knowledge bases in order to explicitly represent agents' nested beliefs on the feasible plans and state of execution. We provide an online execution algorithm based on Monte Carlo Tree Search for the agent to plan its action, including communication actions to explain the feasibility of plans, announce intent, and ask questions. Finally, we evaluate the success rate and scalability of the algorithm and show that our agent is better equipped to work in teams without the guarantee of a shared mental model.
    Comment: 10 pages, Published at ICAPS 2023 (Main Track)
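
    For readers unfamiliar with the planner family referenced here, the skeleton below shows a generic single-agent UCT-style Monte Carlo Tree Search loop. It is only meant to illustrate the kind of online planner the abstract builds on; it is not the paper's algorithm, which additionally searches over communication actions and nested beliefs.

        # Generic Monte Carlo Tree Search (UCT) skeleton, illustrative only.
        import math
        import random


        class Node:
            def __init__(self, state, parent=None):
                self.state = state          # opaque domain state
                self.parent = parent
                self.children = {}          # action -> child Node
                self.visits = 0
                self.value = 0.0            # sum of rollout returns


        def uct_select(node, c=1.4):
            # Pick the child maximising the UCB1 score.
            return max(
                node.children.items(),
                key=lambda kv: kv[1].value / kv[1].visits
                + c * math.sqrt(math.log(node.visits) / kv[1].visits),
            )[1]


        def mcts(root_state, actions, step, rollout, is_terminal, iterations=1000):
            """step(state, action) -> next state; rollout(state) -> return estimate."""
            root = Node(root_state)
            for _ in range(iterations):
                node = root
                # 1. Selection: descend while the node is fully expanded.
                while not is_terminal(node.state) and len(node.children) == len(actions):
                    node = uct_select(node)
                # 2. Expansion: try one untried action.
                if not is_terminal(node.state):
                    a = random.choice([a for a in actions if a not in node.children])
                    node.children[a] = Node(step(node.state, a), parent=node)
                    node = node.children[a]
                # 3. Simulation: estimate the leaf's value.
                value = rollout(node.state)
                # 4. Backpropagation.
                while node is not None:
                    node.visits += 1
                    node.value += value
                    node = node.parent
            # Act greedily on visit counts at the root.
            return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

    In the paper's setting the "actions" would include physical steps as well as explanations, intent announcements, and questions, with rollouts evaluated against the agents' (possibly discrepant) beliefs about feasible plans.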

    Going one step further: towards cognitively enhanced problem-solving teaming agents

    Operating current advanced production systems, including Cyber-Physical Systems, often requires profound programming skills and configuration knowledge, creating a disconnect between human cognition and system operations. To address this, we suggest developing cognitive algorithms that can simulate and anticipate teaming partners' cognitive processes, enhancing and smoothing collaboration in problem-solving processes. Our proposed solution entails creating a cognitive system that minimizes human cognitive load and stress by developing models reflecting humans' individual problem-solving capabilities and potential cognitive states. Further, we aim to devise algorithms that simulate individual decision processes and virtual bargaining procedures that anticipate actions, adjusting the system's behavior towards efficient goal-oriented outcomes. Future steps include the development of benchmark sets tailored for specific use cases and human-system interactions. We plan to refine and test algorithms for detecting and inferring cognitive states of human partners. This process requires incorporating theoretical approaches and adapting existing algorithms to simulate and predict human cognitive processes of problem-solving with regard to cognitive states. The objective is to develop cognitive and computational models that enable production systems to become equal team members alongside humans in diverse scenarios, paving the way for more efficient, effective goal-oriented solutions.