
    Human Goal Recognition as Bayesian Inference: Investigating the Impact of Actions, Timing, and Goal Solvability

    Goal recognition is a fundamental cognitive process that enables individuals to infer intentions based on available cues. Current goal recognition algorithms often take only observed actions as input, but here we use a Bayesian framework to explore the role of actions, timing, and goal solvability in goal recognition. We analyze human responses to goal-recognition problems in the Sokoban domain, and find that actions are assigned most importance, but that timing and solvability also influence goal recognition in some cases, especially when actions are uninformative. We leverage these findings to develop a goal recognition model that matches human inferences more closely than do existing algorithms. Our work provides new insight into human goal recognition and takes a step towards more human-like AI models. Comment: Accepted by AAMAS 202
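    As an illustration of how such a Bayesian account can be operationalized, here is a minimal sketch in which the posterior over a discrete set of candidate goals multiplies a prior by separate action, timing, and solvability likelihoods. All goal names, priors, and likelihood values are illustrative assumptions, not the paper's model.

```python
# Minimal sketch: posterior over a discrete goal set combining action, timing,
# and solvability cues. All priors and likelihood values below are illustrative
# assumptions, not estimates from the paper.
def goal_posterior(goals, prior, action_lik, timing_lik, solvable):
    """Return P(goal | action, timing, solvability) for a discrete goal set."""
    unnorm = {}
    for g in goals:
        p = prior[g] * action_lik[g] * timing_lik[g]
        if not solvable[g]:
            p *= 1e-6          # an unsolvable goal keeps only negligible mass
        unnorm[g] = p
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

print(goal_posterior(
    goals=["left box", "right box"],
    prior={"left box": 0.5, "right box": 0.5},
    action_lik={"left box": 0.7, "right box": 0.3},   # the observed push fits "left box" better
    timing_lik={"left box": 0.4, "right box": 0.6},   # hesitation slightly favours "right box"
    solvable={"left box": True, "right box": True},
))
```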

    The Unfortunate-Flow Problem

    In the traditional maximum-flow problem, the goal is to transfer maximum flow in a network by directing, at each vertex in the network, incoming flow into outgoing edges. The problem is one of the most fundamental problems in theoretical computer science, with applications in numerous domains. The fact that a maximum-flow algorithm directs the flow at all vertices of the network corresponds to a setting in which the authority has control over all vertices. Many applications of the maximum-flow problem, however, involve an adversarial setting in which the authority does not have such control. We introduce and study the unfortunate-flow problem, which asks for the flow that is guaranteed to reach the target when the edges that leave the source are saturated, yet the most unfortunate routing decisions are taken at the vertices. When the incoming flow to a vertex is greater than its outgoing capacity, flow is lost. The problem models evacuation scenarios where traffic is stuck due to jams at junctions, and communication networks where packets are dropped at overloaded routers. We study the theoretical properties of unfortunate flows, show that the unfortunate-flow problem is co-NP-complete, and point to polynomial fragments. We also introduce and study interesting variants of the problem: integral unfortunate flow, where the flow along edges must be integral; controlled unfortunate flow, where the edges from the source need not be saturated and may be controlled; and no-loss controlled unfortunate flow, where the controlled flow must not be lost.
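    To make the loss rule concrete, the following toy sketch brute-forces the adversary's routing choice on a four-vertex network (source s feeds vertex v with a saturated edge of capacity 10; v has edges of capacity 6 to a and to b; a reaches the target t over an edge of capacity 6, b over an edge of capacity 2). The network, capacities, and grid search are assumptions for illustration, not the paper's co-NP-complete general construction.

```python
# Toy unfortunate-flow illustration on s -> v -> {a, b} -> t. The source edge
# is saturated at 10, the adversary chooses how v splits its incoming flow
# between a and b, and flow exceeding an edge's capacity is lost. We brute-force
# the adversary's split on a grid; an illustrative sketch only.
def reaches_target(split, s_to_v=10.0, v_out=(6.0, 6.0), a_to_t=6.0, b_to_t=2.0):
    flow_to_a = min(s_to_v * split, v_out[0])        # adversary's share toward a, capped by edge capacity
    flow_to_b = min(s_to_v * (1 - split), v_out[1])  # remainder toward b; excess over capacity is lost at v
    return min(flow_to_a, a_to_t) + min(flow_to_b, b_to_t)

worst = min(reaches_target(k / 100) for k in range(101))
print(f"flow guaranteed to reach the target: {worst:.2f}")
```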

    Flow Games

    In the traditional maximum-flow problem, the goal is to transfer maximum flow in a network by directing, at each vertex in the network, incoming flow into outgoing edges. While the problem has been used extensively to optimize the performance of networks in numerous application areas, it corresponds to a setting in which the authority has control over all vertices of the network. Today's computing environment involves parties that should be considered adversarial. We introduce and study flow games, which capture settings in which the authority can control only part of the vertices. In these games, the vertices are partitioned between two players: the authority and the environment. While the authority aims at maximizing the flow, the environment need not cooperate. We argue that flow games capture many modern settings, such as partially controlled pipe or road systems and hybrid software-defined communication networks. We show that the problem of finding the maximal flow, as well as an optimal strategy for the authority, in an acyclic flow game is $\Sigma_2^P$-complete, and is already $\Sigma_2^P$-hard to approximate. We study variants of the game: a restriction to strategies that ensure no loss of flow, an extension to strategies that allow non-integral flows, which we prove to be stronger, and a dynamic setting in which a strategy for a vertex is chosen only once flow reaches the vertex. We discuss additional variants and their applications, and point to several interesting open problems.
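    The max-min structure of such a game can be illustrated on a tiny acyclic network in which the authority controls one vertex and the environment controls another; the sketch below brute-forces both players' routing splits on a grid. The network, capacities, and grid discretization are assumptions for illustration only, not the paper's $\Sigma_2^P$-complete procedure.

```python
# Toy flow-game sketch on a tiny acyclic network:
#   s -> u (cap 8, always saturated), u -> a (cap 8), u -> b (cap 8),
#   a -> t (cap 6), a -> d (cap 4, a dead end), b -> t (cap 3).
# The authority controls the routing at u, the environment controls a; flow
# routed into the dead end d, or exceeding an edge capacity, never reaches t.
# The nested brute force mirrors the max-min structure of the game; it is an
# illustrative sketch, not the paper's Sigma_2^P-complete procedure.
SOURCE_FLOW = 8.0

def worst_flow_via_a(x):
    """Least flow the environment lets through vertex a when x arrives there."""
    x = min(x, 8.0)                                   # capacity of u -> a
    candidates = []
    for k in range(101):                              # environment's split at a
        to_dead_end = min(x, 4.0) * (k / 100)         # a -> d, lost
        to_target = min(x - to_dead_end, 6.0)         # remainder must use a -> t
        candidates.append(to_target)
    return min(candidates)

def guaranteed_flow(x_to_a):
    """Flow the authority guarantees at t when it sends x_to_a toward a."""
    via_b = min(SOURCE_FLOW - x_to_a, 3.0)            # u -> b -> t bottleneck
    return via_b + worst_flow_via_a(x_to_a)

best = max(guaranteed_flow(SOURCE_FLOW * k / 100) for k in range(101))
print(f"flow the authority can guarantee: {best:.2f}")
```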

    Optimizing a Model-Agnostic Measure of Graph Counterdeceptiveness via Reattachment

    Recognition of an adversary's objective is a core problem in physical security and cyber defense. Prior work on target recognition focuses on developing optimal inference strategies given the adversary's operating environment. However, the success of such strategies depends significantly on features of the environment. We consider the problem of optimal counterdeceptive environment design: constructing an environment which promotes early recognition of an adversary's objective, given operational constraints. Interpreting counterdeception as a question of graph design with a bound on total edge length, we propose a measure of graph counterdeceptiveness and a novel heuristic algorithm for maximizing counterdeceptiveness based on iterative reattachment of trees. We benchmark the performance of this algorithm on synthetic networks as well as a graph inspired by a real-world high-security environment, verifying that the proposed algorithm is computationally feasible and yields meaningful network designs. Comment: 15 pages, 11 figures
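    A rough, hypothetical sketch of what reattachment-style local search could look like is given below. The counterdeceptiveness proxy used here (goal locations should share as little of their approach path from the entry as possible, so an observer can tell them apart early) and the Euclidean edge-length budget are stand-in assumptions, not the measure or constraints defined in the paper.

```python
# Hypothetical reattachment-style local search for counterdeceptive design on a
# tree rooted at the entry. Score and budget are assumed stand-ins.
from math import dist
from itertools import combinations

def path_to_root(parent, node):
    """Nodes from `node` up to the entry, inclusive, entry first."""
    path = [node]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path[::-1]

def shared_prefix(parent, g1, g2):
    """Number of edges the two goals' approach paths share."""
    n = 0
    for a, b in zip(path_to_root(parent, g1), path_to_root(parent, g2)):
        if a != b:
            break
        n += 1
    return n - 1

def counterdeceptiveness(parent, goals):
    # Higher is better: penalise every edge two goals' approach paths share.
    return -sum(shared_prefix(parent, g1, g2) for g1, g2 in combinations(goals, 2))

def total_length(parent, pos):
    return sum(dist(pos[c], pos[p]) for c, p in parent.items() if p is not None)

def reattach_search(parent, pos, goals, budget):
    """Greedily reattach goal leaves while the length budget is respected."""
    improved = True
    while improved:
        improved = False
        for g in goals:
            for q in parent:
                if q == g or q == parent[g] or g in path_to_root(parent, q):
                    continue                      # avoid no-ops and cycles
                old = parent[g]
                parent[g] = q
                if (total_length(parent, pos) <= budget
                        and counterdeceptiveness(parent, goals)
                        > counterdeceptiveness({**parent, g: old}, goals)):
                    improved = True               # keep the move
                else:
                    parent[g] = old
    return parent

# Entry "s", a corridor "c", and two goal rooms; coordinates are toy data.
pos = {"s": (0, 0), "c": (1, 0), "g1": (2, 1), "g2": (2, -1)}
parent = {"s": None, "c": "s", "g1": "c", "g2": "c"}  # both goals behind the same corridor
print(reattach_search(parent, pos, ["g1", "g2"], budget=5.0))
```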

    Landmark-based approaches for goal recognition as planning

    This article is a revised and extended version of two papers published at AAAI 2017 (Pereira et al., 2017b) and ECAI 2016 (Pereira and Meneguzzi, 2016). We thank the anonymous reviewers who helped improve the research in this article. The authors thank Shirin Sohrabi for discussing the way in which the algorithms of Sohrabi et al. (2016) should be configured, and Yolanda Escudero-Martín for providing code for the approach of E-Martín et al. (2015) and engaging with us. We also thank Miquel Ramírez and Mor Vered for various discussions, and André Grahl Pereira for a discussion of the properties of our algorithm. Felipe thanks CNPq for partial financial support under its PQ fellowship, grant number 305969/2016-1. Peer-reviewed postprint.

    Heuristic Online Goal Recognition in Continuous Domains


    A Decentralized Partially Observable Markov Decision Model with Action Duration for Goal Recognition in Real Time Strategy Games

    Multiagent goal recognition is a difficult yet important problem in many real-time strategy games and simulation systems. Traditional modeling methods either demand detailed domain knowledge of the agents and training datasets for policy estimation, or lack a clear definition of action duration. To address these problems, we propose a novel Dec-POMDM-T model, which combines the classic Dec-POMDP with an observation model for the recognizer, a joint goal with its termination indicator, and action-duration variables together with action-termination variables. In this paper, a model-free algorithm named cooperative colearning based on Sarsa is used. Considering that Dec-POMDM-T usually faces multiagent goal-recognition problems with various kinds of noise, partially missing data, and unknown action durations, the paper exploits a sequential importance sampling particle filter (SIS PF) with resampling for inference under the dynamic Bayesian network structure of Dec-POMDM-T. In the experiments, a modified predator-prey scenario is adopted to study the multiagent joint goal recognition problem, i.e., recognizing the joint target shared among cooperative predators. Experimental results show that (a) Dec-POMDM-T works effectively in multiagent goal recognition and adapts well to dynamically changing goals within an agent group; and (b) Dec-POMDM-T outperforms traditional Dec-MDP-based methods in terms of precision, recall, and F-measure.
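    As a rough illustration of the inference step, the sketch below runs a sequential importance sampling particle filter with resampling over a discrete set of joint goals. The goal-switching proposal and the observation likelihood (joint actions that move the predators toward the hypothesized prey are more probable) are assumed stand-ins for the Dec-POMDM-T observation model, not the paper's definitions.

```python
# Sketch: SIS particle filter with resampling over joint goals. The likelihood
# and goal-switching probability are illustrative assumptions.
import random

def goal_likelihood(observation, goal):
    """Assumed likelihood of the observed joint action under a hypothesized goal."""
    # observation[goal]: fraction of predators whose last move approached `goal`
    return 0.1 + 0.8 * observation[goal]

def particle_filter(goals, observations, n_particles=500, switch_prob=0.05, ess_frac=0.5):
    particles = [random.choice(goals) for _ in range(n_particles)]
    weights = [1.0 / n_particles] * n_particles
    for obs in observations:
        # proposal: a particle's hypothesized joint goal may switch (goal dynamics)
        particles = [random.choice(goals) if random.random() < switch_prob else g
                     for g in particles]
        # importance weighting by the (assumed) observation likelihood
        weights = [w * goal_likelihood(obs, g) for w, g in zip(weights, particles)]
        total = sum(weights)
        weights = [w / total for w in weights]
        # resample when the effective sample size collapses
        ess = 1.0 / sum(w * w for w in weights)
        if ess < ess_frac * n_particles:
            particles = random.choices(particles, weights=weights, k=n_particles)
            weights = [1.0 / n_particles] * n_particles
    posterior = {g: 0.0 for g in goals}
    for g, w in zip(particles, weights):
        posterior[g] += w
    return posterior

goals = ["prey_1", "prey_2"]
# each observation: per-goal fraction of predators whose last joint action approached that prey
obs_seq = [{"prey_1": 0.8, "prey_2": 0.3}, {"prey_1": 0.9, "prey_2": 0.2}]
print(particle_filter(goals, obs_seq))
```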