10 research outputs found

    A Fast Goal Recognition Technique Based on Interaction Estimates

    Goal Recognition is the task of inferring an actor's goals given some or all of the actor's observed actions. There is considerable interest in Goal Recognition for use in intelligent personal assistants, smart environments, intelligent tutoring systems, and monitoring users' needs. In much of this work, the actor's observed actions are compared against a generated library of plans. Recent work by Ramirez and Geffner makes use of AI planning to determine how closely a sequence of observed actions matches plans for each possible goal. For each goal, this is done by comparing the cost of a plan for that goal with the cost of a plan for that goal that includes the observed actions. This approach yields useful rankings, but is impractical for real-time goal recognition in large domains because of the computational expense of constructing plans for each possible goal. In this paper, we introduce an approach that propagates cost and interaction information in a plan graph, and uses this information to estimate goal probabilities. We show that this approach is much faster, but still yields high-quality results.
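The cost-comparison step described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the goal names, the plan costs, and the sigmoid (Boltzmann-style) likelihood are all assumptions; in practice the two costs per goal come from a planner.

```python
import math

def goal_posterior(costs, beta=1.0, prior=None):
    """Estimate P(G | O) from plan costs, in the style of Ramirez and Geffner.

    costs: dict mapping goal -> (cost_with_obs, cost_without_obs), where
    cost_with_obs is the cheapest plan for the goal that embeds the observed
    actions, and cost_without_obs is the cheapest plan that ignores them.
    """
    goals = list(costs)
    if prior is None:
        prior = {g: 1.0 / len(goals) for g in goals}
    likelihood = {}
    for g, (c_obs, c_free) in costs.items():
        # If including the observations costs little extra, the goal
        # explains them well; a common choice is a sigmoid in the delta.
        delta = c_obs - c_free
        likelihood[g] = 1.0 / (1.0 + math.exp(beta * delta))
    unnorm = {g: likelihood[g] * prior[g] for g in goals}
    z = sum(unnorm.values())
    return {g: unnorm[g] / z for g in goals}

# Hypothetical numbers: the "kitchen" goal pays nothing extra to include
# the observations, the "office" goal pays a lot, so kitchen ranks higher.
post = goal_posterior({"kitchen": (10, 10), "office": (18, 12)})
```

The expense the abstract points at lives in producing those cost pairs: two planner calls per candidate goal, which is what the plan-graph estimates replace.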

    An LP-Based Approach for Goal Recognition as Planning

    Goal recognition aims to recognize the set of candidate goals that are compatible with the observed behavior of an agent. In this paper, we develop a method based on the operator-counting framework that efficiently computes solutions that satisfy the observations and uses the information generated to solve goal recognition tasks. Our method reasons explicitly about both partial and noisy observations: estimating uncertainty for the former, and satisfying observations given the unreliability of the sensor for the latter. We evaluate our approach empirically over a large data set, analyzing how each of its components can impact the quality of the solutions. In general, our approach is superior to previous methods in terms of agreement ratio, accuracy, and spread. Finally, our approach paves the way for new research on combinatorial optimization to solve goal recognition tasks. (Comment: 8 pages, 4 tables, 3 figures. Published in AAAI 2021.)
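The operator-counting idea can be illustrated on a toy task. Everything below is an assumption for illustration: a one-dimensional domain with "right"/"left" operators, a net-change constraint per goal, and lower bounds from observed actions. A real implementation hands these linear constraints to an LP solver; here tiny counts are brute-forced to keep the sketch dependency-free.

```python
from itertools import product

def min_counts(goal_cell, observed, max_count=6):
    """Cheapest operator counts that reach goal_cell and cover the observations.

    An agent starts at cell 0; "right" moves +1, "left" moves -1.
    Constraints mirror the operator-counting framework:
      (a) net change:   count(right) - count(left) == goal_cell
      (b) observations: each operator's count >= times it was observed
    """
    obs_lb = {"right": observed.count("right"), "left": observed.count("left")}
    best = None
    for r, l in product(range(max_count + 1), repeat=2):
        if r - l != goal_cell:                          # net-change constraint
            continue
        if r < obs_lb["right"] or l < obs_lb["left"]:   # observation constraints
            continue
        cost = r + l                                    # unit-cost operators
        if best is None or cost < best:
            best = cost
    return best

# Observing one "left" step makes nearby goals cheaper to explain than far ones.
obs = ["left"]
costs = {g: min_counts(g, obs) for g in [0, 2, 3]}
```

The per-goal minima then feed the recognition step: goals whose constrained cost is close to their unconstrained optimum are the ones most compatible with the observations.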

    A Unified Framework for Planning in Adversarial and Cooperative Environments

    Users of AI systems may rely upon them to produce plans for achieving desired objectives. Such AI systems should be able to compute obfuscated plans whose execution in adversarial situations protects privacy, as well as legible plans which are easy for team members to understand in cooperative situations. We develop a unified framework that addresses these dual problems by computing plans with a desired level of comprehensibility from the point of view of a partially informed observer. For adversarial settings, our approach produces obfuscated plans with observations that are consistent with at least k goals from a set of decoy goals. By slightly varying our framework, we present an approach for goal legibility in cooperative settings which produces plans that achieve a goal while being consistent with at most j goals from a set of confounding goals. In addition, we show how the observability of the observer can be controlled to either obfuscate or clarify the next actions in a plan when the goal is known to the observer. We present theoretical results on the complexity analysis of our problems. We demonstrate the execution of obfuscated and legible plans in a cooking domain using a physical Fetch robot. We also provide an empirical evaluation to show the feasibility and usefulness of our approaches using IPC domains. (Comment: 8 pages, 2 figures.)
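The "at least k" and "at most j" conditions can be stated as simple checks over goal-consistency. The sketch below is a hypothetical stand-in, not the paper's formulation: the consistency test, the cooking-style goal names, and the per-goal action vocabularies are all invented for illustration.

```python
def consistent_goals(observations, goals, reachable):
    """Goals whose behaviour is consistent with the observations so far.

    `reachable(obs, goal)` is a domain-specific test supplied by the caller.
    """
    return {g for g in goals if reachable(observations, g)}

def is_k_obfuscated(observations, decoy_goals, reachable, k):
    # Adversarial setting: the observation sequence should remain
    # consistent with at least k goals from the decoy set.
    return len(consistent_goals(observations, decoy_goals, reachable)) >= k

def is_j_legible(observations, confounding_goals, reachable, j):
    # Cooperative setting: consistent with at most j confounding goals.
    return len(consistent_goals(observations, confounding_goals, reachable)) <= j

# Toy stand-in for consistency: a goal is consistent if every observed
# action appears in that goal's (hypothetical) action vocabulary.
vocab = {
    "bake":  {"get-flour", "get-egg", "use-oven"},
    "fry":   {"get-egg", "use-pan"},
    "toast": {"get-bread", "use-toaster"},
}
reach = lambda obs, g: all(a in vocab[g] for a in obs)

ambiguous = is_k_obfuscated(["get-egg"], vocab, reach, k=2)       # bake and fry
legible = is_j_legible(["use-toaster"], {"bake", "fry"}, reach, j=0)
```

A planner in this framework would search for plans whose observation traces pass the relevant check at every prefix, rather than merely verifying a finished plan as this sketch does.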

    Foundations of Human-Aware Planning -- A Tale of Three Models

    A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e. the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model in the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter is in addition to traditional notions of human-aware planning, which typically use the human task model alone, and thus enables a new suite of capabilities of a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop. (Doctoral Dissertation, Computer Science, 201)

    Goal recognition and deception in path-planning

    This thesis argues that investigation of goal recognition and deception in the much-studied and well-understood context of path-planning reveals nuances to both problems that have previously gone unnoticed. Contemporary goal recognition systems rely on examination of multiple observations to calculate a probability distribution across goals. The first part of this thesis demonstrates that a distribution with identical rankings to the current state-of-the-art can be achieved without any observations apart from a known starting point (such as a door or gate) and where the agent is now. It also presents a closed formula to calculate a radius around any goal of interest within which that goal is guaranteed to be the most probable, without having to calculate any actual probability values. In terms of deception, traditionally there are two strategies: dissimulation (hiding the true) and simulation (showing the false). The second part of this thesis shows that current state-of-the-art goal recognition systems do not cope well with dissimulation that does its work by ‘dazzling’ (i.e., obfuscating with hugely suboptimal plans). It presents an alternative, self-modulating formula that modifies its output when it encounters suboptimality, seeming to ‘know that it does not know’ instead of ‘keep changing its mind’. Deception is often regarded as a ‘yes, no’ proposition (either the target is deceived or they are not). Furthermore, intuitively, deceptive path-planning involves suboptimality and must, therefore, be expensive. This thesis, however, presents a model of deception for path-planning domains within which it is possible (a) to rank paths by their potential to deceive and (b) to generate deceptive paths that are ‘optimally deceptive’ (i.e., deceptive to the maximum extent at the lowest cost).