
    Foundations of Human-Aware Planning -- A Tale of Three Models

    Abstract: A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e. the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model in the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter is in addition to traditional notions of human-aware planning which typically use the human task model alone and thus enables a new suite of capabilities of a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop.
    Doctoral Dissertation, Computer Science, 201

    Goal recognition and deception in path-planning

    This thesis argues that investigation of goal recognition and deception in the much studied and well-understood context of path-planning reveals nuances to both problems that have previously gone unnoticed. Contemporary goal recognition systems rely on examination of multiple observations to calculate a probability distribution across goals. The first part of this thesis demonstrates that a distribution with identical rankings to current state-of-the-art can be achieved without any observations apart from a known starting point (such as a door or gate) and where the agent is now. It also presents a closed formula to calculate a radius around any goal of interest within which that goal is guaranteed to be the most probable, without having to calculate any actual probability values. In terms of deception, traditionally there are two strategies: dissimulation (hiding the true) and simulation (showing the false). The second part of this thesis shows that current state-of-the-art goal recognition systems do not cope well with dissimulation that does its work by ‘dazzling’ (i.e., obfuscating with hugely suboptimal plans). It presents an alternative, self-modulating formula that modifies its output when it encounters suboptimality, seeming to ‘know that it does not know’ instead of ‘keep changing its mind’. Deception is often regarded as a ‘yes, no’ proposition (either the target is deceived or they are not). Furthermore, intuitively, deceptive path-planning involves suboptimality and must, therefore, be expensive. This thesis, however, presents a model of deception for path-planning domains within which it is possible (a) to rank paths by their potential to deceive and (b) to generate deceptive paths that are ‘optimally deceptive’ (i.e., deceptive to the maximum extent at the lowest cost).
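    The abstract's claim -- that goals can be ranked using only a known starting point and the agent's current position -- can be illustrated with a small sketch. The details below are assumptions, not taken from the thesis: goals are ranked by a cost-difference measure Δ(g) = cost(start→now) + cost(now→g) − cost(start→g) (the extra cost the observed position implies if g were the goal, smaller meaning more probable), and optimal path costs are Manhattan distances on an obstacle-free grid.

    ```python
    # Hypothetical sketch of single-observation goal ranking in path-planning.
    # Assumptions (not stated in the abstract): goals are ranked by the cost
    # difference Δ(g) = cost(start, now) + cost(now, g) - cost(start, g), and
    # optimal costs are Manhattan distances on an obstacle-free grid.

    def manhattan(a, b):
        """Optimal path cost between two cells on an obstacle-free grid."""
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def rank_goals(start, now, goals):
        """Rank candidate goals, most probable first, using only the known
        starting point and the agent's current position (one observation)."""
        def cost_difference(g):
            # Extra cost incurred so far if g were the true goal; a goal that
            # the agent is still heading toward optimally has Δ(g) = 0.
            return manhattan(start, now) + manhattan(now, g) - manhattan(start, g)
        return sorted(goals, key=cost_difference)

    if __name__ == "__main__":
        start = (0, 0)        # known starting point (e.g. a door or gate)
        now = (4, 1)          # the single observation: where the agent is now
        goals = [(5, 0), (0, 5), (5, 5)]
        print(rank_goals(start, now, goals))   # most probable goal first
    ```

    On this example the agent at (4, 1) lies on an optimal path to (5, 5), so that goal ranks first; (0, 5) would imply a large detour and ranks last.
    
    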