5 research outputs found

    Towards Providing Explanations for AI Planner Decisions

    In order to engender trust in AI, humans must understand what an AI system is trying to achieve, and why. To meet this need, the underlying AI process must produce justifications and explanations that are both transparent and comprehensible to the user. AI Planning is well placed to address this challenge. In this paper we present a methodology for providing initial explanations of the decisions made by the planner. Explanations are created by allowing the user to suggest alternative actions in plans and then comparing the resulting plans with the one found by the planner. The methodology is implemented in the new XAI-Plan framework.
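    A minimal sketch of the contrastive loop described in this abstract, under assumed names (Plan, replan, and explain_alternative are illustrative, not the actual XAI-Plan API): the user's suggested action is enforced at a given step, the planner is re-run, and the resulting plan is compared with the original by cost.

```python
# Sketch only: names and structures are assumptions, not the XAI-Plan implementation.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Plan:
    actions: List[str]
    cost: float


def explain_alternative(
    original: Plan,
    step: int,
    alternative_action: str,
    replan: Callable[[List[str]], Optional[Plan]],
) -> str:
    """Answer "why this action rather than the user's suggestion?" by replanning."""
    # Keep the plan prefix and force the user's suggested action at the given step.
    prefix = original.actions[:step] + [alternative_action]
    contrastive = replan(prefix)  # planner constrained to start with this prefix
    if contrastive is None:
        return f"No valid plan exists when '{alternative_action}' is taken at step {step}."
    if contrastive.cost > original.cost:
        return (f"Taking '{alternative_action}' at step {step} yields a plan of cost "
                f"{contrastive.cost:.1f}, worse than the original {original.cost:.1f}.")
    return (f"Taking '{alternative_action}' at step {step} is no worse "
            f"({contrastive.cost:.1f} vs {original.cost:.1f}); the planner's choice was not forced.")
```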

    "Why didn't you allocate this task to them?" Negotiation-Aware Task Allocation and Contrastive Explanation Generation

    Task allocation is an important problem in multi-agent systems. It becomes more challenging when the team members are humans with imperfect knowledge of their teammates' costs and the overall performance metric. While distributed task-allocation methods let the team members engage in iterative dialog to reach a consensus, the process can take a considerable amount of time and communication. On the other hand, a centralized method that simply outputs an allocation may leave human team members discontented because, with their imperfect knowledge and limited computational capabilities, they perceive the allocation to be unfair. To address these challenges, we propose a centralized Artificial Intelligence Task Allocation (AITA) method that simulates a negotiation and produces a negotiation-aware task allocation that is fair. If a team member is unhappy with the proposed allocation, we allow them to question it using a counterfactual. Using parts of the simulated negotiation, we provide contrastive explanations that reveal only the minimum information about others' costs needed to refute their foil. With human studies, we show that (1) the allocation proposed by our method does indeed appear fair to the majority, and (2) when a counterfactual is raised, the explanations generated are easy to comprehend and convincing. Finally, we empirically study the effect of different kinds of incompleteness on explanation length and find that underestimating a teammate's costs often increases it.
    Comment: The first two authors are equal contributors.
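    To make the contrastive-explanation idea concrete, here is an illustrative sketch under assumed data structures (Allocation, CostTable, and refute_foil are hypothetical names; this is not the AITA negotiation simulation): a member's counterfactual allocation, the foil, is refuted by revealing only the cost entries on which it differs from the proposal.

```python
# Sketch only: structures and the refutation rule are assumptions for exposition,
# not the algorithm from the paper.
from typing import Dict, Tuple

Allocation = Dict[str, str]               # task -> assigned agent
CostTable = Dict[Tuple[str, str], float]  # (agent, task) -> that agent's cost


def refute_foil(proposed: Allocation, foil: Allocation, costs: CostTable) -> str:
    """Compare the foil with the proposal on the tasks where they differ,
    revealing only the cost entries needed to show the foil is costlier.
    Assumes the foil assigns every task that the proposal does."""
    changed = [t for t in proposed if foil[t] != proposed[t]]
    cost_proposed = sum(costs[(proposed[t], t)] for t in changed)
    cost_foil = sum(costs[(foil[t], t)] for t in changed)
    if cost_foil <= cost_proposed:
        return "The foil is not more costly on the differing tasks; no refutation."
    revealed = {(foil[t], t): costs[(foil[t], t)] for t in changed}
    return (f"Reassigning tasks {changed} raises cost from {cost_proposed:.1f} "
            f"to {cost_foil:.1f}; cost entries revealed: {revealed}")
```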

    Foundations of Human-Aware Planning -- A Tale of Three Models

    A critical challenge in the design of AI systems that operate with humans in the loop is to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e. the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model into the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter goes beyond traditional notions of human-aware planning, which typically use the human task model alone, and thus enables a new suite of capabilities for a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop.
    Doctoral Dissertation, Computer Science, 201
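    A rough structural sketch of the three models discussed in this abstract (all names hypothetical; the dissertation's model-reconciliation machinery is not reproduced): the agent plans with its own model, envisages the human's role via the human task model, and uses the human mental model to decide between explaining its plan and behaving explicably.

```python
# Toy sketch of the three-model setup. Names and the explain-vs-explicable
# decision rule are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class Model:
    name: str
    plan_for: Callable[[str], List[str]]  # goal -> plan expected under this model


def act_with_human_in_the_loop(
    goal: str,
    agent_model: Model,         # the agent's own planning model
    human_task_model: Model,    # what the human is expected to contribute to joint action
    human_mental_model: Model,  # what the human believes about the agent
    explanations_welcome: bool,
) -> Dict[str, Optional[object]]:
    robot_plan = agent_model.plan_for(goal)
    expected_plan = human_mental_model.plan_for(goal)
    human_role = human_task_model.plan_for(goal)

    if robot_plan == expected_plan:
        # The human anticipates this behavior; no explanation needed.
        return {"plan": robot_plan, "human_role": human_role, "explanation": None}
    if explanations_welcome:
        # Explain: communicate the model difference so the plan makes sense to the human.
        msg = f"My model differs from yours; under it, {robot_plan} achieves '{goal}' best."
        return {"plan": robot_plan, "human_role": human_role, "explanation": msg}
    # Otherwise behave explicably: conform to the plan the human expects.
    return {"plan": expected_plan, "human_role": human_role, "explanation": None}
```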