Explainable Planning
As AI is increasingly being adopted into application solutions, the challenge
of supporting interaction with humans is becoming more apparent. Partly this is
to support integrated working styles, in which humans and intelligent systems
cooperate in problem-solving, but also it is a necessary step in the process of
building trust as humans migrate greater responsibility to such systems. The
challenge is to find effective ways to communicate the foundations of AI-driven
behaviour, when the algorithms that drive it are far from transparent to
humans. In this paper we consider the opportunities that arise in AI planning,
exploiting the model-based representations that form a familiar and common
basis for communication with users, while acknowledging the gap between
planning algorithms and human problem-solving.
Comment: Presented at the IJCAI-17 workshop on Explainable AI (http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai/), Melbourne, August 2017.
Robot Mindreading and the Problem of Trust
This paper raises three questions regarding the attribution of beliefs, desires, and intentions to robots. The first is whether humans in fact engage in robot mindreading. If they do, a second question arises: does robot mindreading foster trust towards robots? Both of these questions are empirical, and I show that the available evidence is insufficient to answer them. If we assume that the answer to both questions is affirmative, a third and more important question arises: should developers and engineers promote robot mindreading in view of their stated goal of enhancing transparency? My worry is that by attempting to make robots more mind-readable, they are abandoning the project of understanding automatic decision processes. Features that enhance mind-readability are prone to make the factors that determine automatic decisions even more opaque than they already are, and current strategies to eliminate opacity do not enhance mind-readability. The last part of the paper discusses different ways to analyze this apparent trade-off and suggests that a possible solution must adopt tolerable degrees of opacity that depend on pragmatic factors connected to the level of trust required for the intended uses of the robot.