Investigating the Intelligibility of a Computer Vision System for Blind Users
Computer vision systems to help blind users are becoming increasingly common, yet often these systems are not intelligible. Our work investigates the intelligibility of a wearable computer vision system that helps blind users locate and identify people in their vicinity. Providing a continuous stream of information, this system allows us to explore intelligibility through interaction and instructions, going beyond studies of intelligibility that focus on explaining a decision a computer vision system might make. In a study with 13 blind users, we explored whether varying instructions (either basic or enhanced) about how the system worked would change blind users' experience of the system. We found that offering a more detailed set of instructions affected neither how successfully users used the system nor their perceived workload. We did, however, find evidence of significant differences in what they knew about the system, and they employed different, and potentially more effective, use strategies. Our findings have important implications for researchers and designers of computer vision systems for blind users, as well as more general implications for understanding what it means to make interactive computer vision systems intelligible.
Too much, too little, or just right? Ways explanations impact end users' mental models
Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly "debug" an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, focusing especially on how the soundness and completeness of the explanations impacts the fidelity of end users' mental models. Our findings suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations. We also found that oversimplification, as in many commercial agents, can be a problem: when soundness was very low, participants experienced more mental demand and lost trust in the explanations, thereby reducing the likelihood that users will pay attention to such explanations at all.
ExSS 2018: Workshop on explainable smart systems
Smart systems that apply complex reasoning to make decisions and plan behavior are often difficult for users to understand. While research to make systems more explainable, and therefore more intelligible and transparent, is gaining pace, there are numerous issues and problems regarding these systems that demand further attention. The goal of this workshop is to bring academia and industry together to address these issues. The workshop includes a keynote, poster panels, and group activities, towards developing concrete approaches to handling challenges related to the design, development, and evaluation of explainable smart systems.
Generating Explanations of Robot Policies in Continuous State Spaces
Transparency in HRI describes the practice of making the current state of a robot or intelligent agent understandable to a human user. Applying transparency mechanisms to robots improves the quality of interaction as well as the user experience. Explanations are an effective way to make a robot's decision making transparent. We introduce a framework that uses natural language labels attached to regions of the robot's continuous state space to automatically generate local explanations of the robot's policy. In a pilot study, we investigated how the generated explanations helped users to understand and reproduce a robot policy in a debugging scenario.
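To make the region-labeling idea above concrete, here is a minimal Python sketch of one way such local explanations could be generated. This is not the authors' implementation: the class, the axis-aligned 2D regions, and the sentence template are all illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class LabeledRegion:
        # An axis-aligned box in a 2D continuous state space, tagged with a
        # natural-language label (e.g. "near the charging dock").
        label: str
        low: tuple   # lower corner (x, y)
        high: tuple  # upper corner (x, y)

        def contains(self, state):
            # True if the state lies inside the box on every dimension.
            return all(lo <= s <= hi for s, lo, hi in zip(state, self.low, self.high))

    def explain(state, action, regions):
        # Collect the labels of every region containing the current state,
        # then fill a simple template to produce a local, state-specific explanation.
        labels = [r.label for r in regions if r.contains(state)]
        where = " and ".join(labels) if labels else "in an unlabeled area"
        return f"I chose to {action} because I am {where}."

    # Usage: a toy 2D state space with two overlapping labeled regions.
    regions = [
        LabeledRegion("near the charging dock", (0.0, 0.0), (1.0, 1.0)),
        LabeledRegion("in the hallway", (0.5, 0.0), (3.0, 0.5)),
    ]
    print(explain((0.7, 0.3), "slow down", regions))
    # -> I chose to slow down because I am near the charging dock and in the hallway.

Real robot state spaces are higher-dimensional and regions need not be boxes; the point of the sketch is only that mapping labeled regions to language reduces local explanation generation to a lookup plus templating.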
Horses for courses: Making the case for persuasive engagement in smart systems
Current thrusts in explainable AI (XAI) have focused on using interpretability or explanatory debugging as frameworks for developing explanations. We argue that for some systems a different paradigm – persuasive engagement – needs to be adopted in order to affect trust and user satisfaction. In this paper, we briefly provide an overview of current approaches to explaining smart systems and their scope of application. We then introduce the theoretical basis for persuasive engagement and show through a use case how explanations might be generated. Finally, we discuss future work that might shed more light on how best to explain different kinds of smart systems.