Tell me more?: the effects of mental model soundness on personalizing an intelligent agent
What does a user need to know to work productively with an intelligent agent? Intelligent agents and recommender systems are gaining widespread use, potentially creating a need for end users to understand how these systems operate in order to fix their agent's personalized behavior. This paper explores the effects of mental model soundness on such personalization by providing structural knowledge of a music recommender system in an empirical study. Our findings show that participants were able to quickly build sound mental models of the recommender system's reasoning, and that participants who most improved their mental models during the study were significantly more likely to make the recommender operate to their satisfaction. These results suggest that by helping end users understand a system's reasoning, intelligent agents may elicit more and better feedback, thus more closely aligning their output with each user's intentions.
Too much, too little, or just right? Ways explanations impact end users' mental models
Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly "debug" an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, focusing especially on how the soundness and completeness of the explanations impact the fidelity of end users' mental models. Our findings suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations. We also found that oversimplification, as in many commercial agents, can be a problem: when soundness was very low, participants experienced more mental demand and lost trust in the explanations, thereby reducing the likelihood that users will pay attention to such explanations at all.
An exploratory study to design constrained engagement in smart heating systems
Smart heating systems that leverage complex models of user preferences and energy consumption, within the home and the wider network, to make intelligent heating decisions have started to be adopted in homes. While heating systems that allow the user to directly manipulate the heating schedule and temperature have been investigated in some detail, little is known about how to strike a balance between encouraging users to interact with the system and avoiding excessive demands on their attention, a balance that research has termed "constrained engagement" with calm technology. In this exploratory study, we investigated how participants responded to a number of scenarios involving a novel smart heating system, in order to support controllability, intelligibility and user experience as part of a constrained engagement approach. We focused in particular on when participants wanted to engage with the smart heating system and how explanations from the system could influence user engagement. Our study contributes a better understanding of users' expectations towards smart heating systems that can form the basis of improved user interfaces.