Tell me more?: the effects of mental model soundness on personalizing an intelligent agent
What does a user need to know to productively work with an intelligent agent? Intelligent agents and recommender systems are gaining widespread use, potentially creating a need for end users to understand how these systems operate in order to fix their agent's personalized behavior. This paper explores the effects of mental model soundness on such personalization by providing structural knowledge of a music recommender system in an empirical study. Our findings show that participants were able to quickly build sound mental models of the recommender system's reasoning, and that participants who most improved their mental models during the study were significantly more likely to make the recommender operate to their satisfaction. These results suggest that by helping end users understand a system's reasoning, intelligent agents may elicit more and better feedback, thus more closely aligning their output with each user's intentions.
Software Usability
This volume delivers a collection of high-quality contributions to help broaden developers' and non-developers' minds alike when it comes to considering software usability. It presents novel research and experiences and disseminates new ideas accessible to people who might not be software makers but who are undoubtedly software users.
Understanding Adoption Barriers to Dwell-Free Eye-Typing: Design Implications from a Qualitative Deployment Study and Computational Simulations
Eye-typing is a slow and cumbersome text entry method typically used by individuals with no other practical means of communication. As an alternative, prior HCI research has proposed dwell-free eye-typing as a potential improvement that eliminates time-consuming and distracting dwell-timeouts. However, it is rare that such research ideas are translated into working products. This paper reports on a qualitative deployment study of a product that was developed to allow users access to a dwell-free eye-typing research solution. This allowed us to understand how such a research solution would work in practice, as part of users' current communication solutions in their own homes. Based on interviews and observations, we discuss a number of design issues that currently act as barriers preventing widespread adoption of dwell-free eye-typing. The study findings are complemented with computational simulations in a range of conditions that were inspired by the findings in the deployment study. These simulations serve to both contextualize the qualitative findings and to explore quantitative implications of possible interface redesigns. The combined analysis gives rise to a set of design implications for enabling wider adoption of dwell-free eye-typing in practice.
Coping and resilience among women undergoing assisted reproductive therapies
This study aimed to provide a theoretical model of resilience among women undergoing fertility treatments, who experience repeated unsuccessful conception attempts.
A qualitative design using a Grounded Theory approach was adopted, and women living in the UK who self-identified as having fertility difficulties were recruited online. Eleven women aged between 24 and 41 years, undergoing various assisted reproductive treatments, took part in individual semi-structured interviews about their experiences of living through unsuccessful fertility treatment attempts. Interviews were audio recorded, transcribed, and subsequently analysed using the Grounded Theory methodology.
Three core categories were identified: 'Appraisal', 'Stepping away from treatment' and 'Building self-up for next attempt'. Participants demonstrated their resilience by taking steps to build up their resources in preparation for next conception attempts, by nurturing their strength and taking control of their fertility experience. Those who had depleted their resources through the cycle of attempting pregnancy had taken a step back from the treatment cycle to reconnect with themselves, before attempting conception again.
The study concludes that women undergoing fertility treatment demonstrate their resilience through a variety of actions that enable them to continue to pursue their pregnancy goal. Clinical staff should be mindful of their clients' need to withdraw from the treatment cycle and offer support to enable women to do this. Further research should aim to explore resilience among women from diverse ethnic backgrounds.
Explaining Reinforcement Learning to Mere Mortals: An Empirical Study
We present a user study to investigate the impact of explanations on non-experts' understanding of reinforcement learning (RL) agents. We investigate both a common RL visualization, saliency maps (the focus of attention), and a more recent explanation type, reward-decomposition bars (predictions of future types of rewards). We designed a 124-participant, four-treatment experiment to compare participants' mental models of an RL agent in a simple Real-Time Strategy (RTS) game. Our results show that the combination of both saliency and reward bars was needed to achieve a statistically significant improvement in mental model score over the control. In addition, our qualitative analysis of the data reveals a number of effects for further study.
Understanding the Role of Explanations in Computer Vision Applications
Recent advances in AI deliver strong performance across a range of applications, but their operations are hard to interpret, even for experts. Various explanation algorithms have been proposed to address this issue, yet limited research effort has been reported concerning their user evaluation.
Against this background, this thesis reports on four user studies designed to investigate the role of explanations in helping end-users build a better functional understanding of computer vision processes. In addition, we seek to understand what features lay users attend to in order to build such functional understanding, and whether different techniques provide different gains. In particular, we begin by examining the utility of "keypoint markers"; coloured dot visualisations that correspond to patterns of interest identified by an underlying algorithm and can be seen in many computer vision applications. We then investigate the utility of saliency maps; a popular group of explanations for the operation of Convolutional Neural Networks (CNNs).
The findings indicate that keypoint markers can be helpful if they are presented in line with users' expectations. They also indicate that saliency maps can improve participants' ability to predict the outcome of a CNN, but only moderately. Overall, this thesis contributes by evaluating these explanation techniques through user studies. It also offers a number of key findings that yield helpful guidelines for practitioners on how and when to use these explanations, as well as which types of users to target. Furthermore, it proposes and evaluates two novel explanation techniques as well as a number of tools that support researchers and practitioners in designing user studies around the evaluation of explanations. Finally, this thesis highlights a number of implications for the design of explanation techniques and further research in that area.
How Do UX Practitioners Communicate AI as a Design Material? Artifacts, Conceptions, and Propositions
UX practitioners (UXPs) face novel challenges when working with and communicating artificial intelligence (AI) as a design material. We explore how UXPs communicate AI concepts when given hands-on experience training and experimenting with AI models. To do so, we conducted a task-based design study with 27 UXPs in which they prototyped and created a design presentation for an AI-enabled interface while having access to a simple AI model training tool. Through analyzing UXPs' design presentations and post-activity interviews, we found that although UXPs struggled to clearly communicate some AI concepts, tinkering with AI broadened common ground when communicating with technical stakeholders. UXPs also identified key risks and benefits of AI in their designs, and proposed concrete next steps for both UX and AI work. We conclude with a sensitizing concept and recommendations for design and AI tools to enhance multi-stakeholder communication and collaboration when crafting human-centered AI experiences.
Assessing and Finding Faults in AI: Two Empirical Studies
With the advent of Artificial Intelligence (AI) in every sphere of life in today's day and age, it has become increasingly important for non-AI experts to be able to comprehend the underlying logic of how AI systems work, assess them, and find faults in these systems, particularly when they are used in high-risk scenarios such as military strategies and medical applications. Recent developments to address the need to open the black boxes of these AI-powered systems have led to the emergence of AI explanations. There now exist myriad successful explanation methods and tools that attempt to explore and explain how AI systems work. However, a key problem with such work is the lack of a process that users can follow to navigate AI systems along with their explanations. This problem becomes increasingly evident with non-AI experts, due to their lack of context and depth of knowledge of the subject. To address this challenging problem, my colleagues and I propose a new process called AAR/AI, or After-Action Review for Artificial Intelligence, that aims to bridge this gap between AI systems and non-AI experts. AAR/AI, inspired by the US Defense debriefing strategy called AAR, is a process for understanding, analyzing and navigating sequential decision-making environments. This thesis details two human-subjects studies my colleagues and I conducted, one qualitative and the other quantitative, to evaluate the effectiveness of AAR/AI in assessing an AI system and in identifying and localizing faults in it. The studies suggest that AAR/AI not only helps non-AI experts effectively navigate an AI system and keep their thoughts organized and logical, but also helps them identify and localize faults in it. Participants who used AAR/AI to localize faults did so with far more precision and recall than those who did not.
I believe that this is a crucial step towards building democratic and explainable AI systems, and making them accessible to a larger audience that is not familiar with them.
- …