A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores
The increased use of algorithmic predictions in sensitive domains has been
accompanied by both enthusiasm and concern. To understand the opportunities and
risks of these technologies, it is key to study how experts alter their
decisions when using such tools. In this paper, we study the adoption of an
algorithmic tool used to assist child maltreatment hotline screening decisions.
We focus on the question: Are humans capable of identifying cases in which the
machine is wrong, and of overriding those recommendations? We first show that
humans do alter their behavior when the tool is deployed. Then, we show that
humans are less likely to adhere to the machine's recommendation when the score
displayed is an incorrect estimate of risk, even when overriding the
recommendation requires supervisory approval. These results highlight the risks
of full automation and the importance of designing decision pipelines that
provide humans with autonomy.
Comment: Accepted at ACM Conference on Human Factors in Computing Systems (ACM CHI), 2020
Learning Representations by Humans, for Humans
We propose a new, complementary approach to interpretability, in which machines are not considered experts whose role is to suggest what should be done and why, but rather advisers. The objective of these models is to communicate to a human decision-maker not what to decide but how to decide. In this way, we propose that machine learning pipelines will be more readily adopted, since they allow a decision-maker to retain agency. Specifically, we develop a framework for learning representations by humans, for humans, in which we learn representations of inputs (‘advice’) that are effective for human decision-making. Representation-generating models are trained with humans in the loop, implicitly incorporating the human decision-making model. We show that optimizing for human decision-making rather than for accuracy is effective in promoting good decisions in various classification tasks while inherently maintaining a sense of interpretability.
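To make the training setup concrete, the following is a minimal sketch, not the authors' code, of the underlying idea: a representation model maps raw inputs to low-dimensional "advice", and its parameters are updated so that a decision-maker acting on that advice decides well. The fixed linear rule standing in for the human, the toy data, and all names below are illustrative assumptions; in the actual framework the human decision-making model is incorporated implicitly by querying people in the loop rather than through a surrogate.

```python
# Sketch: train a representation ("advice") model against a surrogate
# human decision-maker, so the objective is good human decisions rather
# than raw model accuracy. All details here are assumed for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 200 examples, 10 raw features, binary labels.
X = torch.randn(200, 10)
y = (X[:, :3].sum(dim=1) > 0).float()

# Representation model: maps raw inputs to 2 scores shown to the decision-maker.
rep_model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))

# Surrogate human decision rule over the advice (fixed, assumed for this sketch;
# the real framework queries human decisions in the loop instead).
human_w = torch.tensor([1.0, -1.0])

opt = torch.optim.Adam(rep_model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    advice = rep_model(X)           # what the decision-maker would see
    human_logit = advice @ human_w  # surrogate decision score from the advice
    loss = loss_fn(human_logit, y)  # optimize for the decision-maker's outcomes
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    preds = (rep_model(X) @ human_w) > 0
    acc = (preds.float() == y).float().mean().item()
    print(f"surrogate-human decision accuracy: {acc:.3f}")
```

The design choice the sketch highlights is that gradients flow only into the representation model; the decision-maker is treated as given, so the learned advice adapts to how decisions are actually made rather than replacing the decision-maker.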
Do Explanations Increase the Effectiveness of AI-Crowd Generated Fake News Warnings?
Social media platforms are increasingly deploying complex interventions to help users detect false news. Labeling false news using techniques that combine crowd-sourcing with artificial intelligence (AI) offers a promising way to inform users about potentially low-quality information without censoring content, but such labels can also be hard for users to understand. In this study, we examine how information about a hypothetical human-AI hybrid labeling system affects users' sharing intentions. We ask (i) whether these warnings increase discernment in social media sharing intentions and (ii) whether explaining how the labeling system works boosts the effectiveness of the warnings. To do so, we conduct a study (N=1473 Americans) in which participants indicated their likelihood of sharing content. Participants were randomly assigned to a control, a treatment in which false content was labeled, or a treatment in which the warning labels came with an explanation of how they were generated. We find clear evidence that both treatments increase sharing discernment, and directional evidence that explanations increase the warnings' effectiveness. Interestingly, we do not find that the explanations increase self-reported trust in the warning labels, although we do find some evidence that participants found the warnings with explanations more informative. Together, these results have important implications for designing and deploying transparent misinformation warning labels, and AI-mediated systems more broadly.