People’s trust in algorithmic decision-making systems in health: a comparison between ADA Health and IBM Watson

Abstract

Algorithmic decision-making systems (ADMs) support an ever-growing number of decision-making processes. We conducted an online survey study in [omitted] (N=1082) to understand how lay people perceive and trust health ADMs. Inspired by the traditional Ability, Benevolence, and Integrity trust model (Mayer et al., 1995), this study investigated how trust is constructed in health ADMs. In addition, we investigated how trust construction differs between ADA Health (a chatbot that suggests a diagnosis) and IBM Watson (a system that suggests treatments for cancer). Our results show that perceptions of accuracy, fairness, and control differ significantly between the two contexts. Accuracy and fairness play the biggest role in predicting trust for both ADMs. Control plays a smaller, yet significant, role. Interestingly, control and accuracy play a bigger role in explaining trust for ADA Health than for IBM Watson. Moreover, goal appropriateness and AI concern prove to be good predictors of accuracy, fairness, and control. These results illustrate the importance of taking broader contextual, algorithmic, and case-specific characteristics into account when investigating trust construction in ADMs.
