Algorithmic decision-making systems (ADMs) support an ever-growing number of decision-
making processes. We conducted an online survey study in [omitted] (N=1082) to understand
how lay people perceive and trust health ADMs. Inspired by the traditional Ability,
Benevolence, and Integrity trust model (Mayer et al., 1995), this study investigated how trust
is constructed in health ADMs. In addition, we investigated how trust construction differs
between ADA Health (a chatbot that suggests a diagnosis) and IBM Watson (a system that
suggests treatments for cancer). Our results show that perceptions of accuracy, fairness, and
control significantly differ between both contexts. Accuracy and fairness play the biggest role
in predicting trust for both ADMs. Control plays a smaller, yet significant, role. Interestingly,
control and accuracy play a bigger role in explaining trust for ADA Health than for IBM
Watson. Moreover, goal appropriateness and AI concern prove to be good predictors of
accuracy, fairness, and control. These results exemplify the importance of taking broader
contextual, algorithmic, and case-specific characteristics into account when investigating trust
construction in ADMs.