Explainable Recommendations and Calibrated Trust: Two Systematic User Errors

Abstract

The increasing adoption of collaborative human-artificial intelligence decision-making tools has created a need to explain their recommendations so that collaboration is safe and effective. We explore how users interact with explanations and why trust-calibration errors occur, taking clinical decision-support systems as a case study.
