Does Explainable Artificial Intelligence Improve Human Decision-Making?
Explainable AI provides insight into the "why" behind model predictions,
offering the potential for users to better understand and trust a model, and
to recognize and correct incorrect AI predictions. Prior research on
human and explainable AI interactions has focused on measures such as
interpretability, trust, and usability of the explanation. Whether explainable
AI can improve actual human decision-making and users' ability to identify
problems with the underlying model remain open questions. Using real datasets, we
compare objective human decision accuracy under three conditions: without AI
(control), with an AI prediction (no explanation), and with an AI prediction
and explanation. We
find that providing any kind of AI prediction tends to improve user decision
accuracy, but we find no conclusive evidence that explainable AI has a
meaningful impact. Moreover, we observed that the strongest predictor of human
decision accuracy was AI accuracy, and that users were somewhat able to detect
when the AI was correct versus incorrect, although this ability was not significantly affected by
including an explanation. Our results indicate that, at least in some
situations, the "why" information provided in explainable AI may not enhance
user decision-making, and further research may be needed to understand how to
integrate explainable AI into real systems.