AI and XAI second opinion: the danger of false confirmation in human–AI collaboration

Abstract

Can AI substitute for a human physician’s second opinion? The Journal of Medical Ethics recently published two contrasting views: Kempt and Nagel advocate using artificial intelligence (AI) for a second opinion except when its conclusions significantly diverge from the initial physician’s, while Jongsma and Sand argue for a second human opinion irrespective of whether the AI concurs or dissents. The crux of this debate hinges on the prevalence and impact of ‘false confirmation’: a scenario in which the AI erroneously validates an incorrect human decision. These errors seem exceedingly difficult to detect, in a manner reminiscent of confirmation bias. However, the debate has yet to engage with the emergence of explainable AI (XAI), which elaborates on why the AI tool reaches its diagnosis. To progress this debate, we outline a framework for conceptualising decision-making errors in physician–AI collaborations.
