Medical AI and Contextual Bias

Abstract

Artificial intelligence will transform medicine. One particularly attractive possibility is the democratization of medical expertise. If black-box medical algorithms can be trained to match the performance of high-level human experts — to identify malignancies as well as trained radiologists, to diagnose diabetic retinopathy as well as board-certified ophthalmologists, or to recommend tumor-specific courses of treatment as well as top-ranked oncologists — then those algorithms could be deployed in medical settings where human experts are not available, and patients could benefit. But there is a problem with this vision. Privacy law, malpractice liability, insurance reimbursement, and FDA approval standards all encourage developers to train medical AI in high-resource contexts, such as academic medical centers. And put simply, care is different in high-resource settings than it is in low-resource settings such as community health centers or rural providers in less-developed countries. Patient populations differ, as do the resources available to administer treatment and the resources available to pay for that treatment. When algorithms trained in high-resource contexts are deployed in low-resource contexts, that mismatch will degrade the quality of their recommendations, resulting in problematic care and increased costs. Perniciously, such quality problems in low-resource contexts are likely to go unrecognized for exactly the same reasons that promote algorithmic training in high-resource contexts. Solutions are not trivial. Labeling products the same way that drugs are labeled is unlikely to work, and truly addressing the problem may require a combination of public investment in data to train medical AI and regulatory requirements for cross-context validation. Nevertheless, if black-box medicine is to achieve its goal of bringing excellent medicine to broad sets of patients, the problem of contextual bias should be recognized and addressed sooner rather than later.
