10 research outputs found
Performance Gaps of Artificial Intelligence Models in Screening Mammography -- Towards Fair and Interpretable Models
Even though deep learning models for abnormality classification can perform
well in screening mammography, the demographic and imaging characteristics
associated with increased risk of failure for abnormality classification in
screening mammograms remain unclear. This retrospective study used data from
the Emory BrEast Imaging Dataset (EMBED) including mammograms from 115,931
patients imaged at Emory University Healthcare between 2013 and 2020. Clinical
and imaging data include Breast Imaging Reporting and Data System (BI-RADS)
assessment, region of interest coordinates for abnormalities, imaging features,
pathologic outcomes, and patient demographics. Deep learning models including
InceptionV3, VGG16, ResNet50V2, and ResNet152V2 were developed to distinguish
between patches of abnormal tissue and randomly selected patches of normal
tissue from the screening mammograms. The training, validation, and test sets
comprised 29,144 (55.6%) patches from 10,678 (54.2%) patients, 9,910 (18.9%)
patches from 3,609 (18.3%) patients, and 13,390 (25.5%) patches from 5,404
(27.5%) patients, respectively. We assessed model performance overall and within
subgroups defined by age, race, pathologic outcome, and imaging characteristics
to evaluate reasons for misclassifications. On the test set, a ResNet152V2
model trained to classify normal versus abnormal tissue patches achieved an
accuracy of 92.6% (95%CI=92.0-93.2%) and an area under the receiver operating
characteristic curve of 0.975 (95%CI=0.972-0.978). Imaging characteristics
associated with higher misclassification rates included higher tissue density
(risk ratio [RR]=1.649; p=.010 for BI-RADS density C and RR=2.026; p=.003 for
BI-RADS density D) and the presence of architectural distortion (RR=1.026;
p<.001). Small but statistically significant differences in performance were
observed by age, race, pathologic outcome, and other imaging features (p<.001).
Comment: 21 pages, 4 tables, 5 figures, 2 supplemental tables, and 1 supplemental figure
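The patch-level training setup described in this abstract maps naturally onto the Keras applications API. The sketch below is only an illustration of that kind of pipeline, not the authors' code: the patch size, optimizer settings, and the train_ds/val_ds dataset objects are assumptions.

```python
# Minimal sketch (not the authors' code): fine-tuning ResNet152V2 as a binary
# normal-vs-abnormal patch classifier. Patch size, optimizer settings, and the
# tf.data pipelines are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet152V2

PATCH_SIZE = 224  # assumed patch resolution


def build_patch_classifier() -> Model:
    base = ResNet152V2(include_top=False, weights="imagenet",
                       input_shape=(PATCH_SIZE, PATCH_SIZE, 3), pooling="avg")
    x = layers.Dropout(0.3)(base.output)
    out = layers.Dense(1, activation="sigmoid")(x)  # probability of abnormal tissue
    model = Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc"), "accuracy"])
    return model


# train_ds / val_ds are assumed tf.data.Dataset objects yielding (patch, label)
# pairs drawn from abnormal ROIs and randomly sampled normal tissue.
# model = build_patch_classifier()
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```

The subgroup analysis then reduces to computing accuracy and AUC on test-set predictions stratified by age, race, BI-RADS density, and pathologic outcome.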
Impact of multi-source data augmentation on performance of convolutional neural networks for abnormality classification in mammography
Introduction
To date, most mammography-related AI models have been trained using either film or digital mammogram datasets with little overlap. We investigated whether combining film and digital mammography during training helps or hinders modern models designed for use on digital mammograms.
Methods
A total of six binary classifiers were trained for comparison. The first three were trained using images only from the Emory Breast Imaging Dataset (EMBED) with ResNet50, ResNet101, and ResNet152 architectures. The next three were trained using images from EMBED, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), and the Digital Database for Screening Mammography (DDSM). All six models were tested only on digital mammograms from EMBED.
Results
Performance degradation of the customized ResNet models was statistically significant overall when the EMBED dataset was augmented with CBIS-DDSM/DDSM. The degradation was observed in all racial subgroups, but some subgroups showed a more severe performance drop than others.
Discussion
The degradation may potentially be due to (1) a mismatch in features between film-based and digital mammograms or (2) a mismatch in pathologic and radiological information. In conclusion, using both film and digital mammography during training may hinder modern models designed for breast cancer screening. Caution is required when combining film-based and digital mammograms or when utilizing pathologic and radiological information simultaneously.
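A minimal sketch of the kind of subgroup comparison this abstract describes is shown below, assuming predicted probabilities are already available from the EMBED-only and EMBED + CBIS-DDSM/DDSM training regimes; the variable names (y_test, p_embed, p_mixed, race) are hypothetical placeholders.

```python
# Minimal sketch (assumed data layout): comparing a model trained on EMBED alone
# against one trained on EMBED + CBIS-DDSM/DDSM, evaluated only on EMBED digital
# test images, overall and within racial subgroups.
import numpy as np
from sklearn.metrics import roc_auc_score


def subgroup_auc(y_true, y_score, groups):
    """AUC overall and per subgroup label (e.g., self-reported race), as numpy arrays."""
    results = {"overall": roc_auc_score(y_true, y_score)}
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) == 2:  # need both classes present in the subgroup
            results[str(g)] = roc_auc_score(y_true[mask], y_score[mask])
    return results


# y_test, race: labels and subgroup tags for the EMBED digital test set.
# p_embed, p_mixed: predicted probabilities from the two training regimes.
# auc_embed = subgroup_auc(y_test, p_embed, race)
# auc_mixed = subgroup_auc(y_test, p_mixed, race)
# drop = {g: auc_embed[g] - auc_mixed[g] for g in auc_embed}  # per-group degradation
```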
Towards Trustworthy Artificial Intelligence for Equitable Global Health
Artificial intelligence (AI) can potentially transform global health, but
algorithmic bias can exacerbate social inequities and disparity. Trustworthy AI
entails the intentional design to ensure equity and mitigate potential biases.
To advance trustworthy AI in global health, we convened a workshop on Fairness
in Machine Intelligence for Global Health (FairMI4GH). The event brought
together a global mix of experts from various disciplines, community health
practitioners, policymakers, and more. Topics covered included managing AI bias
in socio-technical systems, AI's potential impacts on global health, and
balancing data privacy with transparency. Panel discussions examined the
cultural, political, and ethical dimensions of AI in global health. FairMI4GH
aimed to stimulate dialogue, facilitate knowledge transfer, and spark
innovative solutions. Drawing from NIST's AI Risk Management Framework, it
provided suggestions for handling AI risks and biases. The need to mitigate
data biases from the research design stage, adopt a human-centered approach,
and advocate for AI transparency was recognized. Challenges such as updating
legal frameworks, managing cross-border data sharing, and motivating developers
to reduce bias were acknowledged. The event emphasized the necessity of diverse
viewpoints and multi-dimensional dialogue for creating a fair and ethical AI
framework for equitable global health.
Comment: 7 pages
AI recognition of patient race in medical imaging: a modelling study
Background
Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images.
Methods
Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we evaluated, first, performance quantification of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding of anatomic and phenotypic population features by assessing the ability of these hypothesised confounders to detect race in isolation using regression models, and by re-evaluating the deep learning models by testing them on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models can recognise race.
Findings
In our study, we show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities, which was sustained under external validation conditions (x-ray imaging [area under the receiver operating characteristics curve (AUC) range 0·91-0·99], CT chest imaging [0·87-0·96], and mammography [0·81]). We also showed that this detection is not due to proxies or imaging-related surrogate covariates for race (eg, performance of possible confounders: body-mass index [AUC 0·55], disease distribution [0·61], and breast density [0·61]). Finally, we provide evidence to show that the ability of AI deep learning models persisted over all anatomical regions and frequency spectrums of the images, suggesting the efforts to control this behaviour when it is undesirable will be challenging and demand further study.
Interpretation
The results from our study emphasise that the ability of AI deep learning models to predict self-reported race is itself not the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging.
Funding
National Institute of Biomedical Imaging and Bioengineering, MIDRC grant of National Institutes of Health, US National Science Foundation, National Library of Medicine of the National Institutes of Health, and Taiwan Ministry of Science and Technology
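The confounder analysis described in the Methods can be approximated with a simple tabular baseline: if hypothesised confounders such as body-mass index or breast density barely predict race on their own, they are unlikely to explain the image model's performance. The sketch below is illustrative only; the column names are assumptions, not fields from the study datasets.

```python
# Minimal sketch (illustrative only): testing whether hypothesised confounders
# such as body-mass index, disease labels, or breast density can predict
# self-reported race on their own, as a tabular baseline to compare against the
# image-based deep model. Column names are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def confounder_auc(df: pd.DataFrame, feature_cols, label_col="race_black"):
    """Cross-validated AUC of a tabular model using only the hypothesised confounders."""
    X = pd.get_dummies(df[feature_cols], drop_first=True)  # one-hot encode categoricals
    y = df[label_col]  # binary indicator for the subgroup of interest
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()


# Example (hypothetical columns): a value near 0.55-0.61 here, against 0.9+ for
# the image model, argues against these features explaining the signal.
# confounder_auc(cohort, ["bmi", "breast_density", "diagnosis"])
```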
Reading Race: AI Recognises Patient's Racial Identity In Medical Images
Background: In medical imaging, prior studies have demonstrated disparate AI performance by race, yet there is no known correlation for race on medical imaging that would be obvious to the human expert interpreting the images.
Methods: Using private and public datasets we evaluate: A) performance quantification of deep learning models to detect race from medical images, including the ability of these models to generalize to external environments and across multiple imaging modalities, B) assessment of possible confounding anatomic and phenotype population features, such as disease distribution and body habitus as predictors of race, and C) investigation into the underlying mechanism by which AI models can recognize race.
Findings: Standard deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities. Our findings hold under external validation conditions, as well as when models are optimized to perform clinically motivated tasks. We demonstrate this detection is not due to trivial proxies or imaging-related surrogate covariates for race, such as underlying disease distribution. Finally, we show that performance persists over all anatomical regions and frequency spectrum of the images suggesting that mitigation efforts will be challenging and demand further study.
Interpretation: We emphasize that a model's ability to predict self-reported race is itself not the issue of importance. However, our finding that AI can trivially predict self-reported race -- even from corrupted, cropped, and noised medical images -- in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging: if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.
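One of the corruption experiments mentioned above, degrading images in the frequency domain before re-scoring the race classifier, can be sketched as follows; race_model, test_images, and race_labels are hypothetical placeholders, not artifacts from the study.

```python
# Minimal sketch (assumed setup): low-pass filtering images in the frequency
# domain to test whether race-prediction performance persists under corruption.
# `race_model` is a hypothetical trained classifier taking an image batch.
import numpy as np
from sklearn.metrics import roc_auc_score


def low_pass(image: np.ndarray, keep_fraction: float = 0.1) -> np.ndarray:
    """Zero out high-frequency components, keeping a central square of the spectrum."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    mask = np.zeros_like(f, dtype=bool)
    kh, kw = int(h * keep_fraction / 2), int(w * keep_fraction / 2)
    mask[h // 2 - kh:h // 2 + kh, w // 2 - kw:w // 2 + kw] = True
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))


# for frac in (1.0, 0.5, 0.25, 0.1):
#     filtered = np.stack([low_pass(img, frac) for img in test_images])
#     print(frac, roc_auc_score(race_labels, race_model.predict(filtered)))
```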
Localisation of Racial Information in Chest X-Ray for Deep Learning Diagnosis
Deep learning-based classification of diseases from chest X-rays has been shown to use implicit information about the subjects' self-reported race, which could result in diagnostic bias. In this paper, we describe and compare two approaches to investigating where racial information is located in the image: first, leveraging non-linear registration and computing atlas differences, and second, using saliency maps. We compute a map visualising the racial information between Black and white subjects and discuss whether those maps are consistent with the model explanation.
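Of the two approaches compared in the paper, the saliency-map route can be sketched with a plain input-gradient map, assuming a trained Keras race classifier; this is a generic gradient-saliency computation rather than the authors' exact method, and race_model / xray are hypothetical names.

```python
# Minimal sketch (assumed Keras model): a gradient-based saliency map for a race
# classifier on a chest X-ray. The atlas-difference approach from the paper is
# not shown here.
import numpy as np
import tensorflow as tf


def saliency_map(model: tf.keras.Model, image: np.ndarray) -> np.ndarray:
    """Absolute input gradient of the predicted score, reduced over channels."""
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)  # add batch dim
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[:, 0]          # e.g. probability of one racial label
    grads = tape.gradient(score, x)[0]  # d(score)/d(pixel), batch dim removed
    return tf.reduce_max(tf.abs(grads), axis=-1).numpy()


# sal = saliency_map(race_model, xray)  # same spatial size as the input image
```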
“Shortcuts” Causing Bias in Radiology Artificial Intelligence: Causes, Evaluation, and Mitigation
Despite the expert-level performance of artificial intelligence (AI) models for various medical imaging tasks, real-world performance failures with disparate outputs for various subgroups limit the usefulness of AI in improving patients' lives. Many definitions of fairness have been proposed, with discussions of various tensions that arise in the choice of an appropriate metric to use to evaluate bias; for example, should one aim for individual or group fairness? One central observation is that AI models apply "shortcut learning" whereby spurious features (such as chest tubes and portable radiographic markers on intensive care unit chest radiography) on medical images are used for prediction instead of identifying true pathology. Moreover, AI has been shown to have a remarkable ability to detect protected attributes of age, sex, and race, while the same models demonstrate bias against historically underserved subgroups of age, sex, and race in disease diagnosis. Therefore, an AI model may take shortcut predictions from these correlations and subsequently generate an outcome that is biased toward certain subgroups even when protected attributes are not explicitly used as inputs into the model. As a result, these subgroups become nonprivileged subgroups. In this review, the authors discuss the various types of bias from shortcut learning that may occur at different phases of AI model development, including data bias, modeling bias, and inference bias. The authors thereafter summarize various tool kits that can be used to evaluate and mitigate bias and note that these have largely been applied to nonmedical domains and require more evaluation for medical AI. The authors then summarize current techniques for mitigating bias during preprocessing (data-centric solutions), model development (computational solutions), and postprocessing (recalibration of learning). Ongoing legal changes that would penalize the use of biased models highlight the necessity of understanding, detecting, and mitigating biases from shortcut learning and will require diverse research teams looking at the whole AI pipeline.
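As a concrete illustration of the kind of subgroup audit such bias evaluations rely on (not a method from the review itself), the sketch below computes the largest true-positive-rate and false-positive-rate gaps across protected subgroups, in the spirit of equalized-odds checks; all input arrays are hypothetical.

```python
# Minimal sketch (illustrative): quantifying disparate performance across
# protected subgroups via the largest pairwise TPR and FPR differences.
# Inputs are hypothetical numpy arrays of binary labels, thresholded
# predictions, and subgroup tags (e.g., self-reported race).
import numpy as np


def tpr_fpr(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return tp / max(tp + fn, 1), fp / max(fp + tn, 1)


def subgroup_gaps(y_true, y_pred, groups):
    """Largest pairwise TPR and FPR differences across subgroups (0.0 = parity)."""
    rates = {g: tpr_fpr(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    tprs, fprs = zip(*rates.values())
    return max(tprs) - min(tprs), max(fprs) - min(fprs)


# tpr_gap, fpr_gap = subgroup_gaps(labels, preds, race)
```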