Layer-Wise Relevance Propagation for Explaining Deep Neural Network Decisions in MRI-Based Alzheimer's Disease Classification
Deep neural networks have led to state-of-the-art results in many medical imaging tasks, including Alzheimer’s disease (AD) detection based on structural magnetic resonance imaging (MRI) data. However, the network decisions are often perceived as highly non-transparent, making it difficult to apply these algorithms in clinical routine. In this study, we propose using layer-wise relevance propagation (LRP) to visualize convolutional neural network decisions for AD based on MRI data. Like other visualization methods, LRP produces a heatmap in the input space indicating the importance/relevance of each voxel to the final classification outcome. In contrast to susceptibility maps produced by guided backpropagation (“Which change in voxels would change the outcome most?”), the LRP method directly highlights positive contributions to the network classification in the input space. In particular, we show that (1) the LRP method is very specific for individuals (“Why does this person have AD?”), with high inter-patient variability; (2) there is very little relevance for AD in healthy controls; and (3) areas that exhibit high relevance correlate well with what is known from the literature. To quantify the latter, we compute size-corrected metrics of the summed relevance per brain area, e.g., relevance density or relevance gain. Although these metrics produce very individual “fingerprints” of relevance patterns for AD patients, considerable importance is placed on areas in the temporal lobe, including the hippocampus. After discussing several limitations, such as sensitivity toward the underlying model and computation parameters, we conclude that LRP might have high potential to assist clinicians in explaining neural network decisions for diagnosing AD (and potentially other diseases) based on structural MRI data.
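The size-corrected metrics mentioned above can be sketched roughly as follows. The function names, the integer-labeled atlas, and the clipping to positive relevance are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def relevance_density(heatmap, atlas, region_label):
    """Summed positive relevance inside a brain region, divided by the
    region's voxel count (corrects for region size)."""
    mask = atlas == region_label
    return heatmap[mask].clip(min=0).sum() / mask.sum()

def relevance_gain(heatmap, atlas, region_label):
    """Region's share of total positive relevance relative to its share of
    brain volume; values above 1 mark over-proportionally relevant areas."""
    mask = atlas == region_label
    total = heatmap.clip(min=0).sum()
    region_share = heatmap[mask].clip(min=0).sum() / total
    volume_share = mask.sum() / (atlas > 0).sum()
    return region_share / volume_share
```

With metrics like these, a hippocampus that occupies a small fraction of the brain but attracts a large fraction of the relevance would score a gain well above 1.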
Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation
Machine learning-based imaging diagnostics has recently reached or even
surpassed the level of clinical experts in several clinical domains. However,
classification decisions of a trained machine learning system are typically
non-transparent, a major hindrance for clinical integration, error tracking or
knowledge discovery. In this study, we present a transparent deep learning
framework relying on convolutional neural networks (CNNs) and layer-wise
relevance propagation (LRP) for diagnosing multiple sclerosis (MS). MS is
commonly diagnosed utilizing a combination of clinical presentation and
conventional magnetic resonance imaging (MRI), specifically the occurrence and
presentation of white matter lesions in T2-weighted images. We hypothesized
that using LRP in a naive predictive model would enable us to uncover relevant
image features that a trained CNN uses for decision-making. Since imaging
markers in MS are well-established this would enable us to validate the
respective CNN model. First, we pre-trained a CNN on MRI data from the
Alzheimer's Disease Neuroimaging Initiative (n = 921), afterwards specializing
the CNN to discriminate between MS patients and healthy controls (n = 147).
Using LRP, we then produced a heatmap for each subject in the holdout set
depicting the voxel-wise relevance for a particular classification decision.
The final CNN model achieved a balanced accuracy of 87.04% and an area under
the receiver operating characteristic curve of 96.08%. The
subsequent LRP visualization revealed that the CNN model indeed focuses on
individual lesions, but also incorporates additional information such as lesion
location, non-lesional white matter or gray matter areas such as the thalamus,
which are established conventional and advanced MRI markers in MS. We conclude
that LRP and the proposed framework have the capability to make diagnostic
decisions of..
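The relevance propagation behind such heatmaps can be illustrated with a minimal numpy sketch of the epsilon-LRP rule for a single dense layer. The paper applies LRP to a 3D CNN; this toy version only shows the backward redistribution step and its relevance-conservation property:

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """Propagate relevance one dense layer backward with the epsilon rule:
    R_i = a_i * sum_j( w_ij * R_j / (z_j + eps * sign(z_j)) ),
    where z_j are the layer's pre-activations."""
    a = activations                     # inputs to the layer, shape (n_in,)
    W = weights                         # shape (n_in, n_out)
    z = a @ W                           # pre-activations, shape (n_out,)
    s = relevance_out / (z + eps * np.sign(z))
    return a * (W @ s)                  # relevance on the layer's inputs
```

For small eps, the total relevance is (approximately) conserved from layer to layer, which is what lets the final heatmap be read as a decomposition of the classification score over input voxels.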
Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer's Disease
Visualizing and interpreting convolutional neural networks (CNNs) is an
important task to increase trust in automatic medical decision making systems.
In this study, we train a 3D CNN to detect Alzheimer's disease based on
structural MRI scans of the brain. Then, we apply four different gradient-based
and occlusion-based visualization methods that explain the network's
classification decisions by highlighting relevant areas in the input image. We
compare the methods qualitatively and quantitatively. We find that all four
methods focus on brain regions known to be involved in Alzheimer's disease,
such as inferior and middle temporal gyrus. While the occlusion-based methods
focus more on specific regions, the gradient-based methods pick up distributed
relevance patterns. Additionally, we find that the distribution of relevance
varies across patients, with some having a stronger focus on the temporal lobe,
whereas for others more cortical areas are relevant. In summary, we show that
applying different visualization methods is important to understand the
decisions of a CNN, a step that is crucial to increase clinical impact and
trust in computer-based decision support systems.
Comment: MLCN 201
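An occlusion-based method of the kind compared here can be sketched as follows; the 2D single-channel input, the patch size, and the zero baseline are simplifying assumptions (the study works on 3D MRI volumes):

```python
import numpy as np

def occlusion_map(image, predict, patch=8, stride=8, baseline=0.0):
    """Slide a baseline-valued patch over the image and record the drop in
    the model's class score; large drops mark regions the model relies on."""
    ref = predict(image)
    h, w = image.shape
    heat = np.zeros_like(image, dtype=float)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            heat[y:y + patch, x:x + patch] = ref - predict(occluded)
    return heat
```

Because whole patches are blanked out at once, occlusion maps tend to be blocky and region-focused, which matches the observation above that occlusion-based methods concentrate on specific regions while gradient-based methods yield more distributed patterns.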
Prostate Cancer Nodal Staging: Using Deep Learning to Predict 68Ga-PSMA-Positivity from CT Imaging Alone
Lymphatic spread determines treatment decisions in prostate cancer (PCa) patients. 68Ga-PSMA-PET/CT can be performed, although cost remains high and availability is limited. Therefore, computed tomography (CT) continues to be the most commonly used modality for PCa staging. We assessed whether convolutional neural networks (CNNs) can be trained to determine 68Ga-PSMA-PET/CT lymph node status from CT alone. In 549 patients with 68Ga-PSMA PET/CT imaging, 2616 lymph nodes were segmented. Using PET as a reference standard, three CNNs were trained on training sets balanced for infiltration status, lymph node location, and, additionally, masked images. The CNNs were evaluated on a separate test set, and their performance was compared to radiologists' assessments and random forest classifiers. Heatmaps were used to identify the image regions that determined performance. The CNNs performed with an area under the curve (AUC) of 0.95 (status balanced) and 0.86 (location balanced, masked), compared to an AUC of 0.81 for experienced radiologists. Interestingly, the CNNs used anatomical surroundings to increase their performance, "learning" the infiltration probabilities of anatomical locations. In conclusion, CNNs have the potential to provide a well-performing CT-based biomarker for lymph node metastases in PCa, with the type of class balancing strongly affecting CNN performance.
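The class balancing that the study found so influential can be sketched as simple per-group downsampling; the helper name and the sampling scheme are illustrative, not the authors' exact procedure:

```python
import random

def balance_by_class(samples, key, seed=0):
    """Downsample so that every group (e.g. infiltrated vs. benign nodes,
    or each anatomical location) contributes equally many examples."""
    rng = random.Random(seed)
    groups = {}
    for s in samples:
        groups.setdefault(key(s), []).append(s)
    n = min(len(g) for g in groups.values())  # size of the smallest group
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, n))
    return balanced
```

Balancing for infiltration status versus balancing for location changes which shortcut features (such as anatomical position) the network can exploit, which is one plausible reading of the AUC gap between the two training regimes reported above.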
Interpretable classification of Alzheimer's disease pathologies with a convolutional neural network pipeline.
Neuropathologists assess vast brain areas to identify diverse and subtly differentiated morphologies. Standard semi-quantitative scoring approaches, however, are coarse-grained and lack precise neuroanatomic localization. We report a proof-of-concept deep learning pipeline that identifies specific neuropathologies (amyloid plaques and cerebral amyloid angiopathy) in immunohistochemically stained archival slides. Using automated segmentation of stained objects and a cloud-based interface, we annotate more than 70,000 plaque candidates from 43 whole slide images (WSIs) to train and evaluate convolutional neural networks. The networks achieve strong plaque classification on a 10-WSI hold-out set (0.993 and 0.743 areas under the receiver operating characteristic and precision-recall curves, respectively). Prediction confidence maps visualize morphology distributions at high resolution. The resulting network-derived amyloid beta (Aβ) burden scores correlate well with established semi-quantitative scores on a 30-WSI blinded hold-out set. Finally, saliency mapping demonstrates that the networks learn patterns that agree with accepted pathologic features. This scalable means of augmenting a neuropathologist's ability suggests a route to neuropathologic deep phenotyping.
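A network-derived burden score of the kind correlated with semi-quantitative scores could, in the simplest case, be the positive fraction of plaque candidates on a slide. This scalar summary is a hypothetical simplification, not the paper's actual Aβ-burden definition:

```python
def amyloid_burden(plaque_probs, threshold=0.5):
    """Hypothetical slide-level burden score: the fraction of segmented
    plaque candidates that the classifier calls positive."""
    if not plaque_probs:
        return 0.0
    positives = sum(p >= threshold for p in plaque_probs)
    return positives / len(plaque_probs)
```

A per-slide scalar of this sort is what makes a direct correlation against established semi-quantitative scores possible.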
GP-Unet: Lesion Detection from Weak Labels with a 3D Regression Network
We propose a novel convolutional neural network for lesion detection from
weak labels. Only a single, global label per image - the lesion count - is
needed for training. We train a regression network with a fully convolutional
architecture combined with a global pooling layer to aggregate the 3D output
into a scalar indicating the lesion count. When testing on unseen images, we
first run the network to estimate the number of lesions. Then we remove the
global pooling layer to compute localization maps of the size of the input
image. We evaluate the proposed network on the detection of enlarged
perivascular spaces in the basal ganglia in MRI. Our method achieves a
sensitivity of 62% with on average 1.5 false positives per image. Compared with
four other approaches based on intensity thresholding, saliency and class maps,
our method has a 20% higher sensitivity.
Comment: Article published in MICCAI 2017. We corrected a few errors from the
first version: padding, loss, typos, and an update of the DOI number.
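The count-from-pooling idea can be sketched in a few lines: the same fully convolutional output volume yields a scalar count when globally pooled during training, and a localization map once the pooling is removed at test time. The thresholding below stands in for whatever peak detection the method actually uses:

```python
import numpy as np

def global_pool(output_volume):
    """Training time: global (sum) pooling aggregates the network's 3D
    output into a single scalar, regressed against the weak lesion count."""
    return output_volume.sum()

def localize(output_volume, threshold=0.5):
    """Test time: with the pooling layer removed, the un-pooled volume acts
    as a localization map; candidate lesion voxels exceed a threshold."""
    return np.argwhere(output_volume > threshold)
```

Because only the pooling layer changes between the two modes, the network never needs voxel-level labels, yet still produces voxel-level localizations.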