Is Texture Predictive for Age and Sex in Brain MRI?
Deep learning builds the foundation for many medical image analysis tasks,
where neural networks are often designed to have a large receptive field to
incorporate long-range spatial dependencies. Recent work has shown that large
receptive fields are not always necessary for computer vision tasks on natural
images. We explore whether this finding translates to certain medical imaging
tasks, such as age and sex prediction from T1-weighted brain MRI scans. (MIDL 2019, arXiv:1907.08612)
Improving Image-Based Precision Medicine with Uncertainty-Aware Causal Models
Image-based precision medicine aims to personalize treatment decisions based
on an individual's unique imaging features so as to improve their clinical
outcome. Machine learning frameworks that integrate uncertainty estimation as
part of their treatment recommendations would be safer and more reliable.
However, little work has been done in adapting uncertainty estimation
techniques and validation metrics for precision medicine. In this paper, we use
Bayesian deep learning for estimating the posterior distribution over factual
and counterfactual outcomes on several treatments. This allows for estimating
the uncertainty for each treatment option and for the individual treatment
effects (ITE) between any two treatments. We train and evaluate this model to
predict future new and enlarging T2 lesion counts on a large, multi-center
dataset of MR brain images of patients with multiple sclerosis, exposed to
several treatments during randomized controlled trials. We evaluate the
correlation of the uncertainty estimate with the factual error, and, given the
lack of ground truth counterfactual outcomes, demonstrate how uncertainty for
the ITE prediction relates to bounds on the ITE error. Lastly, we demonstrate
how knowledge of uncertainty could modify clinical decision-making to improve
individual patient and clinical trial outcomes.
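The core quantities described above can be illustrated with a minimal sketch: draw weight samples from an (assumed) approximate posterior, predict factual and counterfactual outcomes under each treatment with the same weight draw, and summarize the individual treatment effect (ITE) by the mean and spread of the sampled differences. `predict_outcome` is a hypothetical stand-in for the trained Bayesian network predicting future lesion counts; the treatment names and noise scale are invented for illustration.

```python
import random
import statistics

def predict_outcome(features, treatment, weight_sample):
    # Toy surrogate for a Bayesian neural network: a base risk from the
    # patient's features plus a treatment effect perturbed by the sampled
    # weights. Lesion counts cannot be negative, hence the clamp.
    base = sum(features)
    effect = {"placebo": 0.0, "drug_a": -1.5}[treatment]
    return max(0.0, base + effect * (1.0 + weight_sample))

def ite_with_uncertainty(features, t1, t2, n_samples=500, seed=0):
    # Monte Carlo estimate of the ITE between treatments t1 and t2:
    # each posterior weight draw is used for BOTH counterfactual
    # predictions, so the difference reflects effect uncertainty.
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_samples):
        w = rng.gauss(0.0, 0.3)  # one draw from the weight posterior
        diffs.append(predict_outcome(features, t1, w)
                     - predict_outcome(features, t2, w))
    return statistics.fmean(diffs), statistics.stdev(diffs)

ite, ite_std = ite_with_uncertainty([2.0, 1.0], "drug_a", "placebo")
```

A negative `ite` with a small `ite_std` would support recommending `drug_a` over placebo for this patient; a large `ite_std` flags the recommendation as uncertain, which is the decision-modifying signal the abstract describes.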
Probabilistic and causal reasoning in deep learning for imaging
Typical machine learning research in the imaging domain occurs in clearly defined environments on clean datasets without considering realistic deployment scenarios. However, applied machine learning systems are exposed to unexpected distribution shifts and still need to produce reliable predictions without relying on spurious correlations. Similarly, such systems encounter ambiguous or unseen inputs and need to communicate their uncertainty. Often, AI systems support a human operator and should provide interpretable explanations of their decisions.
This thesis argues for a probabilistic and causal approach to machine learning that is robust to spurious correlations, improves interpretability, and communicates uncertainty.
First, we investigate the learning abilities of neural networks that are constrained to extracting information from image patches. We show that careful network design can prevent shortcut learning and that restricting the receptive field can improve the interpretability of predictions.
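A minimal sketch of the patch-restricted idea: a model whose receptive field is limited to small patches scores each patch independently, and the image-level prediction is the average of the patch scores. The per-patch scores then double as an interpretability map. `score_patch` is a hypothetical stand-in for a small-receptive-field network; here it is just the patch mean.

```python
def patches(image, size):
    # Non-overlapping size x size patches of a 2-D image (list of rows).
    for r in range(0, len(image) - size + 1, size):
        for c in range(0, len(image[0]) - size + 1, size):
            yield [row[c:c + size] for row in image[r:r + size]]

def score_patch(patch):
    # Toy local evidence: mean intensity of the patch.
    vals = [v for row in patch for v in row]
    return sum(vals) / len(vals)

def predict(image, size=2):
    # Image prediction = average of local scores; the score list is a
    # spatial evidence map showing WHICH patches drove the prediction.
    scores = [score_patch(p) for p in patches(image, size)]
    return sum(scores) / len(scores), scores

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
pred, evidence = predict(image)  # evidence localizes the bright region
```

Because no patch can see the rest of the image, the model cannot exploit global shortcut features, which is the mechanism behind the shortcut-prevention claim above.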
We tackle uncertainty estimation by introducing a Bayesian deep learning method to approximate the posterior distribution of the weights of a neural network using an implicit distribution. We verify that our method is capable of solving predictive tasks while providing reliable uncertainty estimates.
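The behaviour this method is verified for can be shown in a toy setting: sample weights consistent with the training data, and check that predictive spread grows away from it. The rejection-sampling "posterior" below is a crude stand-in for the thesis's implicit distribution over deep network weights, and the 1-D linear model and thresholds are invented for illustration.

```python
import math
import random

rng = random.Random(0)
train_x = [0.0, 0.5, 1.0]
train_y = [0.1, 0.6, 0.9]

def sample_weights(n=500):
    # Crude approximate posterior: draw (slope, bias) from a wide prior
    # and keep only pairs that fit the training data well.
    kept = []
    while len(kept) < n:
        w, b = rng.gauss(0.0, 2.0), rng.gauss(0.0, 2.0)
        sse = sum((w * x + b - y) ** 2 for x, y in zip(train_x, train_y))
        if sse < 0.05:
            kept.append((w, b))
    return kept

def predictive_std(x, samples):
    # Spread of predictions across posterior samples = epistemic uncertainty.
    preds = [w * x + b for w, b in samples]
    mu = sum(preds) / len(preds)
    return math.sqrt(sum((p - mu) ** 2 for p in preds) / len(preds))

samples = sample_weights()
std_in = predictive_std(0.5, samples)   # inside the training range
std_out = predictive_std(5.0, samples)  # far outside it
```

The reliability check amounts to `std_out` being much larger than `std_in`: the model solves the predictive task where it has data and communicates uncertainty where it does not.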
Moving on, we frame various medical prediction tasks within the framework of outlier detection. We apply deep generative modelling to brain MR and CT images as well as histopathology images and show that it is possible to detect pathologies as outliers under a normative model of healthy samples.
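The outlier-detection framing above can be sketched in a few lines, assuming (for illustration only) that the normative model is a per-pixel Gaussian fitted to healthy images rather than the deep generative models used in the thesis: score a new image by its negative log-likelihood and flag high scores as abnormal.

```python
import math
import random

def fit_normative(healthy_images):
    # Per-pixel mean and variance across the healthy cohort: a stand-in
    # for a deep generative model of normal anatomy.
    n_pix = len(healthy_images[0])
    means, variances = [], []
    for i in range(n_pix):
        vals = [img[i] for img in healthy_images]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-6
        means.append(mu)
        variances.append(var)
    return means, variances

def anomaly_score(image, means, variances):
    # Negative log-likelihood under the per-pixel Gaussian model;
    # out-of-distribution regions contribute large terms.
    return sum(0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
               for x, m, v in zip(image, means, variances))

rng = random.Random(0)
healthy = [[rng.gauss(1.0, 0.1) for _ in range(16)] for _ in range(200)]
means, variances = fit_normative(healthy)

normal = [rng.gauss(1.0, 0.1) for _ in range(16)]
lesion = normal[:]
lesion[5] += 2.0  # simulated bright lesion in one region
score_normal = anomaly_score(normal, means, variances)
score_lesion = anomaly_score(lesion, means, variances)
```

The lesioned image scores far higher than the healthy one, which is the sense in which pathologies are detected "as outliers under a normative model of healthy samples" without any lesion annotations.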
Next, we propose deep structural causal models as a framework capable of capturing causal relationships between imaging and non-imaging data. Our experiments provide evidence that this framework is capable of all rungs of the causal hierarchy.
Finally, with further thoughts on applications of uncertainty estimation, robust causal estimation, and fairness, we conclude that the safe and reliable deployment of AI systems to real-world scenarios requires the integration of probabilistic and causal reasoning.
Normative ascent with local Gaussians for unsupervised lesion detection
Unsupervised abnormality detection is an appealing approach to identify patterns that are not present in training data without specific annotations for such patterns. In the medical imaging field, methods taking this approach have been proposed to detect lesions. The appeal of this approach stems from the fact that it does not require lesion-specific supervision and can potentially generalize to any sort of abnormal pattern. The principle is to train a generative model on images from healthy individuals to estimate the distribution of images of the normal anatomy, i.e., a normative distribution, and to detect lesions as out-of-distribution regions. Restoration-based techniques that modify a given image by taking gradient ascent steps with respect to a posterior distribution composed of a normative distribution and a likelihood term recently yielded state-of-the-art results. However, these methods do not explicitly model ascent directions with respect to the normative distribution, i.e., the normative ascent direction, which is essential for successful restoration. In this work, we introduce a novel approach for unsupervised lesion detection by modeling normative ascent directions. We present different modelling options based on the defined ascent directions with local Gaussians. We further extend the proposed method to efficiently utilize 3D information, which has not been explored in most existing works. We experimentally show that the proposed method provides higher accuracy in detection and produces more realistic restored images. The performance of the proposed method is evaluated against baselines on the publicly available BRATS and ATLAS stroke lesion datasets; the detection accuracy of the proposed method surpasses the current state-of-the-art results.
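The restoration principle described in this abstract can be sketched in one dimension, assuming (for illustration only) that the normative distribution is a per-pixel Gaussian and the likelihood is Gaussian data fidelity; the paper's local-Gaussian modelling of ascent directions and 3D extension are not reproduced here. Gradient ascent on log p(x) + log p(y | x) pulls the image toward normal anatomy while staying close to the observation, and lesions show up as large restoration residuals |y − x|.

```python
def restore(y, mu, sigma2, lam2=0.5, step=0.005, n_steps=200):
    # Gradient ascent on the posterior log-density:
    #   grad log p(x)     = (mu - x) / sigma2   (normative ascent direction)
    #   grad log p(y | x) = (y - x)  / lam2     (data-fidelity term)
    # step must be small enough for the iteration to converge.
    x = list(y)
    for _ in range(n_steps):
        for i in range(len(x)):
            grad_prior = (mu[i] - x[i]) / sigma2[i]
            grad_lik = (y[i] - x[i]) / lam2
            x[i] += step * (grad_prior + grad_lik)
    return x

mu = [1.0] * 8        # normative per-pixel mean (healthy anatomy)
sigma2 = [0.01] * 8   # normative per-pixel variance
y = [1.0] * 8
y[3] = 3.0            # simulated bright lesion at pixel 3

restored = restore(y, mu, sigma2)
residual = [abs(a - b) for a, b in zip(y, restored)]  # lesion map
```

Because the normative variance is small, the prior term dominates at the lesioned pixel and the restoration pulls it back toward the healthy mean, while healthy pixels are left essentially unchanged; thresholding the residual yields the detection.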