
    Is Texture Predictive for Age and Sex in Brain MRI?

    Deep learning builds the foundation for many medical image analysis tasks, where neural networks are often designed to have a large receptive field to incorporate long-range spatial dependencies. Recent work has shown that large receptive fields are not always necessary for computer vision tasks on natural images. We explore whether this translates to certain medical imaging tasks, such as age and sex prediction from T1-weighted brain MRI scans.
    Comment: MIDL 2019 [arXiv:1907.08612]
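    To make the restricted-receptive-field idea concrete, here is a minimal PyTorch sketch (a hypothetical illustration, not the paper's architecture): stacking a few 3x3 convolutions without downsampling caps the receptive field at roughly 9x9 pixels, and per-location age estimates are averaged over the whole slice.

```python
# Hypothetical sketch: age regression from MRI slices with a deliberately
# small receptive field (BagNet-style); all names and sizes are illustrative.
import torch
import torch.nn as nn

class SmallRFRegressor(nn.Module):
    """Four 3x3 convs (receptive field ~9x9 px), then a 1x1 head producing
    a per-location age estimate that is averaged over the slice."""
    def __init__(self, in_ch=1, width=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, width, 3), nn.ReLU(),
            nn.Conv2d(width, width, 3), nn.ReLU(),
            nn.Conv2d(width, width, 3), nn.ReLU(),
            nn.Conv2d(width, width, 3), nn.ReLU(),
        )
        self.head = nn.Conv2d(width, 1, 1)  # per-location age estimate

    def forward(self, x):
        local_age = self.head(self.features(x))       # (B, 1, H', W')
        return local_age.mean(dim=(2, 3)).squeeze(1)  # average -> (B,)

model = SmallRFRegressor()
slices = torch.randn(4, 1, 180, 180)  # dummy T1-weighted axial slices
print(model(slices).shape)            # torch.Size([4])
```

    Because every prediction is an average of local estimates, no single output can depend on image context wider than the small receptive field, which is what makes such models easier to interpret.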

    Improving Image-Based Precision Medicine with Uncertainty-Aware Causal Models

    Image-based precision medicine aims to personalize treatment decisions based on an individual's unique imaging features so as to improve their clinical outcome. Machine learning frameworks that integrate uncertainty estimation as part of their treatment recommendations would be safer and more reliable. However, little work has been done to adapt uncertainty estimation techniques and validation metrics for precision medicine. In this paper, we use Bayesian deep learning to estimate the posterior distribution over factual and counterfactual outcomes under several treatments. This allows for estimating the uncertainty for each treatment option and for the individual treatment effect (ITE) between any two treatments. We train and evaluate this model to predict future new and enlarging T2 lesion counts on a large, multi-center dataset of MR brain images of patients with multiple sclerosis who were exposed to several treatments during randomized controlled trials. We evaluate the correlation of the uncertainty estimate with the factual error and, given the lack of ground-truth counterfactual outcomes, demonstrate how uncertainty for the ITE prediction relates to bounds on the ITE error. Lastly, we demonstrate how knowledge of uncertainty could modify clinical decision-making to improve individual patient and clinical trial outcomes.
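    As a hedged illustration of the ITE estimate described above: for treatments a and b, the ITE is the difference between the predicted potential outcomes for the same patient, and a Bayesian model yields its uncertainty by Monte Carlo sampling over posterior weight draws. The `models` ensemble and its `predict` method below are assumed stand-ins, not the paper's API.

```python
# Hedged sketch of Monte Carlo ITE uncertainty, assuming a list `models`
# of posterior weight samples (e.g. from MC dropout or a BNN); names and
# signatures here are illustrative, not the paper's implementation.
import numpy as np

def ite_with_uncertainty(models, x, treat_a, treat_b):
    """For each posterior sample, predict the outcome (e.g. future lesion
    count) under treatments a and b; the ITE is their difference.

    Returns the posterior mean ITE and its standard deviation, which the
    abstract relates to bounds on the (unobservable) ITE error."""
    ites = np.array([
        m.predict(x, treatment=treat_a) - m.predict(x, treatment=treat_b)
        for m in models  # one forward pass per posterior weight sample
    ])
    return ites.mean(axis=0), ites.std(axis=0)
```

    A high ITE standard deviation would flag a treatment comparison as unreliable, which is the kind of signal the paper proposes using to modify clinical decision-making.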

    Probabilistic and causal reasoning in deep learning for imaging

    Typical machine learning research in the imaging domain occurs in clearly defined environments on clean datasets, without considering realistic deployment scenarios. However, applied machine learning systems are exposed to unexpected distribution shifts and still need to produce reliable predictions without relying on spurious correlations. Similarly, such systems encounter ambiguous or unseen inputs and need to communicate their uncertainty. Often, AI systems support a human operator and should provide interpretable explanations of their decisions. This thesis argues for a probabilistic and causal approach to machine learning that is robust to spurious correlations, improves interpretability, and communicates uncertainty.

    First, we investigate the learning abilities of neural networks that are constrained to extracting information from image patches. We show that careful network design can prevent shortcut learning and that restricting the receptive field can improve the interpretability of predictions. We tackle uncertainty estimation by introducing a Bayesian deep learning method that approximates the posterior distribution over the weights of a neural network using an implicit distribution. We verify that our method is capable of solving predictive tasks while providing reliable uncertainty estimates.

    Moving on, we frame various medical prediction tasks within the framework of outlier detection. We apply deep generative modelling to brain MR and CT images as well as histopathology images and show that it is possible to detect pathologies as outliers under a normative model of healthy samples.

    Next, we propose deep structural causal models as a framework capable of capturing causal relationships between imaging and non-imaging data. Our experiments provide evidence that this framework is capable of all rungs of the causal hierarchy. Finally, with further thoughts on applications of uncertainty estimation, robust causal estimation, and fairness, we conclude that the safe and reliable deployment of AI systems in real-world scenarios requires the integration of probabilistic and causal reasoning.
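    The outlier-detection component above can be sketched as follows (an illustrative stand-in, assuming a generative `healthy_model` with a `log_prob` method trained only on healthy scans, such as a VAE bound or a normalizing flow; none of these names come from the thesis): scans that are improbable under the normative model are flagged as potentially pathological.

```python
# Illustrative sketch of normative outlier detection; `healthy_model` and
# its `log_prob` method are assumptions, not the thesis's actual code.
import numpy as np

def pathology_scores(healthy_model, scans):
    """Score each scan by its negative log-likelihood under the normative
    model; pathological scans should look 'surprising' and score higher."""
    return np.array([-healthy_model.log_prob(s) for s in scans])

def flag_outliers(scores, healthy_val_scores, fpr=0.05):
    # Threshold chosen so ~5% of held-out healthy scans are flagged,
    # trading false alarms against sensitivity to pathology.
    threshold = np.quantile(healthy_val_scores, 1 - fpr)
    return scores > threshold
```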