    Reasoning with Uncertainty in Deep Learning for Safer Medical Image Computing

    Deep learning is now ubiquitous in the research field of medical image computing. As such technologies progress towards clinical translation, the question of safety becomes critical. Once deployed, machine learning systems unavoidably face situations where the correct decision or prediction is ambiguous. However, current methods rely predominantly on deterministic algorithms, lacking a mechanism to represent and manipulate uncertainty. In safety-critical applications such as medical imaging, reasoning under uncertainty is crucial for developing a reliable decision-making system. Probabilistic machine learning provides a natural framework to quantify the degree of uncertainty over different variables of interest, be it the prediction, the model parameters and structures, or the underlying data (images and labels). Probability distributions are used to represent all the uncertain unobserved quantities in a model and how they relate to the data, and probability theory is used as a language to compute and manipulate these distributions. In this thesis, we explore probabilistic modelling as a framework to integrate uncertainty information into deep learning models, and demonstrate its utility in various high-dimensional medical imaging applications. In the process, we make several fundamental enhancements to current methods. We categorise our contributions into three groups according to the types of uncertainties being modelled: (i) predictive, (ii) structural, and (iii) human uncertainty. Firstly, we discuss the importance of quantifying predictive uncertainty and understanding its sources for developing a risk-averse and transparent medical image enhancement application. We demonstrate how a measure of predictive uncertainty can be used as a proxy for predictive accuracy in the absence of ground truth. Furthermore, assuming the structure of the model is flexible enough for the task, we introduce a way to decompose the predictive uncertainty into its orthogonal sources, i.e., aleatoric and parameter uncertainty. We show the potential utility of such decoupling in providing quantitative “explanations” of model performance. Secondly, we introduce our recent attempts at learning model structures directly from data. One work proposes a method based on variational inference to learn a posterior distribution over connectivity structures within a neural network architecture for multi-task learning, and shares some preliminary results on the MR-only radiotherapy planning application. Another work explores how the training algorithm of decision trees could be extended to grow the architecture of a neural network to adapt to the amount of available data and the complexity of the task. Lastly, we develop methods to model the “measurement noise” (e.g., biases and skill levels) of human annotators, and integrate this information into the learning process of the neural network classifier. In particular, we show that explicitly modelling the uncertainty involved in the annotation process not only leads to an improvement in robustness to label noise, but also yields useful insights into the patterns of errors that characterise individual experts.
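
    The decomposition of predictive uncertainty mentioned in this abstract is commonly implemented with stochastic forward passes (for example Monte Carlo dropout): the spread of the sampled means reflects parameter (epistemic) uncertainty, while the average of the predicted noise variances reflects aleatoric uncertainty. The sketch below illustrates only that generic recipe; the network, data and variable names are illustrative assumptions and are not taken from the thesis.

```python
# Minimal sketch of aleatoric vs. parameter (epistemic) uncertainty
# decomposition for a regression network using Monte Carlo dropout.
# Everything here (architecture, sizes, sample count) is an assumption.
import torch
import torch.nn as nn

class HeteroscedasticMLP(nn.Module):
    """Predicts a mean and a log-variance per input (aleatoric noise model)."""
    def __init__(self, d_in=16, d_hidden=64, p_drop=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(d_hidden, 1)
        self.logvar_head = nn.Linear(d_hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def decompose_uncertainty(model, x, n_samples=50):
    """Draw several stochastic passes with dropout kept active.

    Variance of the sampled means   -> parameter (epistemic) uncertainty.
    Mean of the predicted variances -> aleatoric uncertainty.
    """
    model.train()  # keep dropout layers stochastic at test time
    means, variances = [], []
    for _ in range(n_samples):
        mu, logvar = model(x)
        means.append(mu)
        variances.append(logvar.exp())
    means = torch.stack(means)        # (n_samples, batch, 1)
    variances = torch.stack(variances)
    epistemic = means.var(dim=0)
    aleatoric = variances.mean(dim=0)
    return epistemic, aleatoric

# Example usage on random inputs
model = HeteroscedasticMLP()
x = torch.randn(8, 16)
epistemic, aleatoric = decompose_uncertainty(model, x)
```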

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Imaging (MRI) as the source data. To facilitate the study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate the study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not handled effectively by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed, which causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method developed here is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated through a detailed landmark study. To facilitate the study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
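
    For context, the classical baseline that such a segmentation method extends is intensity-based EM tissue classification: it alternates between estimating class posteriors for each voxel intensity (E-step) and re-estimating the tissue class parameters (M-step). The sketch below shows only that generic baseline; the explicit partial-volume correction described in the dissertation, and any spatial priors, are deliberately omitted, and the class count and initialisation are assumptions.

```python
# Generic EM for intensity-based tissue classification (e.g. GM / WM / CSF).
# This is a simplified illustration, not the dissertation's method.
import numpy as np

def em_tissue_classification(intensities, n_classes=3, n_iter=50):
    x = np.asarray(intensities, dtype=float).ravel()
    # Initialise class means spread across the intensity range.
    mu = np.linspace(x.min(), x.max(), n_classes)
    var = np.full(n_classes, x.var() / n_classes)
    pi = np.full(n_classes, 1.0 / n_classes)

    for _ in range(n_iter):
        # E-step: posterior probability of each class for each voxel.
        resp = np.stack([
            pi[k] * np.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
            / np.sqrt(2 * np.pi * var[k])
            for k in range(n_classes)
        ], axis=1)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: update mixture weights, means and variances.
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return resp, mu, var

# Example on synthetic intensities drawn from three tissue classes
rng = np.random.default_rng(0)
fake = np.concatenate([rng.normal(m, 5, 1000) for m in (40, 90, 150)])
posteriors, means, variances = em_tissue_classification(fake)
```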

    Diffeomorphic registration using geodesic shooting and Gauss-Newton optimisation

    This paper presents a nonlinear image registration algorithm based on the setting of Large Deformation Diffeomorphic Metric Mapping (LDDMM), but with a more efficient optimisation scheme, both in terms of the memory required and the number of iterations needed to reach convergence. Rather than performing a variational optimisation on a series of velocity fields, the algorithm is formulated to use a geodesic shooting procedure, so that only an initial velocity is estimated. A Gauss-Newton optimisation strategy is used to achieve faster convergence. The algorithm was evaluated using freely available manually labelled datasets and found to compare favourably with other inter-subject registration algorithms evaluated using the same data.
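
    In the geodesic shooting formulation, the registration is typically posed as minimising an energy over the initial velocity alone; the notation below is a generic sketch of that setting, not copied from the paper:

\[
E(\mathbf{v}_0) \;=\; \tfrac{1}{2}\,\langle L\mathbf{v}_0,\,\mathbf{v}_0\rangle
\;+\; \tfrac{1}{2\sigma^{2}}\,\bigl\| I_0 \circ \varphi_1^{-1} - I_1 \bigr\|_{L^2}^{2},
\]

    where \(L\) is the differential operator defining the norm on velocity fields, \(I_0\) and \(I_1\) are the source and target images, and \(\varphi_1\) is the diffeomorphism obtained by integrating the flow whose velocity evolves from \(\mathbf{v}_0\) according to the EPDiff equation. Because the entire deformation path is determined by \(\mathbf{v}_0\), only that initial velocity needs to be stored and optimised, which is what enables the memory savings and the Gauss-Newton updates described in the abstract.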

    Deep MR to CT Synthesis for PET/MR Attenuation Correction

    Positron Emission Tomography - Magnetic Resonance (PET/MR) imaging combines the functional information from PET with the flexibility of MR imaging. It is essential, however, to correct for photon attenuation when reconstructing PET images, which is challenging for PET/MR as neither modality directly images tissue attenuation properties. Classical MR-based computed tomography (CT) synthesis methods, such as multi-atlas propagation, have been the methods of choice for PET attenuation correction (AC); however, they are slow and handle anatomical abnormalities poorly. To overcome this limitation, this thesis explores the rising field of artificial intelligence in order to develop novel methods for PET/MR AC. Deep learning-based synthesis methods such as the standard U-Net architecture are not very stable, accurate, or robust to small variations in image appearance. The first proposed MR to CT synthesis method therefore deploys a boosting strategy, in which multiple weak predictors are combined into a strong predictor, providing a significant improvement in CT and PET reconstruction accuracy. Standard deep learning-based methods, as well as more advanced approaches such as the first proposed method, show issues in the presence of very complex imaging environments and large images such as whole-body images. The second proposed method therefore learns the image context between whole-body MRs and CTs across multiple resolutions while simultaneously modelling uncertainty. Lastly, as the purpose of synthesizing a CT is to better reconstruct PET data, the use of CT-based loss functions is questioned within this thesis. Such losses fail to recognize the main objective of MR-based AC, which is to generate a synthetic CT that, when used for PET AC, makes the reconstructed PET as close as possible to the gold-standard PET. The third proposed method introduces a novel PET-based loss that minimizes CT residuals with respect to the PET reconstruction.
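
    The boosting strategy is described only at a high level in this abstract, so the sketch below shows the generic residual-boosting idea under assumed interfaces: each weak MR-to-CT predictor is fitted to the residual left by the running ensemble, and the synthetic CT is the sum of their outputs. The train_weak_predictor interface and the toy linear learner are placeholders, not the thesis implementation.

```python
# Generic residual boosting for image synthesis, on flattened patches.
# Interfaces and the toy linear learner are illustrative assumptions.
import numpy as np

def boosted_synthesis(mr_patches, ct_patches, train_weak_predictor, n_rounds=3):
    """mr_patches, ct_patches: arrays of shape (n_samples, n_features)."""
    residual = ct_patches.astype(float).copy()
    predictors = []
    for _ in range(n_rounds):
        f = train_weak_predictor(mr_patches, residual)  # returns a callable
        residual = residual - f(mr_patches)             # fit what is left over
        predictors.append(f)

    def synthesise(mr_new):
        # Final synthetic CT is the sum of all weak predictions.
        return sum(f(mr_new) for f in predictors)
    return synthesise

# Example with a trivial least-squares weak learner on random data
def train_linear(x, y):
    w, *_ = np.linalg.lstsq(x, y, rcond=None)
    return lambda x_new: x_new @ w

rng = np.random.default_rng(0)
mr = rng.normal(size=(200, 32))
ct = mr @ rng.normal(size=(32, 32)) + 0.1 * rng.normal(size=(200, 32))
synth = boosted_synthesis(mr, ct, train_linear)
pred_ct = synth(mr)
```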

    Deep learning-based improvement for the outcomes of glaucoma clinical trials

    Glaucoma is the leading cause of irreversible blindness worldwide. It is a progressive optic neuropathy in which retinal ganglion cell (RGC) axon loss, probably as a consequence of damage at the optic disc, causes a loss of vision, predominantly affecting the mid-peripheral visual field (VF). Glaucoma results in a decrease in vision-related quality of life, and therefore early detection and evaluation of disease progression rates are crucial in order to assess the risk of functional impairment and to establish sound treatment strategies. The aim of my research is to improve glaucoma diagnosis by enhancing state-of-the-art analyses of glaucoma clinical trial outcomes using advanced analytical methods. This knowledge would also help to better design and analyse clinical trials, providing evidence for re-evaluating existing medications, facilitating diagnosis and suggesting novel disease management. Towards this objective, this thesis provides the following contributions: (i) I developed deep learning-based super-resolution (SR) techniques for optical coherence tomography (OCT) image enhancement and demonstrated that using super-resolved images improves the statistical power of clinical trials; (ii) I developed a deep learning algorithm for segmentation of retinal OCT images, showing that the methodology consistently produces more accurate segmentations than state-of-the-art networks; (iii) I developed a deep learning framework for refining the relationship between structural and functional measurements and demonstrated that the mapping is significantly improved over previous techniques; (iv) I developed a probabilistic method and demonstrated that glaucomatous disc haemorrhages are influenced by a possible systemic factor that makes both eyes bleed simultaneously; (v) I recalculated VF slopes, using the retinal nerve fiber layer thickness (RNFLT) from the super-resolved OCT as a Bayesian prior, and demonstrated that use of VF rates with the Bayesian prior as the outcome measure leads to a reduction in the sample size required to distinguish treatment arms in a clinical trial.
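
    Contribution (v) uses a structural measurement as a Bayesian prior on the functional rate of change. A minimal, generic way to do this, assuming a linear VF trend, a known noise variance and a Gaussian prior on the slope centred on a structure-derived estimate, is conjugate Bayesian linear regression; the sketch below illustrates that simplified setting and is not the analysis used in the thesis.

```python
# Conjugate Bayesian linear regression for a VF slope with a structural prior.
# All parameter values and the linear-trend model are simplifying assumptions.
import numpy as np

def bayesian_vf_slope(times, vf_values, prior_slope, prior_sd=1.0,
                      intercept_sd=10.0, noise_sd=2.0):
    """Fit y = a + b*t with prior b ~ N(prior_slope, prior_sd^2)."""
    X = np.column_stack([np.ones_like(times), times])   # design matrix [1, t]
    y = np.asarray(vf_values, dtype=float)

    # Weak intercept prior centred on the first measurement.
    prior_mean = np.array([y[0], prior_slope])
    prior_cov = np.diag([intercept_sd ** 2, prior_sd ** 2])

    precision = np.linalg.inv(prior_cov) + X.T @ X / noise_sd ** 2
    post_cov = np.linalg.inv(precision)
    post_mean = post_cov @ (np.linalg.inv(prior_cov) @ prior_mean
                            + X.T @ y / noise_sd ** 2)
    return post_mean[1], np.sqrt(post_cov[1, 1])  # posterior slope and its SD

# Example: noisy decline of -0.8 dB/year, structural prior suggesting -1.0
t = np.arange(0, 5, 0.5)
y = 30 - 0.8 * t + np.random.default_rng(1).normal(0, 2, t.size)
slope, slope_sd = bayesian_vf_slope(t, y, prior_slope=-1.0)
```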

    A proposal for a coordinated effort for the determination of brainwide neuroanatomical connectivity in model organisms at a mesoscopic scale

    In this era of complete genomes, our knowledge of neuroanatomical circuitry remains surprisingly sparse. Such knowledge is, however, critical for both basic and clinical research into brain function. Here we advocate a concerted effort to fill this gap through systematic, experimental mapping of neural circuits at a mesoscopic scale of resolution suitable for comprehensive, brain-wide coverage, using injections of tracers or viral vectors. We detail the scientific and medical rationale and briefly review existing knowledge and experimental techniques. We define a set of desiderata, including brain-wide coverage; validated and extensible experimental techniques suitable for standardization and automation; a centralized, open-access data repository; compatibility with existing resources; and tractability with current informatics technology. We discuss a hypothetical but tractable plan for the mouse, additional efforts for the macaque, and technique development for humans. We estimate that the mouse connectivity project could be completed within five years on a comparatively modest budget.

    MAGNIMS recommendations for harmonization of MRI data in MS multicenter studies

    There is an increasing need to share harmonized data from large, cooperative studies, as this is essential to develop new diagnostic and prognostic biomarkers. In the field of multiple sclerosis (MS), the issue has become of paramount importance due to the need to translate some of the most recent MRI achievements into the clinical setting. However, differences in MRI acquisition parameters, image analysis and data storage across sites, with their potential bias, represent a substantial constraint. This review focuses on the state of the art, recent technical advances, and desirable future developments in the harmonization of acquisition, analysis and storage of large-scale multicentre MRI data of MS cohorts. Huge efforts are currently being made to meet all the requirements needed to provide harmonized MRI datasets in the MS field, as proper management of large imaging datasets is one of our greatest opportunities and challenges in the coming years. Recommendations based on these achievements are provided here. Despite the advances that have been made, the complexity of these tasks requires further research by specialized academic centres with dedicated technical and human resources. Such collective efforts, involving different professional profiles, are of crucial importance to offer MS patients personalised management while minimizing the consumption of resources.