
    Converting Neuroimaging Big Data to Information: Statistical Frameworks for Interpretation of Image-Driven Biomarkers and Image-Driven Disease Subtyping

    Large-scale clinical trials and population-based research studies collect vast amounts of neuroimaging data. Machine learning classifiers can potentially use these data to train models that diagnose brain-related diseases from individual brain scans. In this dissertation we address two distinct challenges that limit the wider adoption of these tools for diagnostic purposes. The first challenge facing neuroimaging-based disease classification is the lack of a statistical inference machinery for highlighting brain regions that contribute significantly to the classifier's decisions. We address this challenge by developing an analytic framework for interpreting support vector machine (SVM) models used for neuroimaging-based diagnosis of psychiatric disease. To do this, we first note that permutation testing using SVM model components provides a reliable inference mechanism for model interpretation. We then derive our analysis framework by showing that, under certain assumptions, the permutation-based null distributions associated with SVM model components can be approximated analytically from the data themselves. Inference based on these analytic null distributions is validated on real and simulated data. p-values computed from our analysis can accurately identify anatomical features that differentiate the groups used for classifier training. Since the clinical and research communities are largely trained to interpret statistical p-values rather than machine learning techniques such as the SVM, we hope that this work will lead to a better understanding of SVM classifiers and motivate a wider adoption of SVM models for image-based diagnosis of psychiatric disease. A second deficiency of learning-based neuroimaging diagnostics is that they implicitly assume that 'a single homogeneous pattern of brain changes drives population-wide phenotypic differences'. In reality, it is more likely that multiple patterns of brain deficits drive the complexities observed in the clinical presentation of most diseases. Understanding this heterogeneity may allow us to build better classifiers for identifying such diseases from individual brain scans. However, analytic tools to explore this heterogeneity are missing. With this in view, we present in this dissertation a framework for exploring disease heterogeneity using population neuroimaging data. The approach first computes difference images by comparing matched cases and controls, and then clusters these differences. The cluster centers define a set of deficit patterns that differentiate the two groups. By allowing for more than one pattern of difference between two populations, our framework makes a radical departure from traditional tools used for neuroimaging group analyses. We hope that this leads to a better understanding of the processes that cause disease, and ultimately to improved image-based disease classifiers.
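
    As a rough illustration of the permutation-testing idea described in this abstract (a minimal sketch, not the dissertation's analytic approximation), the snippet below builds an empirical permutation null for each component of a linear SVM weight map with scikit-learn; the sample size, voxel count and significance threshold are hypothetical.

```python
# Illustrative sketch only: empirical permutation testing of linear SVM weight
# components. Dimensions, labels and the 0.05 threshold are hypothetical.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_subjects, n_voxels = 80, 500                   # hypothetical data dimensions
X = rng.normal(size=(n_subjects, n_voxels))      # stand-in for voxel-wise features
y = rng.integers(0, 2, size=n_subjects)          # 0 = control, 1 = patient

def svm_weights(X, y):
    """Fit a linear SVM and return its voxel-wise weight vector."""
    return LinearSVC(C=1.0, dual=False, max_iter=5000).fit(X, y).coef_.ravel()

w_obs = svm_weights(X, y)

# Build a permutation null for each weight by refitting with shuffled labels.
n_perm = 1000
null = np.empty((n_perm, n_voxels))
for i in range(n_perm):
    null[i] = svm_weights(X, rng.permutation(y))

# Two-sided permutation p-value per voxel (with the usual +1 correction).
p_values = (1 + (np.abs(null) >= np.abs(w_obs)).sum(axis=0)) / (n_perm + 1)
print(f"{(p_values < 0.05).sum()} voxels below p < 0.05 (uncorrected)")
```

    The dissertation's contribution is to replace the costly refitting loop above with an analytic approximation of the null distribution; the empirical version is shown here only because it makes the inference logic explicit.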

    Imaging of epileptic activity using EEG-correlated functional MRI.

    This thesis describes the method of EEG-correlated fMRI and its application to patients with epilepsy. First, an introduction to MRI and functional imaging methods in the field of epilepsy is provided. Then, the present and future role of EEG-correlated fMRI in the investigation of the epilepsies is discussed. The fourth chapter reviews the important practicalities of EEG-correlated fMRI that were addressed in this project, including patient safety, EEG quality and MRI artifacts during EEG-correlated fMRI. Technical solutions that enable safe, good-quality EEG recordings inside the MR scanner are presented, including optimisation of the EEG recording techniques and algorithms for the on-line subtraction of pulse and image artifacts. In chapter five, a study applying spike-triggered fMRI to patients with focal epilepsy (n = 24) is presented. Using statistical parametric mapping (SPM), cortical Blood Oxygen Level-Dependent (BOLD) activations corresponding to the presumed generators of the interictal epileptiform discharges (IED) were identified in twelve patients. The results were reproducible in repeated experiments in eight patients. In the remaining patients, either no significant activation was present (n = 10) or the activation did not correspond to the presumed epileptic focus (n = 2). The clinical implications of this finding are discussed. A second study demonstrated that, in selected patients, individual (as opposed to averaged) IED could also be associated with hemodynamic changes detectable with fMRI. Chapter six gives examples of combining EEG-correlated fMRI with other modalities to obtain complementary information on interictal epileptiform activity and epileptic foci. One study compared spike-triggered fMRI activation maps with EEG source analysis based on 64-channel scalp EEG recordings of interictal spikes, using co-registration of both modalities. In all but one patient, the source analysis solutions were anatomically concordant with the BOLD activation. Further, the combination of spike-triggered fMRI with diffusion tensor and chemical shift imaging is demonstrated in a patient with localisation-related epilepsy. In chapter seven, applications of EEG-correlated fMRI in different areas of neuroscience are discussed. Finally, the initial imaging findings with the novel technique for the simultaneous and continuous acquisition of fMRI and EEG data are presented as an outlook on future applications of EEG-correlated fMRI. In conclusion, the technical problems of both EEG-triggered fMRI and simultaneous EEG-correlated fMRI are now largely solved. The method has proved useful in providing new insights into the generation of epileptiform activity and other pathological and physiological brain activity. Currently, its utility in clinical epileptology remains unknown.
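
    For readers unfamiliar with how spike-triggered analyses of this kind are typically set up, the sketch below (not the thesis's SPM pipeline) convolves interictal spike onsets marked on the EEG with a canonical HRF and fits a voxel-wise GLM using nilearn; the synthetic image, TR and onset times are placeholders.

```python
# Illustrative sketch only (not the thesis's SPM pipeline): interictal spike
# onsets taken from the EEG are convolved with a canonical HRF and regressed
# against the fMRI time series. The synthetic image, TR and onsets are placeholders.
import numpy as np
import pandas as pd
import nibabel as nib
from nilearn.glm.first_level import FirstLevelModel, make_first_level_design_matrix

rng = np.random.default_rng(0)
t_r, n_scans = 3.0, 200                          # hypothetical acquisition parameters
fmri_img = nib.Nifti1Image(                      # stand-in for the real 4D EPI series
    rng.normal(size=(8, 8, 8, n_scans)).astype("float32"), affine=np.eye(4))
spike_onsets = [12.0, 87.5, 143.0, 301.2]        # spike times (s) marked on the EEG

events = pd.DataFrame({"onset": spike_onsets,
                       "duration": 0.0,          # model each spike as an impulse
                       "trial_type": "spike"})

# Convolve the spike events with the canonical (SPM-style) HRF.
frame_times = np.arange(n_scans) * t_r
design = make_first_level_design_matrix(frame_times, events, hrf_model="spm")

# Fit a voxel-wise GLM and map where the spike regressor explains BOLD signal.
glm = FirstLevelModel(t_r=t_r, mask_img=False).fit(fmri_img, design_matrices=design)
z_map = glm.compute_contrast("spike", output_type="z_score")
z_map.to_filename("spike_activation_zmap.nii.gz")
```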

    Reasoning with Uncertainty in Deep Learning for Safer Medical Image Computing

    Deep learning is now ubiquitous in the research field of medical image computing. As such technologies progress towards clinical translation, the question of safety becomes critical. Once deployed, machine learning systems unavoidably face situations where the correct decision or prediction is ambiguous. However, current methods disproportionately rely on deterministic algorithms, which lack a mechanism to represent and manipulate uncertainty. In safety-critical applications such as medical imaging, reasoning under uncertainty is crucial for developing a reliable decision-making system. Probabilistic machine learning provides a natural framework to quantify the degree of uncertainty over different variables of interest, be it the prediction, the model parameters and structures, or the underlying data (images and labels). Probability distributions are used to represent all the uncertain unobserved quantities in a model and how they relate to the data, and probability theory is used as a language to compute and manipulate these distributions. In this thesis, we explore probabilistic modelling as a framework for integrating uncertainty information into deep learning models, and demonstrate its utility in various high-dimensional medical imaging applications. In the process, we make several fundamental enhancements to current methods. We categorise our contributions into three groups according to the types of uncertainty being modelled: (i) predictive, (ii) structural and (iii) human uncertainty. Firstly, we discuss the importance of quantifying predictive uncertainty and understanding its sources for developing a risk-averse and transparent medical image enhancement application. We demonstrate how a measure of predictive uncertainty can be used as a proxy for predictive accuracy in the absence of ground truths. Furthermore, assuming the structure of the model is flexible enough for the task, we introduce a way to decompose the predictive uncertainty into its orthogonal sources, i.e., aleatoric and parameter uncertainty. We show the potential utility of such a decoupling in providing quantitative “explanations” of the model's performance. Secondly, we introduce our recent attempts at learning model structures directly from data. One work proposes a method based on variational inference to learn a posterior distribution over connectivity structures within a neural network architecture for multi-task learning, and shares some preliminary results in the MR-only radiotherapy planning application. Another work explores how the training algorithm of decision trees could be extended to grow the architecture of a neural network to adapt to the available data and the complexity of the task. Lastly, we develop methods to model the “measurement noise” (e.g., biases and skill levels) of human annotators, and integrate this information into the learning process of the neural network classifier. In particular, we show that explicitly modelling the uncertainty involved in the annotation process not only leads to an improvement in robustness to label noise, but also yields useful insights into the patterns of errors that characterise individual experts.
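
    One common way to realise the predictive-uncertainty decomposition mentioned above is Monte Carlo dropout, where the total predictive entropy splits into an expected (aleatoric) entropy and a mutual-information (parameter) term. The sketch below illustrates this with a toy PyTorch classifier; the architecture, input shapes and sample count are hypothetical and are not the thesis's actual models.

```python
# Minimal sketch (one common approximation, not the thesis's models): Monte Carlo
# dropout used to split predictive uncertainty into aleatoric and parameter parts.
import torch
import torch.nn as nn

class DropoutClassifier(nn.Module):
    """Toy classifier with dropout; shapes are hypothetical."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32, 128), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_uncertainty(model, x, n_samples=50):
    """Return total, aleatoric and parameter uncertainty per input (in nats)."""
    model.train()                                  # keep dropout active at test time
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_p = probs.mean(dim=0)                                           # (batch, classes)
    total = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(-1)            # predictive entropy
    aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)  # expected entropy
    parameter = total - aleatoric                                        # mutual information
    return total, aleatoric, parameter

model = DropoutClassifier()
x = torch.randn(4, 1, 32, 32)                      # hypothetical batch of single-channel scans
total, aleatoric, parameter = mc_dropout_uncertainty(model, x)
print(total, aleatoric, parameter)
```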

    Deep Learning for Medical Imaging in a Biased Environment

    Deep learning (DL) based applications have successfully solved numerous problems in machine perception. In radiology, DL-based image analysis systems are rapidly evolving and show progress in guiding treatment decisions, diagnosing and localizing disease on medical images, and improving radiologists' workflow. However, many DL-based radiological systems fail to generalize when deployed in new hospital settings, and the causes of these failures are not always clear. Although significant effort continues to be invested in applying DL algorithms to radiological data, many open questions and issues arising from incomplete datasets remain. To bridge this gap, we first review the current state of artificial intelligence applied to radiology data, and then juxtapose classical computer vision features (i.e., hand-crafted features) with the recent advances brought about by deep learning. However, using DL is not an excuse for a lack of rigorous study design, which we demonstrate by proposing sanity tests that determine when a DL system is right for the wrong reasons. Having established the appropriate way to assess DL systems, we then turn to improving their efficacy and generalizability by leveraging prior information about human physiology and data derived from dual-energy computed tomography scans. In this dissertation, we address these gaps in the radiology literature by introducing new tools, testing strategies, and methods to mitigate the influence of dataset biases.
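
    As a hedged illustration of what a "right for the wrong reasons" sanity test might look like (the dissertation's actual tests may differ), the sketch below masks out the anatomy a classifier is supposed to rely on and checks whether its accuracy drops; model, images, labels and region_mask are hypothetical stand-ins.

```python
# Hedged sketch, not the dissertation's actual test: if accuracy barely drops after
# the clinically relevant region is masked out, the classifier is probably keying
# on confounds such as scanner or site artifacts.
import numpy as np

def accuracy(model, images, labels):
    """Fraction of correct predictions; `model` is any object with .predict()."""
    return float(np.mean(model.predict(images) == labels))

def masked_region_sanity_test(model, images, labels, region_mask, max_drop=0.10):
    """Zero out the anatomy the model *should* use and compare accuracies."""
    acc_full = accuracy(model, images, labels)
    masked = images * (1 - region_mask)            # remove the relevant anatomy
    acc_masked = accuracy(model, masked, labels)
    suspicious = (acc_full - acc_masked) < max_drop
    return acc_full, acc_masked, suspicious

# Usage with hypothetical inputs:
# acc_full, acc_masked, flag = masked_region_sanity_test(model, images, labels, region_mask)
# if flag:
#     print("Warning: performance barely depends on the relevant anatomy.")
```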