
    Mathematical modeling and visualization of functional neuroimages


    Explainable deep learning classifiers for disease detection based on structural brain MRI data

    This doctoral thesis investigates how successfully deep learning can support the diagnosis of neurodegenerative diseases. In five experimental studies, the application of convolutional neural networks (CNNs) to magnetic resonance imaging (MRI) data is examined, with a particular focus on the explainability of these otherwise opaque models. Using methods of explainable artificial intelligence (AI), heatmaps are generated that show the relevance of individual image regions for the model's decision. The five studies of this dissertation demonstrate the potential of CNNs for disease detection on neurological MRI, especially in combination with explainable-AI methods. Several challenges are identified in the studies, and possible solutions are evaluated in the experiments. Across all studies, the CNNs achieved good classification accuracies and could be validated by comparing their heatmaps against the clinical literature. Furthermore, a new CNN architecture was developed that is specialized for the spatial properties of brain MRI images.

    Deep learning, and especially convolutional neural networks (CNNs), has a high potential of being implemented into clinical decision support software for tasks such as diagnosis and prediction of disease courses. This thesis studied the application of CNNs to structural MRI data for diagnosing neurological diseases. Specifically, multiple sclerosis and Alzheimer’s disease were used as classification targets due to their high prevalence, data availability, and apparent biomarkers in structural MRI data. The classification task is challenging because pathology can be highly individual and difficult for human experts to detect, and because sample sizes are small owing to the high acquisition cost and sensitivity of medical imaging data. A roadblock to adopting CNNs in clinical practice is their lack of interpretability. Therefore, after optimizing the machine learning models for predictive performance (e.g. balanced accuracy), we employed explainability methods to study the reliability and validity of the trained models. The deep learning models achieved good predictive performance of over 87% balanced accuracy on all tasks, and the explainability heatmaps were coherent with known clinical biomarkers for both disorders. Explainability methods were compared quantitatively using brain atlases, and shortcomings regarding their robustness were revealed. Further investigations showed clear benefits of transfer learning and image registration on model performance. Lastly, a new CNN layer type was introduced that incorporates a prior on the spatial homogeneity of neuro-MRI data. CNNs excel on natural images, which possess spatial heterogeneity; even though MRI data and natural images share computational similarities, the composition and orientation of neuro-MRI are very distinct. The introduced patch-individual filter (PIF) layer breaks the assumption of spatial invariance of CNNs and reduces convergence time on different data sets without reducing predictive performance. The presented work highlights many challenges that CNNs for disease diagnosis face on MRI data, and defines as well as tests strategies to overcome them.
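
    The patch-individual filter idea can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch reconstruction (the class name PatchIndividualFilter3d, the 2x2x2 patch grid, and the channel sizes are illustrative assumptions, not the thesis code): the input volume is divided into a fixed grid of patches and each patch is convolved with its own filter bank, so weights are no longer shared across the whole brain.

    # Minimal sketch of a patch-individual filter layer (illustrative assumption, not the thesis code).
    # Each spatial patch of the input volume gets its own Conv3d, relaxing the weight
    # sharing (spatial invariance) of a standard convolution.
    import torch
    import torch.nn as nn

    class PatchIndividualFilter3d(nn.Module):
        def __init__(self, in_ch, out_ch, patches=(2, 2, 2), kernel_size=3):
            super().__init__()
            self.patches = patches
            n_patches = patches[0] * patches[1] * patches[2]
            # one independent convolution per cell of the (D, H, W) patch grid
            self.convs = nn.ModuleList(
                [nn.Conv3d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
                 for _ in range(n_patches)]
            )

        def forward(self, x):
            pd, ph, pw = self.patches
            d, h, w = x.shape[2] // pd, x.shape[3] // ph, x.shape[4] // pw
            planes, idx = [], 0
            for i in range(pd):
                rows = []
                for j in range(ph):
                    cols = []
                    for k in range(pw):
                        patch = x[:, :, i*d:(i+1)*d, j*h:(j+1)*h, k*w:(k+1)*w]
                        cols.append(self.convs[idx](patch))
                        idx += 1
                    rows.append(torch.cat(cols, dim=4))
                planes.append(torch.cat(rows, dim=3))
            return torch.cat(planes, dim=2)

    # toy usage: a single-channel 32^3 volume split into a 2x2x2 patch grid
    layer = PatchIndividualFilter3d(1, 8)
    print(layer(torch.randn(1, 1, 32, 32, 32)).shape)  # torch.Size([1, 8, 32, 32, 32])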

    Identifying and reverting the adverse effects of white matter hyperintensities on cortical surface analyses

    Common white matter lesions of the brain were degrading MRI image analysis: development of an improved method that incorporates machine learning into the conventional approach. Kyoto University press release, 2023-10-02.

    The Human Connectome Project (HCP)-style surface-based brain MRI analysis is a powerful technique that allows precise mapping of the cerebral cortex. However, the strength of its surface-based analysis has not yet been tested in the older population, which often presents with white matter hyperintensities (WMHs) on T2-weighted (T2w) MRI (hypointensities on T1-weighted (T1w) MRI). We investigated T1w and T2w structural MRI in 43 healthy middle-aged to old participants. Juxtacortical WMHs were often misclassified by the default HCP pipeline as parts of the gray matter in T1w MRI, leading to incorrect estimation of the cortical surfaces and cortical metrics. To revert the adverse effects of juxtacortical WMHs, we incorporated the Brain Intensity AbNormality Classification Algorithm (BIANCA) into the HCP pipeline (proposed pipeline). Blinded radiologists performed stereological quality control (QC) and found a decrease in estimation errors with the proposed pipeline. The superior performance of the proposed pipeline was confirmed using an originally developed automated surface QC based on a large database. Here we showed the detrimental effects of juxtacortical WMHs on the estimation of cortical surfaces and related metrics and proposed a possible solution to this problem. The present knowledge and methodology should help researchers identify adequate cortical surface biomarkers for aging and age-related neuropsychiatric disorders.
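
    One common way to implement the core idea can be sketched in a few lines. The snippet below is an illustrative assumption, not the published pipeline: it presumes a binary WMH mask (e.g. thresholded BIANCA output) is already available, uses placeholder file names, and simply fills juxtacortical WMH voxels in the T1w image with a typical white-matter intensity so that surface estimation no longer mistakes them for gray matter.

    # Sketch: neutralize WMH voxels in a T1w image before surface estimation.
    # Assumes a binary WMH mask (e.g. thresholded BIANCA output); file names and the
    # intensity heuristic are placeholders, not the method published with the pipeline.
    import nibabel as nib
    import numpy as np

    t1 = nib.load("T1w_acpc.nii.gz")
    wmh = nib.load("wmh_mask.nii.gz")              # 1 inside WMH voxels, 0 elsewhere

    data = t1.get_fdata()
    mask = wmh.get_fdata() > 0.5

    # WMHs are hypointense on T1w; replace them with a reference intensity taken from
    # bright (normal-appearing white matter) voxels so the gray/white boundary is kept.
    wm_reference = np.median(data[~mask & (data > np.percentile(data, 75))])
    data[mask] = wm_reference

    nib.save(nib.Nifti1Image(data, t1.affine, t1.header), "T1w_acpc_wmh_filled.nii.gz")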

    Machine Learning Methods for Medical and Biological Image Computing

    Medical and biological imaging technologies provide valuable visualization of structure and function in an organ, from the level of individual molecules to the whole object. The brain is the most complex organ in the body, and it increasingly attracts intense research attention with the rapid development of medical and biological imaging technologies. The massive amount of high-dimensional brain imaging data being generated makes the design of computational methods for efficient analysis of those images highly demanded. Current computational methods using hand-crafted features do not scale with the increasing number of brain images, hindering the pace of scientific discovery in neuroscience. In this thesis, I propose computational methods using high-level features for automated analysis of brain images at different levels. At the brain function level, I develop a deep learning based framework for completing and integrating multi-modality neuroimaging data, which increases the diagnosis accuracy for Alzheimer’s disease. At the cellular level, I propose to use three-dimensional convolutional neural networks (CNNs) for segmenting volumetric neuronal images, which improves the performance of digital reconstruction of neuron structures. I design a novel CNN architecture such that model training and testing image prediction can be implemented in an end-to-end manner. At the molecular level, I build a voxel CNN classifier to capture discriminative features of the input along three spatial dimensions, which facilitates the identification of secondary structures of proteins from electron microscopy images. In order to classify genes specifically expressed in different brain cell types, I propose to use invariant image feature descriptors to capture local gene expression information from cellular-resolution in situ hybridization images. I build image-level representations by applying regularized learning and vector quantization on the generated image descriptors. The computational methods developed in this dissertation are evaluated using images from medical and biological experiments, in comparison with baseline methods. Experimental results demonstrate that the developed representations, formulations, and algorithms are effective and efficient in learning from brain imaging data.
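
    The end-to-end volumetric setup mentioned for the cellular level can be illustrated with a minimal sketch. The fully convolutional 3D network below is a hypothetical stand-in (layer sizes and the toy data are assumptions, not the architecture from the thesis): a volume goes in, a per-voxel foreground probability of the same size comes out, so training and whole-volume prediction share one forward pass.

    # Minimal fully convolutional 3D segmentation sketch (illustrative, not the thesis model).
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 1, 1),                       # 1x1x1 conv -> per-voxel logit
    )

    volume = torch.randn(1, 1, 64, 64, 64)         # e.g. a neuronal image stack
    target = (torch.rand(1, 1, 64, 64, 64) > 0.9).float()

    # one end-to-end training step (optimizer omitted); prediction uses the same pass
    loss = nn.functional.binary_cross_entropy_with_logits(model(volume), target)
    loss.backward()
    print(torch.sigmoid(model(volume)).shape)      # same spatial size as the input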

    Kernel Methods for Machine Learning with Life Science Applications


    Deep Interpretability Methods for Neuroimaging

    Brain dynamics are highly complex and yet hold the key to understanding brain function and dysfunction. The dynamics captured by resting-state functional magnetic resonance imaging (fMRI) data are noisy, high-dimensional, and not readily interpretable. The typical approach of reducing these data to low-dimensional features and focusing on the most predictive features comes with strong assumptions and can miss essential aspects of the underlying dynamics. In contrast, introspection of discriminatively trained deep learning models may uncover disorder-relevant elements of the signal at the level of individual time points and spatial locations. Nevertheless, the difficulty of reliable training on high-dimensional but small-sample datasets and the unclear relevance of the resulting predictive markers prevent the widespread use of deep learning in functional neuroimaging. In this dissertation, we address these challenges by proposing a deep learning framework that learns from high-dimensional dynamical data while maintaining stable, ecologically valid interpretations. The developed model is pre-trainable and alleviates the need to collect an enormous number of neuroimaging samples to achieve optimal training. We also provide a quantitative validation module, Retain and Retrain (RAR), that can objectively verify the higher predictability of the dynamics learned by the model. Results demonstrate that the proposed framework enables learning fMRI dynamics directly from small data and capturing compact, stable interpretations of features predictive of function and dysfunction. We also comprehensively review the deep interpretability literature in the neuroimaging domain. Our analysis reveals the ongoing trend of interpretability practices in neuroimaging studies and identifies gaps that should be addressed for effective human-machine collaboration in this domain. This dissertation also proposes a post hoc interpretability method, Geometrically Guided Integrated Gradients (GGIG), that leverages geometric properties of the functional space as learned by a deep learning model. With extensive experiments and quantitative validation on the MNIST and ImageNet datasets, we demonstrate that GGIG outperforms integrated gradients (IG), a popular interpretability method in the literature. Because GGIG can identify the contours of the discriminative regions in the input space, it may be useful in various medical imaging tasks where fine-grained localization as an explanation is beneficial.
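
    For reference, the integrated-gradients baseline that GGIG is compared against can be written down in a few lines. The sketch below is a generic Riemann-sum approximation of IG for any differentiable PyTorch model with a scalar output; it illustrates the baseline only and is not the GGIG method, and the toy linear model at the end is an assumption used purely for demonstration.

    # Generic integrated gradients (Riemann-sum approximation) -- the IG baseline, not GGIG.
    import torch

    def integrated_gradients(model, x, baseline=None, steps=50):
        baseline = torch.zeros_like(x) if baseline is None else baseline
        grads = []
        for alpha in torch.linspace(0.0, 1.0, steps):
            point = (baseline + alpha * (x - baseline)).requires_grad_(True)
            model(point).backward()                # scalar model output assumed
            grads.append(point.grad.detach())
        # attribution = (input - baseline) * average gradient along the straight path
        return (x - baseline) * torch.stack(grads).mean(dim=0)

    # toy usage: for a linear model the attributions recover each input's contribution
    weights = torch.tensor([1.0, 2.0, 3.0])
    model = lambda z: (z * weights).sum()
    print(integrated_gradients(model, torch.ones(3)))  # tensor([1., 2., 3.])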