
    A cross-sectional study of explainable machine learning in Alzheimer’s disease: diagnostic classification using MR radiomic features

    Introduction: Alzheimer’s disease (AD) remains a complex neurodegenerative disease, and its diagnosis still relies mainly on cognitive tests, which have many limitations. Qualitative imaging, on the other hand, cannot provide an early diagnosis, because the radiologist perceives brain atrophy only at a late disease stage. The main objective of this study is therefore to investigate the need for quantitative imaging in the assessment of AD using machine learning (ML) methods. ML methods are now used to address high-dimensional data, integrate data from different sources, model etiological and clinical heterogeneity, and discover new biomarkers in the assessment of AD. Methods: Radiomic features from both the entorhinal cortex and the hippocampus were extracted from 194 normal controls (NC), 284 mild cognitive impairment (MCI) and 130 AD subjects. Texture analysis evaluates statistical properties of image intensities, which may reflect changes in MRI pixel intensity caused by the pathophysiology of a disease; this quantitative method could therefore detect smaller-scale changes of neurodegeneration. The radiomics signatures extracted by texture analysis, together with baseline neuropsychological scales, were then used to train an integrated XGBoost model. Results: The model was explained using Shapley values produced by the SHAP (SHapley Additive exPlanations) method. XGBoost produced F1-scores of 0.949, 0.818, and 0.810 for NC vs. AD, NC vs. MCI, and MCI vs. AD, respectively. Discussion: These directions have the potential to support earlier diagnosis and better management of disease progression, and thereby the development of novel treatment strategies. This study clearly showed the importance of an explainable ML approach in the assessment of AD.
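The SHAP explanation described above is based on Shapley values: a feature's attribution is its average marginal contribution over all orderings in which features are revealed. As a minimal illustration of that definition only (not the paper's XGBoost/TreeSHAP pipeline; the linear toy model and feature values are hypothetical), the exact values can be computed directly for a handful of features:

```python
from itertools import permutations

def shapley_values(predict, baseline, instance):
    """Exact Shapley values by averaging marginal contributions
    over all feature orderings (tractable only for few features)."""
    n = len(instance)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        x = list(baseline)          # start from the reference input
        prev = predict(x)
        for i in order:
            x[i] = instance[i]      # reveal feature i
            cur = predict(x)
            phi[i] += cur - prev    # marginal contribution of i
            prev = cur
    return [p / len(orderings) for p in phi]

# Hypothetical "radiomics" scorer: a linear model over three features.
predict = lambda x: 2.0 * x[0] - 1.0 * x[1] + 0.5 * x[2]
phi = shapley_values(predict, baseline=[0.0, 0.0, 0.0], instance=[1.0, 1.0, 1.0])
print(phi)  # for a linear model, each attribution equals its weight
```

By the efficiency property, the attributions sum to the difference between the model output at the instance and at the baseline, which is what makes per-feature SHAP plots add up to the prediction.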

    Convolutional neural networks for the segmentation of small rodent brain MRI

    Image segmentation is a common step in the analysis of preclinical brain MRI, often performed manually. This is a time-consuming procedure subject to inter- and intra-rater variability. A possible alternative is automated, registration-based segmentation, which suffers from a bias due to the limited capacity of registration to adapt to pathological conditions such as Traumatic Brain Injury (TBI). In this work a novel method is developed for the segmentation of small rodent brain MRI based on Convolutional Neural Networks (CNNs). The experiments presented here show that CNNs provide a fast, robust and accurate alternative to both manual and registration-based methods. This is demonstrated by accurately segmenting three large datasets of MRI scans of healthy and Huntington disease model mice, as well as TBI rats. MU-Net and MU-Net-R, the CNNs presented here, achieve human-level accuracy while eliminating intra-rater variability, alleviating the biases of registration-based segmentation, and with an inference time of less than one second per scan. Using these segmentation masks I designed a geometric construction to extract 39 parameters describing the position and orientation of the hippocampus, and later used them to classify epileptic vs. non-epileptic rats with a balanced accuracy of 0.80, five months after TBI. This clinically transferable geometric approach detects subjects at high risk of post-traumatic epilepsy, paving the way towards subject stratification for antiepileptogenesis studies.
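The abstract does not specify the geometric construction, but one standard way to recover position and orientation parameters from a segmentation mask is via image moments: the centroid from first-order moments, and the principal-axis angle from central second-order moments. This is an illustrative 2-D sketch with a hypothetical toy mask, not the thesis's 39-parameter construction:

```python
import math

def mask_pose(mask):
    """Centroid and in-plane orientation of a 2-D binary mask,
    computed from first- and second-order image moments."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pts)
    cr = sum(r for r, _ in pts) / n          # centroid row
    cc = sum(c for _, c in pts) / n          # centroid column
    # central second moments
    mrr = sum((r - cr) ** 2 for r, _ in pts) / n
    mcc = sum((c - cc) ** 2 for _, c in pts) / n
    mrc = sum((r - cr) * (c - cc) for r, c in pts) / n
    # principal-axis angle of the equivalent ellipse
    theta = 0.5 * math.atan2(2 * mrc, mcc - mrr)
    return (cr, cc), theta

# A small elongated blob whose long axis runs along the columns.
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
(centroid, angle) = mask_pose(mask)
print(centroid, angle)  # centroid (1.5, 2.0); angle 0 -> axis along columns
```

The same moment-based idea extends to 3-D masks, where the eigenvectors of the second-moment matrix give the structure's principal axes.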

    Structural gray matter features and behavioral preliterate skills predict future literacy – A machine learning approach

    When children learn to read, their neural system undergoes major changes to become responsive to print. There seem to be nuanced interindividual differences in the neurostructural anatomy of regions that later become integral parts of the reading network. These differences might affect literacy acquisition and, in some cases, might result in developmental disorders like dyslexia. Consequently, the main objective of this longitudinal study was to investigate those interindividual differences in gray matter morphology that might facilitate or hamper future reading acquisition. We used a machine learning approach to examine to what extent gray matter macrostructural features and cognitive-linguistic skills measured before formal literacy teaching could predict literacy 2 years later. Forty-two native German-speaking children underwent T1-weighted magnetic resonance imaging and psychometric testing at the end of kindergarten. They were tested again 2 years later to assess their literacy skills. A leave-one-out cross-validated machine-learning regression approach was applied to identify the best predictors of future literacy based on cognitive-linguistic preliterate behavioral skills and cortical measures in a priori selected areas of the future reading network. Future literacy was predicted with surprisingly high accuracy, predominantly on the basis of gray matter volume in the left occipito-temporal cortex and local gyrification in the left insular, inferior frontal, and supramarginal gyri. Furthermore, phonological awareness significantly predicted future literacy. In sum, the results indicate that the brain morphology of the large-scale reading network at a preliterate age can predict how well children learn to read.
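The leave-one-out scheme above can be sketched in a few lines: for each subject, the model is refit on all remaining subjects and used to predict the held-out one, so every prediction is made for unseen data. This toy example uses a single hypothetical predictor and ordinary least squares rather than the study's full feature set:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def loo_predictions(xs, ys):
    """Leave-one-out CV: refit without each subject, predict the held-out one."""
    preds = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_line(tx, ty)
        preds.append(a * xs[i] + b)
    return preds

# Hypothetical data: a preliterate gray-matter measure vs. later literacy score.
gm = [1.0, 2.0, 3.0, 4.0, 5.0]
lit = [2.1, 3.9, 6.2, 8.0, 9.8]
preds = loo_predictions(gm, lit)
print(preds)  # one out-of-sample prediction per child
```

Comparing `preds` against the observed scores (e.g., via a correlation or error metric) then gives an unbiased estimate of predictive accuracy, which is what the reported prediction performance rests on.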

    Computational Intelligence in Healthcare

    The volume of patient health data was estimated to reach 2,314 exabytes by 2020. Traditional data analysis techniques are unsuitable for extracting useful information from such a vast quantity of data. Thus, intelligent data analysis methods combining human expertise and computational models for accurate and in-depth analysis are necessary. The technological revolution and medical advances made possible by combining vast quantities of available data, cloud computing services, and AI-based solutions can provide expert insight and analysis on a mass scale and at relatively low cost. Computational intelligence (CI) methods, such as fuzzy models, artificial neural networks, evolutionary algorithms, and probabilistic methods, have recently emerged as promising tools for the development and application of intelligent systems in healthcare practice. CI-based systems can learn from data and evolve according to changes in their environment, taking into account the uncertainty characterizing health data, including omics, clinical, sensor, and imaging data. The use of CI in healthcare can improve the processing of such data to develop intelligent solutions for prevention, diagnosis, treatment, and follow-up, as well as for the analysis of administrative processes. The present Special Issue on computational intelligence for healthcare is intended to show the potential and the practical impact of CI techniques in challenging healthcare applications.

    Diagnostics of Dementia from Structural and Functional Markers of Brain Atrophy with Machine Learning

    Dementia is a condition in which higher mental functions are disrupted; it currently affects an estimated 57 million people worldwide. Diagnosing dementia is difficult, since neither anatomical indicators nor functional tests are currently sufficiently sensitive or specific, and a long list of outstanding issues remains to be addressed. First, multimodal diagnosis has yet to be introduced into the early stages of dementia screening. Second, there is no accurate instrument for predicting the progression of pre-dementia. Third, non-invasive testing cannot yet provide differential diagnoses. By creating ML models of normal and accelerated brain ageing, we intend to better understand brain development. The combined analysis of distinct imaging and functional modalities with advanced data science techniques will improve the diagnostics of accelerated decline, which is the main objective of our study. We hypothesise that the association between brain structural changes and cognitive performance differs between normal and accelerated ageing. We propose using brain MRI scans to estimate the cognitive status of cognitively preserved examinees and to develop a structure-function model with machine learning (ML). Accelerated ageing is suspected when a scanned individual’s findings do not align with the usual paradigm. We calculate the deviation from the model of normal ageing (DMNA) as the error of the cognitive score prediction; the obtained value may then be compared with the results of the cognitive tests conducted. The greater the difference between the expected and observed values, the greater the risk of dementia. DMNA can discern between cognitively normal and mild cognitive impairment (MCI) patients, and the model also performed well in the MCI-versus-Alzheimer’s disease (AD) categorization. DMNA is thus a potential diagnostic marker of dementia and its types.

    Techniques for Analysis and Motion Correction of Arterial Spin Labelling (ASL) Data from Dementia Group Studies

    This investigation examines how Arterial Spin Labelling (ASL) Magnetic Resonance Imaging can be optimised to assist in the early diagnosis of diseases which cause dementia, by considering group study analysis and the control of motion artefacts. ASL can produce quantitative cerebral blood flow maps noninvasively, without the injection of a radioactive or paramagnetic contrast agent. ASL studies have already shown perfusion changes which correlate with the metabolic changes measured by Positron Emission Tomography in the early stages of dementia, before structural changes are evident. However, the clinical use of ASL for dementia diagnosis is not yet widespread, due to a combination of inconsistent protocols, a lack of accepted biomarkers, and sensitivity to motion artefacts. Applying ASL to improve the early diagnosis of dementia may allow emerging treatments to be administered earlier, and thus with greater effect. In this project, ASL data acquired from two separate patient cohorts ((i) the Young Onset Alzheimer’s Disease (YOAD) study, acquired at Queen Square; and (ii) the Incidence and RISk of dementia (IRIS) study, acquired in Rotterdam) were analysed using a pipeline optimised for each acquisition protocol, with several statistical approaches considered, including support-vector machine learning. Machine learning was also applied to improve the compatibility of the two studies, and to demonstrate a novel method to disentangle the perfusion changes measured by ASL from grey matter atrophy. Retrospective motion correction techniques for specific ASL sequences were also developed, based on autofocusing and on exploiting parallel imaging algorithms. These were tested using a specially developed simulation of the 3D GRASE ASL protocol, which is capable of modelling motion. The parallel imaging based approach was verified by performing a specifically designed MRI experiment involving deliberate motion, then applying the algorithm to demonstrably reduce the motion artefacts retrospectively.

    Reasoning with Uncertainty in Deep Learning for Safer Medical Image Computing

    Deep learning is now ubiquitous in the research field of medical image computing. As such technologies progress towards clinical translation, the question of safety becomes critical. Once deployed, machine learning systems unavoidably face situations where the correct decision or prediction is ambiguous. However, current methods disproportionately rely on deterministic algorithms, lacking a mechanism to represent and manipulate uncertainty. In safety-critical applications such as medical imaging, reasoning under uncertainty is crucial for developing a reliable decision-making system. Probabilistic machine learning provides a natural framework to quantify the degree of uncertainty over different variables of interest, be it the prediction, the model parameters and structures, or the underlying data (images and labels). Probability distributions are used to represent all the uncertain unobserved quantities in a model and how they relate to the data, and probability theory is used as a language to compute and manipulate these distributions. In this thesis, we explore probabilistic modelling as a framework to integrate uncertainty information into deep learning models, and demonstrate its utility in various high-dimensional medical imaging applications. In the process, we make several fundamental enhancements to current methods. We categorise our contributions into three groups according to the types of uncertainties being modelled: (i) predictive, (ii) structural and (iii) human uncertainty. Firstly, we discuss the importance of quantifying predictive uncertainty and understanding its sources for developing a risk-averse and transparent medical image enhancement application. We demonstrate how a measure of predictive uncertainty can be used as a proxy for predictive accuracy in the absence of ground truths. Furthermore, assuming the structure of the model is flexible enough for the task, we introduce a way to decompose the predictive uncertainty into its orthogonal sources, i.e., aleatoric and parameter uncertainty, and show the potential utility of such decoupling in providing a quantitative “explanation” of model performance. Secondly, we introduce our recent attempts at learning model structures directly from data. One work proposes a method based on variational inference to learn a posterior distribution over connectivity structures within a neural network architecture for multi-task learning, and shares some preliminary results in the MR-only radiotherapy planning application. Another work explores how the training algorithm of decision trees could be extended to grow the architecture of a neural network to adapt to the given availability of data and the complexity of the task. Lastly, we develop methods to model the “measurement noise” (e.g., biases and skill levels) of human annotators, and integrate this information into the learning process of the neural network classifier. In particular, we show that explicitly modelling the uncertainty involved in the annotation process not only leads to an improvement in robustness to label noise, but also yields useful insights into the patterns of errors that characterise individual experts.
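The decomposition into aleatoric and parameter (epistemic) uncertainty follows the law of total variance applied to Monte Carlo samples of the model: the average of the predicted variances captures data noise, while the variance of the predicted means captures parameter uncertainty. A minimal sketch, with hypothetical sample values standing in for stochastic forward passes of a heteroscedastic network:

```python
from statistics import mean, pvariance

def decompose_uncertainty(samples):
    """Decompose predictive uncertainty from Monte Carlo model samples.
    Each sample is a (predicted_mean, predicted_variance) pair, e.g. from
    one stochastic forward pass (MC dropout, weight sampling, ...).
      aleatoric = average of the predicted variances (data noise)
      epistemic = variance of the predicted means (parameter uncertainty)
    """
    means = [m for m, _ in samples]
    variances = [v for _, v in samples]
    aleatoric = mean(variances)
    epistemic = pvariance(means)
    return aleatoric, epistemic, aleatoric + epistemic

# Hypothetical MC passes for one output voxel: (mean, variance) per pass.
samples = [(1.0, 0.20), (1.2, 0.25), (0.8, 0.15), (1.0, 0.20)]
alea, epi, total = decompose_uncertainty(samples)
print(alea, epi, total)
```

High aleatoric uncertainty suggests irreducible noise in the data, whereas high epistemic uncertainty suggests the model has seen too little data in that region, which is exactly the kind of quantitative "explanation" of performance the decomposition enables.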

    Computational Intelligence in Healthcare

    This book is a printed edition of the Special Issue Computational Intelligence in Healthcare that was published in Electronic