6 research outputs found

    Diagnosis of Neurodegenerative Diseases using Deep Learning

    Automated disease classification systems can assist radiologists by reducing their workload and by enabling therapy to be initiated earlier to slow disease progression and improve patients’ quality of life. With significant advances in machine learning (ML) and medical scanning over the last decade, medical image analysis has undergone a paradigm change. Deep learning (DL) employing magnetic resonance imaging (MRI) has become a prominent method for computer-assisted systems because of its ability to extract high-level features via local connectivity, weight sharing, and spatial invariance. Nonetheless, several important research challenges remain on the path toward clinical application, and these problems motivate the contributions presented throughout this thesis. This research develops a framework for the classification of neurodegenerative diseases using DL techniques and MRI. The thesis comprises three stages of evolution. The first stage is the development of a robust and reproducible 2D classification system with high generalisation performance for Alzheimer’s disease (AD), mild cognitive impairment (MCI), and Parkinson’s disease (PD) using deep convolutional neural networks (CNNs). The next phase of the first stage extends this framework, demonstrates its use on different datasets, and quantifies the effect of data leakage, a phenomenon frequently observed in the literature. Key contributions of the thesis presented in this stage are a thorough analysis of the literature, a discussion of the potential flaws of the selected studies, and the development of an open-source evaluation system for neurodegenerative disease classification using structural MRI. The second stage aims to overcome the problems that stem from investigating 3D data with 2D models. With this goal, a 3D CNN-based diagnostic framework is developed for classifying AD and PD patients from healthy controls using T1-weighted brain MRI data. The last stage includes two phases with a focus on AD and MCI diagnosis. The first phase proposes a new autoencoder-based deep neural network structure that integrates supervised prediction and unsupervised representation. The second phase introduces the final contribution of the thesis, a novel ensemble approach that may also be used to predict diseases other than neurodegenerative ones (e.g., tuberculosis (TB)) using a modality other than MRI.
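    As a rough, hypothetical illustration of the kind of 2D CNN slice classifier developed in the first stage (the layer widths, input resolution, and three-way class set here are assumptions for the sketch, not the thesis architecture), a minimal PyTorch example might look like this:

        import torch
        import torch.nn as nn

        class Slice2DCNN(nn.Module):
            # Minimal 2D CNN for single-channel MRI slice classification (illustrative only).
            def __init__(self, num_classes: int = 3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),  # global pooling keeps the classifier input size fixed
                )
                self.classifier = nn.Linear(64, num_classes)

            def forward(self, x):  # x: (batch, 1, H, W) grayscale slices
                return self.classifier(self.features(x).flatten(1))

        # e.g. four 128x128 slices -> logits over {AD, MCI, PD} (assumed class set)
        logits = Slice2DCNN()(torch.randn(4, 1, 128, 128))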

    Generalization Performance of the Deep Learning Models in Neurodegenerative Disease Classification.

    Over the past decade, machine learning has gained considerable attention from the scientific community and has progressed rapidly as a result. Given its ability to detect subtle and complicated patterns, deep learning (DL) has been utilized widely in neuroimaging studies for medical data analysis and automated diagnostics, with varying degrees of success. In this paper, we question the remarkable accuracies of the best performing models by assessing the generalization performance of state-of-the-art convolutional neural network (CNN) models on the classification of the two most common neurodegenerative diseases, namely Alzheimer’s Disease (AD) and Parkinson’s Disease (PD), using MRI. We demonstrate the impact of the data division strategy on model performance by comparing the results derived from two different split approaches. We first evaluated the performance of the CNN models by dividing the dataset at the subject level, in which all of the MRI slices of a patient are put into either the training or the test set. We then observed that pooling together all slices prior to applying cross-validation, as erroneously done in a number of previous studies, inflates accuracies by as much as 26% for the classification of these diseases.
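    The subject-level split described above can be reproduced with standard group-wise cross-validation utilities. The following sketch (synthetic data, not the paper's pipeline) contrasts a slice-level KFold split, which lets slices of the same subject appear on both sides, with a subject-level GroupKFold split, which does not:

        import numpy as np
        from sklearn.model_selection import KFold, GroupKFold

        rng = np.random.default_rng(0)
        n_subjects, slices_per_subject = 20, 8
        subject_ids = np.repeat(np.arange(n_subjects), slices_per_subject)
        X = rng.normal(size=(len(subject_ids), 32))  # stand-in slice features

        # Slice-level split: slices of one subject can land in both training and test sets (leakage).
        train_idx, test_idx = next(KFold(n_splits=5, shuffle=True, random_state=0).split(X))
        print("subjects in both sets (slice-level):",
              len(np.intersect1d(subject_ids[train_idx], subject_ids[test_idx])))

        # Subject-level split: GroupKFold keeps every subject's slices on one side only.
        train_idx, test_idx = next(GroupKFold(n_splits=5).split(X, groups=subject_ids))
        print("subjects in both sets (subject-level):",
              len(np.intersect1d(subject_ids[train_idx], subject_ids[test_idx])))  # 0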

    Convolutional Autoencoder based Deep Learning Approach for Alzheimer's Disease Diagnosis using Brain MRI

    Rapid and accurate diagnosis of Alzheimer's disease (AD) is critical for patient treatment, especially in the early stages of the disease. While computer-assisted diagnosis based on neuroimaging holds vast potential for helping clinicians detect disease sooner, there are still some technical hurdles to overcome. This study presents an end-to-end disease detection approach using convolutional autoencoders that integrates supervised prediction and unsupervised representation. The 2D neural network is based upon a pre-trained 2D convolutional autoencoder that captures latent representations in structural brain magnetic resonance imaging (MRI) scans. Experiments on the OASIS brain MRI dataset revealed that the model outperforms a number of traditional classifiers in terms of accuracy using a single slice.
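    A hypothetical sketch of the joint objective described here, i.e. a convolutional autoencoder whose bottleneck also feeds a classification head so that reconstruction (unsupervised) and prediction (supervised) are trained together; the layer sizes and equal loss weighting are assumptions, not the published model:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ConvAEClassifier(nn.Module):
            def __init__(self, num_classes: int = 2):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # H -> H/2
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # H/2 -> H/4
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
                )
                self.classifier = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
                )

            def forward(self, x):
                z = self.encoder(x)  # latent representation of the slice
                return self.decoder(z), self.classifier(z)

        model = ConvAEClassifier()
        x = torch.randn(4, 1, 128, 128)  # four single-channel slices
        recon, logits = model(x)
        # Joint loss: reconstruction (unsupervised) + cross-entropy (supervised), equal weights assumed.
        loss = F.mse_loss(recon, x) + F.cross_entropy(logits, torch.randint(0, 2, (4,)))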

    3D Convolutional Neural Networks for Diagnosis of Alzheimer’s Disease via structural MRI

    Alzheimer’s Disease (AD) is a widespread neurodegenerative disease that is caused by structural changes in the brain and leads to the deterioration of cognitive functions. Patients usually experience diagnostic symptoms at later stages, after irreversible neural damage has occurred. Early detection of AD is crucial for maximizing patients' quality of life and for starting treatments that decelerate the progression of the disease. Early detection may be possible via computer-assisted systems using neuroimaging data. Among these, deep learning utilizing magnetic resonance imaging (MRI) has become a prominent tool due to its capability to extract high-level features through local connectivity, weight sharing, and spatial invariance. This paper describes our investigation of classification accuracy on two publicly available datasets, namely ADNI and OASIS, by building a 3D VGG-variant convolutional neural network (CNN). We used 3D models to avoid the information loss that occurs when slicing 3D MRI into 2D images and analyzing them with 2D convolutional filters. We also pre-processed the data to enhance the effectiveness and classification performance of the model. The proposed model achieved 73.4% classification accuracy on ADNI and 69.9% on OASIS with 5-fold cross-validation (CV). These results are comparable to other studies using various convolutional models. However, our subject-level split dataset contains only one MRI scan per patient to prevent possible data leakage, whereas some other studies include different screenings of the same patients over a time period in their datasets.
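    To make the whole-volume idea concrete, the sketch below builds a small VGG-style 3D CNN on a full T1-weighted volume; the channel counts, depth, and input shape are illustrative assumptions rather than the architecture evaluated in the paper:

        import torch
        import torch.nn as nn

        def vgg_block_3d(c_in, c_out):
            # Two 3x3x3 convolutions followed by 2x2x2 max pooling, in the VGG style.
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(),
                nn.Conv3d(c_out, c_out, 3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
            )

        class VGG3D(nn.Module):
            def __init__(self, num_classes: int = 2):
                super().__init__()
                self.features = nn.Sequential(
                    vgg_block_3d(1, 8), vgg_block_3d(8, 16), vgg_block_3d(16, 32),
                    nn.AdaptiveAvgPool3d(1),
                )
                self.head = nn.Linear(32, num_classes)

            def forward(self, x):  # x: (batch, 1, D, H, W) whole-volume input, no slicing
                return self.head(self.features(x).flatten(1))

        # One pre-processed T1-weighted volume (assumed 96x112x96) -> AD vs. control logits
        logits = VGG3D()(torch.randn(1, 1, 96, 112, 96))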

    Deep Learning in Neuroimaging: Effect of Data Leakage in Cross-validation Using 2D Convolutional Neural Networks

    In recent years, 2D convolutional neural networks (CNNs) have been extensively used to diagnose neurological diseases from magnetic resonance imaging (MRI) data due to their potential to discern subtle and intricate patterns. Despite the high performances reported in numerous studies, developing CNN models with good generalization abilities is still a challenging task due to possible data leakage introduced during cross-validation (CV). In this study, we quantitatively assessed the effect of data leakage caused by splitting 3D MRI data at the 2D slice level, using three 2D CNN models to classify patients with Alzheimer’s disease (AD) and Parkinson’s disease (PD). Our experiments showed that slice-level CV erroneously boosted the average slice-level accuracy on the test set by 30% on the Open Access Series of Imaging Studies (OASIS), 29% on the Alzheimer’s Disease Neuroimaging Initiative (ADNI), 48% on the Parkinson’s Progression Markers Initiative (PPMI), and 55% on a local de-novo PD Versilia dataset. Further tests on a randomly labeled OASIS-derived dataset produced about 96% (erroneous) accuracy with a slice-level split and 50% accuracy with a subject-level split, as expected from a randomized experiment. Overall, the effect of an erroneous slice-based CV is severe, especially for small datasets.
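    The randomized-label check reported above can be mimicked on synthetic data: when slices of the same subject are nearly identical and labels are assigned at random per subject, a slice-level split lets a memorizing classifier look far better than chance, while a subject-level split correctly falls to about 50%. A small, purely illustrative sketch (not the study's models or data):

        import numpy as np
        from sklearn.model_selection import cross_val_score, KFold, GroupKFold
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        n_subjects, slices_per_subject = 40, 10
        subjects = np.repeat(np.arange(n_subjects), slices_per_subject)

        # Slices of one subject are nearly identical; labels are random per subject,
        # so the attainable accuracy on unseen subjects is chance (~50%).
        subject_means = rng.normal(size=(n_subjects, 16))
        X = subject_means[subjects] + 0.05 * rng.normal(size=(len(subjects), 16))
        y = rng.integers(0, 2, size=n_subjects)[subjects]

        clf = KNeighborsClassifier(n_neighbors=1)
        slice_cv = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))
        subject_cv = cross_val_score(clf, X, y, groups=subjects, cv=GroupKFold(5))
        print(f"slice-level CV accuracy:   {slice_cv.mean():.2f}")    # far above chance (leakage)
        print(f"subject-level CV accuracy: {subject_cv.mean():.2f}")  # close to 0.50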

    Ensemble Deep Learning Architectures for Automated Diagnosis of Pulmonary Tuberculosis using Chest X-ray

    Tuberculosis (TB) is still a serious public health concern across the world, causing 1.4 million deaths each year. However, there is a scarcity of radiological interpretation skills in many TB-affected locations, which may lead to poor diagnosis rates and poor patient outcomes. A cost-effective and efficient automated technique could support screening evaluations in underprivileged countries and provide early disease diagnosis. In this work, we proposed a deep ensemble learning framework that integrates multi-source data from two deep learning-based techniques for the automated diagnosis of TB. The integrated model framework has been tested on two publicly available datasets and one private dataset. While both proposed deep learning-based automated detection systems have shown high accuracy and specificity compared to the state-of-the-art, the ensemble method significantly improved prediction accuracy in detecting chest radiographs with active pulmonary TB from a multi-ethnic patient cohort. Extensive experiments were used to validate the methodology, and the results were superior to previous approaches, showing the method’s practicality for application in the real world. By integrating supervised prediction and unsupervised representation, the ensemble method accurately classified TB with an area under the receiver operating characteristic curve (AUROC) of up to 0.98 using chest radiography, outperforming the other tested classifiers and achieving state-of-the-art performance. The methodology and findings provide a viable route for more accurate and quicker TB detection, especially in low- and middle-income nations.
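    As a generic illustration of the soft-voting idea behind such an ensemble (synthetic features and off-the-shelf classifiers stand in for the two deep models; nothing here reproduces the published system), the predicted probabilities of two base models can be averaged and scored with AUROC:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Stand-in features playing the role of image-derived descriptors (assumption).
        X, y = make_classification(n_samples=600, n_features=30, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

        model_a = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        model_b = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        # Soft-voting ensemble: average the two base models' predicted TB probabilities.
        p_a = model_a.predict_proba(X_te)[:, 1]
        p_b = model_b.predict_proba(X_te)[:, 1]
        p_ens = (p_a + p_b) / 2

        for name, p in [("model A", p_a), ("model B", p_b), ("ensemble", p_ens)]:
            print(f"{name}: AUROC = {roc_auc_score(y_te, p):.3f}")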