    Alzheimer's Disease: A Survey

    Alzheimer's disease (AD) is a type of dementia. It is a harmful disease that can lead to death, and there is currently no cure, nor any technique that is fully accurate in treating it. In recent years, neuroimaging combined with machine learning techniques has been used for the detection of Alzheimer's disease. In our survey we came across many methods, such as the convolutional neural network (CNN), in which each brain area is split into small three-dimensional patches that act as input samples for the CNN. Another method uses deep neural networks (DNNs), in which brain MRI images are segmented to extract the brain ventricles and features are then extracted from the segmented regions. Many such methods can be used for the detection of Alzheimer's disease.
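
    A minimal sketch of the patch-based CNN idea described above, assuming PyTorch; the volume shape, patch size, and network depth are illustrative choices, not values from any surveyed paper.

        import numpy as np
        import torch
        import torch.nn as nn

        def extract_patches(volume, size=32):
            """Split a 3D volume into non-overlapping cubic patches."""
            patches = []
            x, y, z = volume.shape
            for i in range(0, x - size + 1, size):
                for j in range(0, y - size + 1, size):
                    for k in range(0, z - size + 1, size):
                        patches.append(volume[i:i+size, j:j+size, k:k+size])
            return np.stack(patches)

        class PatchCNN(nn.Module):
            """Tiny 3D CNN that scores one patch as AD vs. control."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                )
                self.classifier = nn.Linear(16 * 8 * 8 * 8, 2)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        volume = np.random.rand(96, 96, 96).astype(np.float32)  # stand-in for a preprocessed MRI
        patches = torch.from_numpy(extract_patches(volume)).unsqueeze(1)  # (27, 1, 32, 32, 32)
        logits = PatchCNN()(patches)  # one AD/control score per patch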

    Early Identification of Alzheimer’s Disease Using Medical Imaging: A Review From a Machine Learning Approach Perspective

    Alzheimer’s disease (AD) is the leading cause of dementia in older adults, accounting for up to 70% of dementia cases and posing a serious public health hazard in the twenty-first century. AD is a progressive, irreversible, neurodegenerative disease with a long pre-clinical period; it affects brain cells, leading to memory loss, misperception, learning problems, and impaired decision-making. Despite its significance, no treatment options are presently available, although disease progression can be slowed through medication. Unfortunately, AD is often diagnosed at a very late stage, after irreversible damage to brain cells has occurred and there is no scope for preventing further cognitive decline. The use of non-invasive neuroimaging procedures capable of detecting AD at preliminary stages is therefore crucial for providing treatment that slows disease progression, and it has become a promising area of research. We conducted a comprehensive assessment of papers employing machine learning to predict AD from neuroimaging data. Most of the studies used brain images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, consisting of magnetic resonance imaging (MRI) and positron emission tomography (PET) images. The most widely used method, the support vector machine (SVM), has a mean accuracy of 75.4 percent, whereas convolutional neural networks (CNNs) have a mean accuracy of 78.5 percent. Better classification accuracy has been achieved by combining MRI and PET rather than using a single neuroimaging technique. Overall, more complex models, such as deep learning, paired with multimodal and multidimensional data (neuroimaging, cognitive, clinical, behavioral, and genetic) produced the best results. Although promising results have been achieved, there is still room to improve the performance of the proposed methods so that they can better assist healthcare professionals and clinicians.
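
    As a rough illustration of the SVM baseline discussed above, a sketch using scikit-learn; the arrays are synthetic stand-ins for MRI- and PET-derived features, and simple concatenation stands in for multimodal combination.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n = 200
        mri_features = rng.normal(size=(n, 50))   # e.g., regional volumes from MRI
        pet_features = rng.normal(size=(n, 50))   # e.g., regional uptake from PET
        y = rng.integers(0, 2, size=n)            # AD vs. control labels

        # Concatenating modalities is the simplest form of multimodal combination.
        X = np.hstack([mri_features, pet_features])
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy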

    Multimodal and multicontrast image fusion via deep generative models

    Recently, it has become progressively more evident that classic diagnostic labels are unable to reliably describe the complexity and variability of several clinical phenotypes. This is particularly true for a broad range of neuropsychiatric illnesses (e.g., depression, anxiety disorders, behavioral phenotypes). Patient heterogeneity can be better described by grouping individuals into novel categories based on empirically derived sections of intersecting continua that span across and beyond traditional categorical borders. In this context, neuroimaging data carry a wealth of spatiotemporally resolved information about each patient's brain. However, these data are usually heavily collapsed a priori through procedures which are not learned as part of model training, and are consequently not optimized for the downstream prediction task. This is because every individual participant usually comes with multiple whole-brain 3D imaging modalities, often accompanied by a deep genotypic and phenotypic characterization, hence posing formidable computational challenges. In this paper we design a deep learning architecture based on generative models, rooted in a modular approach and separable convolutional blocks, to a) fuse multiple 3D neuroimaging modalities at the voxel level, b) convert them into informative latent embeddings through heavy dimensionality reduction, and c) maintain good generalizability with minimal information loss. As proof of concept, we test our architecture on the well-characterized Human Connectome Project database, demonstrating that our latent embeddings can be clustered into easily separable subject strata which, in turn, map to phenotypical information that was not included in the embedding creation process. This may aid in predicting disease evolution as well as drug response, hence supporting mechanistic disease understanding and empowering clinical trials.
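
    A sketch of the separable-convolution fusion idea, assuming PyTorch; the block sizes, the two-modality input, and the latent dimension are hypothetical, and the generative (decoder) half of the architecture is omitted.

        import torch
        import torch.nn as nn

        class SeparableConv3d(nn.Module):
            """Depthwise + pointwise 3D convolution: far fewer parameters than a full conv."""
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.depthwise = nn.Conv3d(in_ch, in_ch, 3, padding=1, groups=in_ch)
                self.pointwise = nn.Conv3d(in_ch, out_ch, 1)

            def forward(self, x):
                return self.pointwise(self.depthwise(x))

        class FusionEncoder(nn.Module):
            """Fuse two co-registered modalities voxel-wise, then compress to a latent vector."""
            def __init__(self, latent_dim=64):
                super().__init__()
                self.encode = nn.Sequential(
                    SeparableConv3d(2, 8), nn.ReLU(), nn.MaxPool3d(2),
                    SeparableConv3d(8, 16), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
                )
                self.to_latent = nn.Linear(16, latent_dim)

            def forward(self, mod_a, mod_b):
                x = torch.cat([mod_a, mod_b], dim=1)  # voxel-wise fusion via channel stacking
                return self.to_latent(self.encode(x).flatten(1))

        emb = FusionEncoder()(torch.randn(4, 1, 32, 32, 32), torch.randn(4, 1, 32, 32, 32))
        print(emb.shape)  # torch.Size([4, 64]): one compact embedding per subject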

    Machine learning-based prediction of mild cognitive impairment among individuals with normal cognitive function

    Background: Previous studies have mainly focused on risk factors in patients with mild cognitive impairment (MCI) or dementia. The aim of this study was to provide a basis for preventing MCI in cognitively normal populations. Methods: The data came from a longitudinal retrospective study involving individuals with brain magnetic resonance imaging scans, clinical visits, and cognitive assessments at intervals of more than 3 years. Multiple machine-learning technologies, including random forest, support vector machine, logistic regression, eXtreme Gradient Boosting, and naïve Bayes, were used to establish a model predicting the future risk of MCI from a combination of clinical and image variables. Results: Among these machine-learning models, eXtreme Gradient Boosting (XGB) was the best classification model. The classification accuracy was 65.90% with clinical variables, 79.54% with image variables, and 94.32% with a combination of clinical and image variables. The best combined result was an accuracy of 94.32%, a precision of 96.21%, and a recall of 93.08%, so XGB with a combination of clinical and image variables has promise for MCI risk prediction. From a clinical perspective, the degree of white matter hyperintensity (WMH), especially in the frontal lobe, and the control of systolic blood pressure (SBP) were the most important risk factors for the development of MCI. Conclusion: The best MCI classification results came from the XGB model with a combination of both clinical and imaging variables. The degree of WMH in the frontal lobe and SBP control were the most important variables in predicting MCI.
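
    A sketch of the combined-feature XGBoost setup, assuming the xgboost Python package; the arrays are synthetic stand-ins for the clinical and image variables, and the feature counts and hyperparameters are illustrative.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score
        from xgboost import XGBClassifier

        rng = np.random.default_rng(0)
        n = 500
        clinical = rng.normal(size=(n, 10))  # e.g., age, systolic blood pressure, ...
        imaging = rng.normal(size=(n, 20))   # e.g., regional WMH burden scores
        y = rng.integers(0, 2, size=n)       # developed MCI during follow-up or not

        X = np.hstack([clinical, imaging])   # combined clinical + image variables
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
        model.fit(X_tr, y_tr)
        print(accuracy_score(y_te, model.predict(X_te)))
        # model.feature_importances_ would rank variables, akin to the WMH/SBP finding.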

    A parameter-efficient deep learning approach to predict conversion from mild cognitive impairment to Alzheimer's disease

    Some forms of mild cognitive impairment (MCI) are the clinical precursors of Alzheimer's disease (AD), while other MCI types tend to remain stable over time and do not progress to AD. To identify and choose effective and personalized strategies to prevent or slow the progression of AD, we need to develop objective measures that can discriminate the MCI patients who are at risk of AD from those MCI patients who have less risk of developing AD. Here, we present a novel deep learning architecture, based on dual learning and an ad hoc layer for 3D separable convolutions, which aims at identifying MCI patients who have a high likelihood of developing AD within 3 years. Our deep learning procedures combine structural magnetic resonance imaging (MRI), demographic, neuropsychological, and APOe4 genetic data as input measures. The most novel characteristics of our machine learning model compared to previous ones are the following: 1) our deep learning model is multi-tasking, in the sense that it jointly learns to simultaneously predict both MCI-to-AD conversion and AD vs. healthy controls classification, which facilitates relevant feature extraction for AD prognostication; 2) the neural network classifier employs fewer parameters than other deep learning architectures, which significantly limits data overfitting (we use ∼550,000 network parameters, orders of magnitude fewer than other network designs); 3) both structural MRI images and their warp field characteristics, which quantify local volumetric changes relative to the MRI template, were used as separate input streams to extract as much information as possible from the MRI data. All analyses were performed on a subset of the database made publicly available via the Alzheimer's Disease Neuroimaging Initiative (ADNI) (n = 785 participants: n = 192 AD patients, n = 409 MCI patients, including both MCI patients who convert to AD and MCI patients who do not, and n = 184 healthy controls). The most predictive combination of inputs was the structural MRI images together with the demographic, neuropsychological, and APOe4 data. In contrast, the warp field metrics were of little added predictive value. The algorithm was able to distinguish the MCI patients developing AD within 3 years from those with stable MCI over the same time period with an area under the curve (AUC) of 0.925 and a 10-fold cross-validated accuracy of 86%, a sensitivity of 87.5%, and a specificity of 85%. To our knowledge, this is the highest performance achieved so far using similar datasets. The same network provided an AUC of 1 and 100% accuracy, sensitivity, and specificity when classifying patients with AD versus healthy controls. Our classification framework was also robust to the use of different co-registration templates and potentially irrelevant features/image portions. Our approach is flexible and can in principle integrate other imaging modalities, such as PET, and diverse other sets of clinical data. The convolutional framework is potentially applicable to any 3D image dataset and gives the flexibility to design a computer-aided diagnosis system targeting the prediction of several medical conditions and neuropsychiatric disorders via multi-modal imaging and tabular clinical data.
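
    A sketch of the multi-tasking, two-stream idea (an image stream plus a tabular stream feeding two classification heads), assuming PyTorch; the layer sizes are illustrative and far smaller than the ∼550,000-parameter network described.

        import torch
        import torch.nn as nn

        class MultiTaskNet(nn.Module):
            """One shared representation, two classification heads (multi-task learning)."""
            def __init__(self, n_tabular=8):
                super().__init__()
                self.image_stream = nn.Sequential(   # small 3D CNN over structural MRI
                    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
                    nn.Flatten(),
                )
                self.tabular_stream = nn.Sequential(  # demographics, neuropsychology, APOe4
                    nn.Linear(n_tabular, 16), nn.ReLU(),
                )
                self.conversion_head = nn.Linear(16 + 16, 2)  # stable MCI vs. MCI-to-AD
                self.diagnosis_head = nn.Linear(16 + 16, 2)   # AD vs. healthy control

            def forward(self, mri, tab):
                h = torch.cat([self.image_stream(mri), self.tabular_stream(tab)], dim=1)
                return self.conversion_head(h), self.diagnosis_head(h)

        net = MultiTaskNet()
        conv_logits, diag_logits = net(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 8))
        # Training would sum the two cross-entropy losses so the tasks regularize each other.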

    Prediction of the diagnosis of Alzheimer's disease using deep learning on 18F-FDG PET images

    Alzheimer's disease is a neurodegenerative disease that affects more than 50 million people worldwide. It is the most common form of dementia, accounting for 60-70% of cases. There is currently no effective cure, although some treatments can be effective if applied in the early stages of the disease, allowing its progression to be delayed. An accurate and sufficiently early diagnosis is therefore essential for taking preventive measures. The great rise of deep learning in recent years has enabled the development of different prediction systems that aid the diagnosis of Alzheimer's disease from brain images. The main objective of this undergraduate thesis (Trabajo de Fin de Grado) is the development of a deep learning system based on convolutional neural networks that, from 18F-FDG PET images of the brain, can predict the final diagnosis among diseased patients (AD), patients with mild cognitive impairment (MCI), and cognitively normal (CN) subjects. The images for training and testing the network were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) repository. Two systems with two different architectures were developed: the original one proposed in (Ding et al., 2019) and a later improvement of it proposed in the literature in a different context. The images used are 3D, while the architectures used are based on 2D convolutions; for this reason, the 18F-FDG PET images were preprocessed before being fed to the network. Transfer learning and fine-tuning techniques were used to train the systems. The implementation of the system and the preprocessing of the images were carried out in Python 3.6.9, using the Keras (version 2.2.4) and TensorFlow (version 1.12.0) libraries. Training and testing of the network were performed on a Titan RTX graphics card with 24 GB of VRAM. The experiments show that both systems can predict AD up to 66 months (5 and a half years) before the final diagnosis. The system based on the architecture proposed in (Ding et al., 2019) can predict the final diagnosis of Alzheimer's with an accuracy of 77.0% and an AUC of 0.84. The system trained with AD and CN patients was found to diagnose the disease with an accuracy of 87.5% and an AUC of 0.97, and the effect on system performance of introducing data from MCI patients was analyzed. With the more modern architecture, the results improved to an accuracy of 84.6% and an AUC of 0.89 in predicting the final Alzheimer's diagnosis. Finally, several analyses of the developed convolutional neural networks were carried out to understand the strengths and weaknesses of the obtained models.
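
    A sketch of the transfer-learning and fine-tuning recipe the thesis describes, written against current tf.keras rather than the exact Keras 2.2.4 / TensorFlow 1.12.0 versions cited; InceptionV3 is used here as an illustrative pretrained 2D backbone, and the conversion of 3D PET volumes to 2D inputs is assumed to happen in preprocessing.

        import tensorflow as tf

        base = tf.keras.applications.InceptionV3(
            include_top=False, weights="imagenet",
            input_shape=(299, 299, 3), pooling="avg")
        base.trainable = False  # transfer learning: keep pretrained features frozen at first

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(3, activation="softmax"),  # AD / MCI / CN
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        # model.fit(...) at this stage trains only the new classification head.

        # Fine-tuning: once the head has converged, unfreeze the backbone
        # and continue training with a much lower learning rate.
        base.trainable = True
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])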

    Novel Deep Learning Models for Medical Imaging Analysis

    Deep learning is a sub-field of machine learning in which models are developed to imitate the workings of the human brain in processing data and creating patterns for decision making. This dissertation is focused on developing deep learning models for medical imaging analysis across different modalities and tasks, including detection, segmentation, and classification. Imaging modalities including digital mammography (DM), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT) are studied in the dissertation for various medical applications. The first phase of the research develops a novel shallow-deep convolutional neural network (SD-CNN) model for improved breast cancer diagnosis. This model takes one type of medical image as input and synthesizes different modalities as additional feature sources; both the original and the synthetic images are used for feature generation. The proposed architecture is validated in the application of breast cancer diagnosis and shown to outperform competing models. Motivated by the success of the first phase, the second phase focuses on improving medical image synthesis performance with an advanced deep learning architecture. A new architecture named the deep residual inception encoder-decoder network (RIED-Net) is proposed. RIED-Net has the advantages of preserving pixel-level information and transferring features across modalities. The applicability of RIED-Net is validated in breast cancer diagnosis and Alzheimer's disease (AD) staging. Recognizing that medical imaging research often involves multiple inter-related tasks, namely detection, segmentation, and classification, the third phase of the research develops a multi-task deep learning model. Specifically, a feature transfer enabled multi-task deep learning model (FT-MTL-Net) is proposed to transfer high-resolution features from the segmentation task to the low-resolution feature-based classification task. The application of FT-MTL-Net to breast cancer detection, segmentation, and classification using DM images is studied. As a continuing effort to explore transfer learning in deep models for medical applications, the last phase develops a deep learning model that transfers both features and knowledge from a pre-training age-prediction task to the new domain of predicting conversion from mild cognitive impairment (MCI) to AD. It is validated in the application of predicting MCI patients' conversion to AD with 3D MRI images.
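
    A toy residual encoder-decoder in PyTorch, illustrating the general image-to-image synthesis pattern behind architectures like RIED-Net; the specific layers and the global residual skip are assumptions for illustration, not the dissertation's actual design.

        import torch
        import torch.nn as nn

        class ResidualEncoderDecoder(nn.Module):
            """Encoder-decoder with a global residual skip for cross-modality synthesis."""
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
                )

            def forward(self, x):
                # The residual skip means the network learns the difference between
                # modalities, which helps preserve pixel-level information.
                return x + self.decoder(self.encoder(x))

        synthetic = ResidualEncoderDecoder()(torch.randn(1, 1, 64, 64))
        print(synthetic.shape)  # torch.Size([1, 1, 64, 64]): same grid, new modality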

    Multi-Modal Magnetic Resonance Imaging Predicts Regional Amyloid Burden in the Brain

    Alzheimer’s disease (AD) is the most common cause of dementia, and identifying early markers of this disease is important for prevention and treatment strategies. Amyloid-β (Aβ) protein deposition is one of the earliest detectable pathological changes in AD, but in-vivo detection of Aβ using positron emission tomography (PET) is hampered by high cost and limited geographical accessibility. These factors can become limiting when PET is used to screen large numbers of subjects into prevention trials in which only a minority are expected to be amyloid-positive. Structural MRI is advantageous, as it is non-invasive, relatively inexpensive, and more accessible; thus it could be widely used in large studies, even when frequent or repeated imaging is necessary. We used a machine-learning, pattern-recognition approach with intensity-based features from individual MR modalities and their combinations (T1-weighted, T2-weighted, T2 fluid-attenuated inversion recovery [FLAIR], and susceptibility-weighted imaging [SWI]) to predict voxel-level amyloid in the brain. The MR-Aβ relation was learned within each subject and generalized across subjects using subject-specific features (demographic, clinical, and summary MR features). Compared with the other modalities, the combination of T1-weighted, T2-weighted FLAIR, and SWI performed best in predicting amyloid status as positive or negative. A combination of T2-weighted and SWI imaging performed best in predicting change in amyloid between two timepoints. Overall, our results show the feasibility of amyloid prediction by MRI and its potential use as an amyloid-screening tool.
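
    A sketch of the voxel-wise prediction setup, assuming scikit-learn; the arrays are synthetic stand-ins for per-voxel MR intensity features and PET-derived amyloid targets, and the regressor choice is illustrative rather than the paper's model.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n_voxels = 10000
        # One intensity feature per MR modality at each voxel: T1, T2, FLAIR, SWI.
        X = rng.normal(size=(n_voxels, 4))
        amyloid = rng.normal(size=n_voxels)  # PET-derived amyloid value per voxel

        # Learn the MR-to-amyloid mapping; at test time the model predicts amyloid
        # maps from MRI alone, so no PET acquisition is needed for screening.
        model = RandomForestRegressor(n_estimators=50, n_jobs=-1)
        model.fit(X, amyloid)
        predicted_map = model.predict(X)  # one predicted amyloid value per voxel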