
    Ensemble Deep Learning on Large, Mixed-Site fMRI Datasets in Autism and Other Tasks.

    Deep learning models for MRI classification face two recurring problems: they are typically limited by small sample sizes, and their complexity makes them difficult to interpret (the "black box" problem). In this paper, we train a convolutional neural network (CNN) on the largest multi-source functional MRI (fMRI) connectomic dataset compiled to date, consisting of 43,858 data points. We apply this model to a cross-sectional comparison of autism spectrum disorder (ASD) versus typically developing (TD) controls, a contrast that has proved difficult to characterize with inferential statistics. To contextualize these findings, we additionally perform classifications of gender and task versus rest. Employing class balancing to build the training set, we trained an ensemble of modified CNNs to classify fMRI connectivity matrices, obtaining overall AUROCs of 0.6774, 0.7680, and 0.9222 for ASD versus TD, gender, and task versus rest, respectively. Additionally, we aim to address the black box problem in this context using two visualization methods. First, class activation maps show which functional connections of the brain our models focus on when performing classification. Second, by analyzing maximal activations of the hidden layers, we were also able to explore how the model organizes a large, mixed-center dataset, finding that it dedicates specific areas of its hidden layers to processing different covariates of the data (depending on the independent variable analyzed) and other areas to mixing data from different sources. Our study finds that deep learning models distinguishing ASD from TD controls focus broadly on temporal and cerebellar connections, with a particularly high focus on the right caudate nucleus and paracentral sulcus.
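
    A minimal sketch of the ensemble idea described above, not the authors' code: the atlas size, layer widths, and number of ensemble members are illustrative assumptions. Each member classifies an ROI-by-ROI connectivity matrix and the ensemble averages softmax probabilities; global average pooling before the classifier is what makes class-activation-map style visualizations possible.

```python
# Hedged sketch (assumed sizes): an ensemble of small CNNs over connectivity matrices.
import torch
import torch.nn as nn

N_ROIS = 116          # assumed atlas size (e.g. AAL); not specified in the abstract
N_MEMBERS = 5         # illustrative ensemble size

class ConnectomeCNN(nn.Module):
    def __init__(self, n_rois: int = N_ROIS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling -> enables CAM-style maps
        )
        self.classifier = nn.Linear(32, 2)    # e.g. ASD vs TD (binary)

    def forward(self, x):                      # x: (batch, 1, n_rois, n_rois)
        z = self.features(x).flatten(1)
        return self.classifier(z)

def ensemble_predict(models, x):
    """Average softmax probabilities over ensemble members."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in models])
    return probs.mean(dim=0)

models = [ConnectomeCNN() for _ in range(N_MEMBERS)]   # each member would be trained on a class-balanced split
x = torch.randn(4, 1, N_ROIS, N_ROIS)                  # dummy connectivity matrices
print(ensemble_predict(models, x).shape)               # torch.Size([4, 2])
```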

    Computer-Aided Diagnosis in Neuroimaging

    This chapter provides an overview of the most widely used methods for computer-aided diagnosis in neuroimaging and their application to neurodegenerative diseases. The fundamental preprocessing steps, and how they are applied to different image modalities, are presented in detail. We introduce a number of widely used neuroimaging analysis algorithms, together with a broad overview of recent advances in brain image processing. Finally, we provide a general conclusion on the state of the art in brain image processing and possible future developments.

    Ensembles of Deep Learning Architectures for the Early Diagnosis of the Alzheimer’s Disease.

    Computer Aided Diagnosis (CAD) constitutes an important tool for the early diagnosis of Alzheimer's Disease (AD), which, in turn, allows the application of treatments that can be simpler and more likely to be effective. This paper explores the construction of classification methods based on deep learning architectures applied to brain regions defined by the Automated Anatomical Labeling (AAL) atlas. Gray Matter (GM) images from each brain area have been split into 3D patches according to the regions defined by the AAL atlas, and these patches are used to train different deep belief networks. An ensemble of deep belief networks is then composed, where the final prediction is determined by a voting scheme. Two deep learning based structures and four different voting schemes are implemented and compared, giving as a result a potent classification architecture in which discriminative features are computed in an unsupervised fashion. The resulting method has been evaluated using a large dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Classification results assessed by cross-validation show that the proposed method is not only valid for differentiating between controls (NC) and AD images, but also provides good performance when tested on the more challenging case of classifying Mild Cognitive Impairment (MCI) subjects. In particular, the classification architecture provides accuracy values up to 0.90 with an AUC of 0.95 for NC/AD classification, 0.84 with an AUC of 0.91 for stable MCI/AD classification, and 0.83 with an AUC of 0.95 for NC/MCI-converter classification.

    This work was partly supported by the MICINN under the projects TEC2012-34306 and PSI2015-65848-R, the Consejería de Innovación, Ciencia y Empresa (Junta de Andalucía, Spain) under the Excellence Projects P09-TIC-4530 and P11-TIC-7103, and the Universidad de Málaga, Programa de fortalecimiento de las capacidades de I+D+I en las Universidades 2014-2015 of the Consejería de Economía, Innovación, Ciencia y Empleo, co-financed by the European Regional Development Fund (FEDER), under the project FC14-SAF30. Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie; Alzheimer's Association; Alzheimer's Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
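
    As a rough illustration of the voting stage only (in the paper the per-region models are deep belief networks trained on 3D GM patches; here the regional predictions are dummy arrays, and the weighting scheme is an assumption), a majority vote and a weighted vote over per-region outputs could look like this:

```python
# Hedged sketch of combining per-region classifiers by voting.
import numpy as np

def majority_vote(per_region_preds: np.ndarray) -> np.ndarray:
    """per_region_preds: (n_regions, n_subjects) array of 0/1 labels.
    Returns the majority label per subject (ties broken toward class 1)."""
    votes = per_region_preds.sum(axis=0)
    return (votes * 2 >= per_region_preds.shape[0]).astype(int)

def weighted_vote(per_region_probs: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """per_region_probs: (n_regions, n_subjects) P(class=1) from each regional model;
    weights: (n_regions,), e.g. each region's validation accuracy (assumed scheme)."""
    score = (weights[:, None] * per_region_probs).sum(axis=0) / weights.sum()
    return (score >= 0.5).astype(int)

rng = np.random.default_rng(0)
probs = rng.random((116, 10))                       # 116 AAL regions, 10 subjects (dummy values)
print(majority_vote((probs >= 0.5).astype(int)))
print(weighted_vote(probs, weights=rng.random(116)))
```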

    Enhancing Multimodal Patterns in Neuroimaging by Siamese Neural Networks with Self-Attention Mechanism.

    The combination of different sources of information is currently one of the most relevant aspects of the diagnostic process for several diseases. In the field of neurological disorders, different imaging modalities providing structural and functional information are frequently available. These modalities are usually analyzed separately, although a joint analysis of the features extracted from both sources can improve the classification performance of Computer-Aided Diagnosis (CAD) tools. Previous studies have computed independent models from each individual modality and combined them in a subsequent stage, which is not an optimal solution. In this work, we propose a method based on the principles of siamese neural networks to fuse information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). This framework quantifies the similarities between both modalities and relates them to the diagnostic label during the training process. The resulting latent space at the output of this network is then fed into an attention module in order to evaluate the relevance of each brain region and modality at different stages of the development of Alzheimer's disease. The excellent results obtained and the high flexibility of the proposed method allow fusing more than two modalities, leading to a scalable methodology that can be used in a wide range of contexts.

    This work was supported by projects PGC2018-098813-B-C32 and RTI2018-098913-B100 (Spanish "Ministerio de Ciencia, Innovación y Universidades"), UMA20-FEDERJA-086, A-TIC-080-UGR18 and P20 00525 (Consejería de Economía y Conocimiento, Junta de Andalucía), by European Regional Development Funds (ERDF), and by the Spanish "Ministerio de Universidades" through a Margarita Salas grant to J.E. Arco.
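
    A hedged sketch of the general idea, not the authors' architecture: all dimensions and layer choices are assumptions. Two siamese branches (shared weights) embed MRI-derived and PET-derived region features into a common latent space, a cosine similarity between the modality embeddings could feed a contrastive-style term during training, and a self-attention layer weighs regions from both modalities before classification; the returned attention weights are what would be inspected for region/modality relevance.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_REGIONS, N_FEATS, D_LATENT = 116, 8, 32      # assumed: AAL regions, features per region, latent size

class ModalityEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_FEATS, D_LATENT), nn.ReLU(),
                                 nn.Linear(D_LATENT, D_LATENT))
    def forward(self, x):                       # x: (batch, N_REGIONS, N_FEATS)
        return self.net(x)                      # (batch, N_REGIONS, D_LATENT)

class SiameseFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = ModalityEncoder()        # shared weights -> siamese branches
        self.attn = nn.MultiheadAttention(D_LATENT, num_heads=4, batch_first=True)
        self.head = nn.Linear(D_LATENT, 2)      # e.g. NC vs AD

    def forward(self, mri, pet):
        z_mri, z_pet = self.encoder(mri), self.encoder(pet)
        # similarity between modality embeddings, usable in a contrastive loss term
        sim = F.cosine_similarity(z_mri.mean(1), z_pet.mean(1), dim=-1)
        tokens = torch.cat([z_mri, z_pet], dim=1)          # region tokens from both modalities
        fused, attn_w = self.attn(tokens, tokens, tokens)  # attn_w: relevance of each region/modality
        return self.head(fused.mean(dim=1)), sim, attn_w

model = SiameseFusion()
mri, pet = torch.randn(2, N_REGIONS, N_FEATS), torch.randn(2, N_REGIONS, N_FEATS)
logits, sim, attn_w = model(mri, pet)
print(logits.shape, sim.shape, attn_w.shape)    # (2, 2) (2,) (2, 232, 232)
```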

    Deep Residual Transfer Learning for Automatic Diabetic Retinopathy Grading.

    Evaluation and diagnosis of retinal pathology is usually made via the analysis of different image modalities that allow its structure to be explored. The most popular retinal imaging method is retinography, a technique that displays the fundus of the eye, including the retina and other structures. Retinography is the most common imaging method used to diagnose retinal diseases such as Diabetic Retinopathy (DR) or Macular Edema (ME). However, evaluating a retinography to score the image according to disease grade presents difficulties due to differences in contrast and brightness and the presence of artifacts. Therefore, it is mainly done via manual analysis, a time-consuming task that requires a trained clinician to examine and evaluate the images. In this paper, we present a computer-aided diagnosis tool that takes advantage of the performance provided by deep learning architectures for image analysis. Our proposal is based on a deep residual convolutional neural network for extracting discriminative features with no prior complex image transformations to enhance the image quality or to highlight specific structures. Moreover, we use the transfer learning paradigm to reuse layers from deep neural networks previously trained on the ImageNet dataset, under the hypothesis that the first layers capture abstract features that can be reused for different problems. Experiments using different convolutional architectures have been carried out and their performance has been evaluated on the MESSIDOR database using cross-validation. Best results were found using a ResNet50-based architecture, showing an AUC of 0.93 for grades 0 + 1, an AUC of 0.81 for grade 2 and an AUC of 0.92 for grade 3 labelling, as well as AUCs higher than 0.97 when considering a binary classification problem (grades 0 vs 3).

    This work was partly supported by the MINECO/FEDER under the TEC2015-64718-R, RTI2018-098913-B-I00, PSI2015-65848-R and PGC2018-098813-B-C32 projects. We gratefully acknowledge the support of NVIDIA Corporation with the donation of one of the GPUs used for this research. Work by F.J.M.M. was supported by the MICINN "Juan de la Cierva - Formación" Fellowship.
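
    The transfer-learning setup described above can be sketched in a few lines; this is an illustrative assumption rather than the paper's training script, and the grade grouping and freezing policy are made up for the example. An ImageNet-pretrained ResNet50 backbone is reused and only its final fully connected layer is replaced for retinopathy grading.

```python
import torch
import torch.nn as nn
from torchvision import models

N_GRADES = 3                                    # assumed grouping of MESSIDOR grades (0+1, 2, 3)

# ImageNet-pretrained backbone; the string weight name requires torchvision >= 0.13
backbone = models.resnet50(weights="IMAGENET1K_V1")
for p in backbone.parameters():
    p.requires_grad = False                     # freeze pretrained features (one possible policy)
backbone.fc = nn.Linear(backbone.fc.in_features, N_GRADES)  # new head, trainable by default

x = torch.randn(2, 3, 224, 224)                 # retinography crops resized to 224x224
print(backbone(x).shape)                        # torch.Size([2, 3])
```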

    Empirical Functional PCA for 3D Image Feature Extraction Through Fractal Sampling.

    Medical image classification is currently a challenging task that can be used to aid the diagnosis of different brain diseases. Thus, exploratory and discriminative analysis techniques aiming to obtain representative features from the images play a decisive role in the design of effective Computer Aided Diagnosis (CAD) systems, which is especially important for the early diagnosis of dementia. In this work, we present a technique that allows specific time series analysis techniques to be used with 3D images. This is achieved by sampling the image using a fractal-based method that preserves the spatial relationship among voxels. In addition, a method called Empirical functional PCA (EfPCA) is presented, which combines Empirical Mode Decomposition (EMD) with functional PCA to express an image in the space spanned by a basis of empirical functions, instead of using components computed from a predefined basis as in Fourier or Wavelet analysis. The devised technique has been used to classify images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Parkinson Progression Markers Initiative (PPMI), achieving accuracies of up to 93% and 92% in differential diagnosis tasks (AD versus controls and PD versus controls, respectively). The results obtained validate the method, proving that the information retrieved by our methodology is significantly linked to the diseases.

    This work was partly supported by the MINECO/FEDER under the TEC2015-64718-R and PSI2015-65848-R projects and the Consejería de Innovación, Ciencia y Empresa (Junta de Andalucía, Spain) under the Excellence Project P11-TIC-7103, as well as the Salvador de Madariaga Mobility Grants 2017. Data collection and sharing for this project was funded by the ADNI (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie; Alzheimer's Association; Alzheimer's Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California. PPMI, a public-private partnership, is funded by the Michael J. Fox Foundation for Parkinson's Research and funding partners (listed at www.ppmi-info.org/fundingpartners).
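
    A rough sketch of the sampling idea only, not the paper's EfPCA pipeline: here a Z-order (Morton) curve is used as a simple stand-in for the fractal sampling described above, turning each 3D volume into a locality-preserving 1D series, and a plain PCA across subjects stands in for the empirical functional PCA step; the EMD stage is omitted. All shapes and data are dummy values.

```python
import numpy as np

def morton_order(shape, bits=6):
    """Indices of all voxels, sorted by Morton (Z-order) code to preserve spatial locality."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = np.stack([zz.ravel(), yy.ravel(), xx.ravel()], axis=1)
    codes = np.zeros(len(coords), dtype=np.int64)
    for i in range(bits):                                   # interleave the bits of (z, y, x)
        for d in range(3):
            codes |= ((coords[:, d] >> i) & 1).astype(np.int64) << (3 * i + d)
    return np.argsort(codes, kind="stable")

def fractal_sample(volume, order):
    return volume.reshape(-1)[order]                        # 3D image -> locality-preserving 1D series

rng = np.random.default_rng(0)
volumes = rng.random((10, 16, 16, 16))                      # 10 dummy subjects
order = morton_order(volumes.shape[1:])
series = np.stack([fractal_sample(v, order) for v in volumes])   # (subjects, voxels)

# plain PCA across subjects as a placeholder for the empirical functional PCA
series -= series.mean(axis=0)
_, s, vt = np.linalg.svd(series, full_matrices=False)
components = series @ vt[:3].T                              # first 3 component scores per subject
print(components.shape)                                     # (10, 3)
```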

    Label Aided Deep Ranking for the Automatic Diagnosis of Parkinsonian Syndromes.

    Parkinsonism is the second most common neurodegenerative disease in the world. Its diagnosis usually relies on visual analysis of Single Photon Emission Computed Tomography (SPECT) images acquired using the 123I-ioflupane radiotracer, which aims to detect a deficit of dopamine transporters at the striatum. The use of computer-aided diagnosis tools based on statistical data processing and machine learning methods has significantly improved diagnostic accuracy. In this paper we propose a classification method based on Deep Ranking, which learns an embedding function that projects the source images into a new space in which samples belonging to the same class are closer to each other, while samples from different classes are moved apart. Moreover, the proposed approach introduces a new cost-sensitive loss function to avoid overfitting due to class imbalance (a common issue in practical biomedical applications), along with label information to produce sparser embedding spaces. The experiments carried out in this work demonstrate the superiority of the proposed method, improving the diagnostic accuracy achieved by previous methodologies and validating our approach as an efficient way to construct linear classifiers.

    This work was partly supported by the MINECO/FEDER under the TEC2015-64718-R and PSI2015-65848-R projects. We gratefully acknowledge the support of NVIDIA Corporation with the donation of one of the GPUs used for this research. PPMI, a public-private partnership, is funded by The Michael J. Fox Foundation for Parkinson's Research and funding partners, including Abbott, Biogen Idec, F. Hoffmann-La Roche Ltd., GE Healthcare, Genentech and Pfizer Inc.
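
    A hedged sketch of the general deep-ranking idea, not the paper's exact loss: an embedding network is trained with a triplet-style objective so that same-class images sit closer than different-class ones, and a per-class weight stands in for the cost-sensitive term that counters class imbalance. Network sizes, the margin, and the weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    def __init__(self, in_dim=1000, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))
    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)   # unit-norm embeddings

def weighted_triplet_loss(anchor, positive, negative, anchor_labels, class_weights, margin=0.2):
    """Standard triplet margin loss, reweighted by the anchor's class (assumed cost-sensitive variant)."""
    d_pos = (anchor - positive).pow(2).sum(1)
    d_neg = (anchor - negative).pow(2).sum(1)
    per_sample = torch.clamp(d_pos - d_neg + margin, min=0.0)
    return (class_weights[anchor_labels] * per_sample).mean()

net = EmbeddingNet()
x_a, x_p, x_n = (torch.randn(8, 1000) for _ in range(3))      # anchor / positive / negative batches
labels = torch.randint(0, 2, (8,))
w = torch.tensor([0.3, 0.7])                                  # heavier weight on the minority class (assumed)
loss = weighted_triplet_loss(net(x_a), net(x_p), net(x_n), labels, w)
loss.backward()
print(float(loss))
```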

    Using CT Data to Improve the Quantitative Analysis of 18F-FBB PET Neuroimages

    18F-FBB PET is a neuroimaging modality that is being increasingly used to assess brain amyloid deposits in potential patients with Alzheimer's disease (AD). In this work, we analyze the usefulness of these data to distinguish between AD and non-AD patients. A dataset with 18F-FBB PET brain images from 94 subjects diagnosed with AD and other disorders was evaluated by means of multiple analyses based on t-tests, ANOVA, Fisher Discriminant Analysis and Support Vector Machine (SVM) classification. In addition, we propose to calculate amyloid standardized uptake values (SUVs) using only gray-matter voxels, which can be estimated using Computed Tomography (CT) images. This approach allows potential brain amyloid deposits to be assessed along with gray matter loss, and takes advantage of the structural information provided by most of the scanners used for PET examination, which allow simultaneous PET and CT data acquisition. The results obtained in this work suggest that SUVs calculated according to the proposed method allow AD and non-AD subjects to be differentiated more accurately than SUVs calculated with standard approaches.

    This work was supported by the MINECO/FEDER under the TEC2012-34306 and TEC2015-64718-R projects and the Ministry of Economy, Innovation, Science and Employment of the Junta de Andalucía under the Excellence Project P11-TIC-7103. The work was also supported by the Vice-Rectorate for Research and Knowledge Transfer of the University of Granada.
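
    A minimal illustrative sketch of the gray-matter-restricted uptake idea, not the paper's pipeline: the masks are dummy arrays (a real GM mask would come from CT segmentation), and the cerebellar reference region is a common but assumed choice. The quantity computed is a reference-normalized uptake ratio over GM voxels only.

```python
import numpy as np

def masked_suvr(pet, gm_mask, target_mask, reference_mask):
    """Mean PET uptake in target gray-matter voxels divided by mean uptake in reference gray-matter voxels."""
    target = pet[gm_mask & target_mask]
    reference = pet[gm_mask & reference_mask]
    return target.mean() / reference.mean()

rng = np.random.default_rng(0)
shape = (32, 32, 32)
pet = rng.random(shape)                              # dummy 18F-FBB PET volume
gm_mask = rng.random(shape) > 0.4                    # gray-matter mask (would come from CT segmentation)
target_mask = np.zeros(shape, bool); target_mask[8:16, 8:16, 8:16] = True          # e.g. a cortical ROI
reference_mask = np.zeros(shape, bool); reference_mask[20:28, 20:28, 20:28] = True  # e.g. cerebellum
print(round(masked_suvr(pet, gm_mask, target_mask, reference_mask), 3))
```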