222 research outputs found

    Identification of MCI individuals using structural and functional connectivity networks

    Different imaging modalities provide essential complementary information that can be used to enhance our understanding of brain disorders. This study focuses on integrating multiple imaging modalities to identify individuals at risk for mild cognitive impairment (MCI). MCI, often an early stage of Alzheimer’s disease (AD), is difficult to diagnose due to its very mild or subtle symptoms of cognitive impairment. The recent emergence of brain network analysis has made it possible to characterize neurological disorders at the whole-brain connectivity level, thus providing new avenues for brain disease classification. Employing multiple-kernel Support Vector Machines (SVMs), we attempt to integrate information from diffusion tensor imaging (DTI) and resting-state functional magnetic resonance imaging (rs-fMRI) to improve classification performance. Our results indicate that the multimodality classification approach yields a statistically significant improvement in accuracy over using each modality independently. The classification accuracy obtained by the proposed method is 96.3%, at least 7.4% higher than the single-modality methods and the direct data fusion method. A cross-validation estimate of the generalization performance gives an area of 0.953 under the receiver operating characteristic (ROC) curve, indicating excellent diagnostic power. The multimodality classification approach hence allows more accurate early detection of brain abnormalities with greater sensitivity.
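    The multiple-kernel fusion described in this abstract amounts to combining per-modality kernel matrices into one Gram matrix for an SVM. A minimal sketch with synthetic stand-in data (the subject count, feature dimensions, linear kernels, and mixing weight are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins for connectivity features from the two modalities
# (DTI and rs-fMRI): 40 subjects x 20 features each, with binary labels.
X_dti = rng.normal(size=(40, 20))
X_fmri = rng.normal(size=(40, 20))
y = rng.integers(0, 2, size=40)

def linear_kernel(A, B):
    # Gram matrix of inner products between row vectors.
    return A @ B.T

# Multi-kernel fusion: convex combination of per-modality kernels;
# the weight beta would be tuned by cross-validation in practice.
beta = 0.5
K = beta * linear_kernel(X_dti, X_dti) + (1 - beta) * linear_kernel(X_fmri, X_fmri)

# An SVM trained directly on the fused precomputed kernel.
clf = SVC(kernel="precomputed").fit(K, y)
```

    At test time the same fusion is applied to the kernel between test and training subjects before calling `clf.predict`.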

    Deep learning-based multimodality classification of chronic mild traumatic brain injury using resting-state functional MRI and PET imaging

    Mild traumatic brain injury (mTBI) is a public health concern. The present study aimed to develop an automatic classifier to distinguish between patients with chronic mTBI (n = 83) and healthy controls (HCs) (n = 40). Resting-state functional MRI (rs-fMRI) and positron emission tomography (PET) imaging were acquired from the subjects. We proposed a novel deep-learning-based framework, including an autoencoder (AE) with rectified linear unit (ReLU) and sigmoid activation functions, to extract high-level latent features. Single- and multimodality algorithms integrating multiple rs-fMRI metrics and PET data were developed. We hypothesized that combining different imaging modalities provides complementary information and improves classification performance. Additionally, a novel data interpretation approach was utilized to identify the top-performing features learned by the AEs. Our method delivered classification accuracies in the range of 79–91.67% for single neuroimaging modalities. However, classification performance improved to 95.83% when employing the multimodality model. The models identified several brain regions located in the default mode network, sensorimotor network, visual cortex, cerebellum, and limbic system as the most discriminative features. We suggest that this approach could be extended to provide objective biomarkers for predicting mTBI in clinical settings.
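    The core idea above, per-modality autoencoder feature extraction followed by multimodality fusion, can be sketched as follows. The encoder shapes, random (untrained) weights, and toy data are assumptions for illustration; in the study the weights would be learned by minimizing reconstruction loss:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for per-subject rs-fMRI and PET feature vectors (10 subjects).
X_fmri = rng.normal(size=(10, 16))
X_pet = rng.normal(size=(10, 12))

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(X, W1, W2):
    # Two-layer encoder with ReLU then sigmoid activations, as described.
    return sigmoid(relu(X @ W1) @ W2)

# Untrained random weights stand in for encoders trained per modality.
W1_f = rng.normal(scale=0.1, size=(16, 8))
W2_f = rng.normal(scale=0.1, size=(8, 4))
W1_p = rng.normal(scale=0.1, size=(12, 8))
W2_p = rng.normal(scale=0.1, size=(8, 4))

# Multimodality fusion: concatenate the latent codes from both encoders,
# yielding one feature vector per subject for the downstream classifier.
Z = np.hstack([encode(X_fmri, W1_f, W2_f), encode(X_pet, W1_p, W2_p)])
```

    The fused matrix `Z` (subjects x concatenated latent features) would then be passed to the classification layer.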

    Computational Language Assessment in patients with speech, language, and communication impairments

    Speech, language, and communication symptoms enable the early detection, diagnosis, treatment planning, and monitoring of neurocognitive disease progression. Nevertheless, traditional manual neurologic assessment, the standard for speech and language evaluation, is time-consuming and resource-intensive for clinicians. We argue that Computational Language Assessment (C.L.A.) is an improvement over conventional manual neurological assessment. Using machine learning, natural language processing, and signal processing, C.L.A.: i. provides a neurocognitive evaluation of speech, language, and communication in elderly individuals and those at high risk for dementia; ii. facilitates diagnosis, prognosis, and assessment of therapy efficacy in at-risk and language-impaired populations; and iii. allows easier extensibility to assess patients from a wide range of languages. Also, C.L.A. employs Artificial Intelligence models to inform theory on the relationship between language symptoms and their neural bases. It significantly advances our ability to optimize the prevention and treatment of elderly individuals with communication disorders, allowing them to age gracefully with social engagement. Comment: 36 pages, 2 figures, to be submitted

    Hybrid High-order Functional Connectivity Networks Using Resting-state Functional MRI for Mild Cognitive Impairment Diagnosis

    Conventional functional connectivity (FC), referred to as low-order FC, estimates the temporal correlation of resting-state functional magnetic resonance imaging (rs-fMRI) time series between any pair of brain regions, simply ignoring the potentially high-level relationships among these brain regions. A high-order FC based on "correlation's correlation" has emerged as a new approach for abnormality detection in brain disease. However, separate construction of the low- and high-order FC networks overlooks information exchange between the two FC levels. Such a higher-level relationship could be more important for the study of brain diseases. In this paper, we propose a novel framework, namely "hybrid high-order FC networks", exploiting the higher-level dynamic interaction among brain regions for early mild cognitive impairment (eMCI) diagnosis. For each sliding window-based rs-fMRI sub-series, we construct a whole-brain associated high-order network, by estimating the correlations between the topographical information of the high-order FC sub-network from one brain region and that of the low-order FC sub-network from another brain region. With multi-kernel learning, complementary features from multiple time-varying FC networks constructed at different levels are fused for eMCI classification. Compared with other state-of-the-art methods, the proposed framework achieves superior diagnosis accuracy, and hence could be promising for understanding pathological changes of the brain connectome.
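    The low-order versus high-order distinction above has a compact numerical form: the low-order FC is the correlation matrix of the windowed time series, and the high-order FC is the correlation between rows of that matrix ("correlation's correlation"). A minimal sketch on synthetic data (the region count, window length, and step size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy rs-fMRI stand-in: 6 brain regions x 120 time points.
ts = rng.normal(size=(6, 120))

# Sliding windows over the time axis (length 40, step 20).
win, step = 40, 20
windows = [ts[:, s:s + win] for s in range(0, ts.shape[1] - win + 1, step)]

for sub in windows:
    # Low-order FC: pairwise temporal correlation between regions.
    low = np.corrcoef(sub)
    # High-order FC: correlation between regions' connectivity profiles,
    # i.e. the rows of the low-order FC matrix.
    high = np.corrcoef(low)
```

    Each window thus yields one low-order and one high-order network, and features from both levels can be fused for classification, as in the multi-kernel learning step the abstract describes.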

    Hyper-connectivity of functional networks for brain disease diagnosis

    Exploring structural and functional interactions among various brain regions enables better understanding of the pathological underpinnings of neurological disorders. The brain connectivity network, as a simplified representation of those structural and functional interactions, has been widely used for the diagnosis and classification of neurodegenerative diseases, especially Alzheimer’s disease (AD) and its early stage - mild cognitive impairment (MCI). However, the conventional functional connectivity network is usually constructed based on pairwise correlations among different brain regions and thus ignores their higher-order relationships. Such loss of high-order information could be important for disease diagnosis, since neurologically a brain region predominantly interacts with more than one other brain region. Accordingly, in this paper, we propose a novel framework for estimating the hyper-connectivity network of brain functions and then use this hyper-network for brain disease diagnosis. Here, the functional connectivity hyper-network denotes a network in which each edge represents interactions among multiple brain regions (i.e., an edge can connect more than two brain regions), which can be naturally represented by a hyper-graph. Specifically, we first construct connectivity hyper-networks from the resting-state fMRI (R-fMRI) time series by using sparse representation. Then, we extract three sets of brain-region-specific features from the connectivity hyper-networks, and further exploit a manifold-regularized multi-task feature selection method to jointly select the most discriminative features. Finally, we use a multi-kernel support vector machine (SVM) for classification.
    The experimental results on both an MCI dataset and an attention deficit hyperactivity disorder (ADHD) dataset demonstrate that, compared with conventional connectivity network-based methods, the proposed method can not only improve classification performance, but also help discover disease-related biomarkers important for disease diagnosis.
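    The sparse-representation step above is commonly realized by regressing each region's time series on all other regions with an l1 penalty; the regions receiving nonzero weights form a hyperedge with the target region. A sketch under that assumption, with synthetic data and an illustrative penalty strength (the paper's exact formulation and parameters may differ):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)

# Toy R-fMRI stand-in: 120 time points x 8 brain regions.
ts = rng.normal(size=(120, 8))

hyperedges = []
for j in range(ts.shape[1]):
    others = np.delete(np.arange(ts.shape[1]), j)
    # Sparse (l1-penalized) regression of region j on all remaining regions.
    model = Lasso(alpha=0.1).fit(ts[:, others], ts[:, j])
    # The hyperedge connects region j with every region given nonzero weight,
    # so a single edge can span more than two regions.
    members = {j, *others[np.abs(model.coef_) > 1e-8]}
    hyperedges.append(members)
```

    The resulting list of member sets defines the hyper-graph from which region-specific features are then extracted.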

    Hierarchical Graph Convolutional Network Built by Multiscale Atlases for Brain Disorder Diagnosis Using Functional Connectivity

    Functional connectivity network (FCN) data from functional magnetic resonance imaging (fMRI) are increasingly used for the diagnosis of brain disorders. However, state-of-the-art studies typically build the FCN using a single brain parcellation atlas at a fixed spatial scale, largely neglecting functional interactions across spatial scales in a hierarchical manner. In this study, we propose a novel framework to perform multiscale FCN analysis for brain disorder diagnosis. We first use a set of well-defined multiscale atlases to compute multiscale FCNs. Then, we utilize biologically meaningful brain hierarchical relationships among the regions in the multiscale atlases to perform nodal pooling across multiple spatial scales, namely "Atlas-guided Pooling". Accordingly, we propose a Multiscale-Atlases-based Hierarchical Graph Convolutional Network (MAHGCN), built on stacked layers of graph convolution and atlas-guided pooling, for comprehensive extraction of diagnostic information from multiscale FCNs. Experiments on neuroimaging data from 1792 subjects demonstrate the effectiveness of the proposed method in the diagnosis of Alzheimer's disease (AD), the prodromal stage of AD (i.e., mild cognitive impairment [MCI]), and autism spectrum disorder (ASD), with accuracies of 88.9%, 78.6%, and 72.7%, respectively. All results show significant advantages of the proposed method over competing methods. This study not only demonstrates the feasibility of brain disorder diagnosis using resting-state fMRI empowered by deep learning, but also highlights that the functional interactions in the multiscale brain hierarchy are worth exploring and integrating into deep learning network architectures for a better understanding of the neuropathology of brain disorders.
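    The two building blocks named above, a graph-convolution layer on the FCN and atlas-guided pooling via a hierarchy mapping fine regions to coarse ones, can be sketched in NumPy. The 8-to-4-region hierarchy, weight shapes, and normalization are illustrative assumptions, not the MAHGCN architecture itself:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy FCN on a fine atlas of 8 regions: adjacency = |correlation| of
# synthetic time series; node features = the connectivity rows themselves.
A = np.abs(np.corrcoef(rng.normal(size=(8, 50))))
X = A.copy()

# One graph-convolution layer: symmetrically normalized propagation + ReLU.
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_norm = D_inv_sqrt @ A @ D_inv_sqrt
W = rng.normal(scale=0.1, size=(8, 4))
H = np.maximum(0.0, A_norm @ X @ W)   # (8 fine nodes, 4 features)

# Atlas-guided pooling: a hypothetical hierarchy maps the 8 fine regions
# onto 4 coarse regions; each coarse node averages its two children.
P = np.zeros((4, 8))
for coarse, fine_pair in enumerate([(0, 1), (2, 3), (4, 5), (6, 7)]):
    P[coarse, list(fine_pair)] = 0.5

H_pooled = P @ H   # (4 coarse nodes, 4 features)
```

    Stacking such convolution-plus-pooling layers, one per atlas scale, yields the hierarchical readout the abstract describes.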

    Identification of progressive mild cognitive impairment patients using incomplete longitudinal MRI scans

    Distinguishing progressive mild cognitive impairment (pMCI) from stable mild cognitive impairment (sMCI) is critical for identifying patients who are at risk for Alzheimer’s disease (AD), so that early treatment can be administered. In this paper, we propose a pMCI/sMCI classification framework that harnesses information available in longitudinal magnetic resonance imaging (MRI) data, which may be incomplete, to improve diagnostic accuracy. Volumetric features were first extracted from the baseline MRI scan and subsequent scans acquired after 6, 12, and 18 months. Dynamic features were then obtained by using the 18th-month scan as the reference and computing the ratios of feature differences for the earlier scans. Features that are linearly or non-linearly correlated with the diagnostic labels are then selected using two elastic net sparse learning algorithms. Missing feature values due to the incomplete longitudinal data are imputed using a low-rank matrix completion method. Finally, based on the completed feature matrix, we build a multi-kernel support vector machine (mkSVM) to predict the diagnostic labels of samples with unknown diagnostic status. Our evaluation indicates that a diagnostic accuracy as high as 78.2% can be achieved when information from the longitudinal scans is used – 6.6% higher than when using only the reference time point image. In other words, information provided by the longitudinal history of the disease improves diagnostic accuracy.
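    The low-rank imputation step above can be illustrated with a SoftImpute-style iteration: soft-threshold the singular values of the current estimate, then restore the observed entries. The matrix sizes, true rank, missingness rate, and threshold are synthetic assumptions; the paper's specific completion algorithm may differ:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical longitudinal feature matrix (subjects x features-over-visits),
# truly rank-2, with roughly 30% of entries missing (NaN), mimicking
# subjects who skipped some follow-up scans.
U = rng.normal(size=(20, 2))
V = rng.normal(size=(2, 12))
M_true = U @ V
mask = rng.random(M_true.shape) > 0.3   # True where the entry is observed
M = np.where(mask, M_true, np.nan)

# SoftImpute-style completion: repeatedly shrink singular values of the
# current fill-in, then clamp the observed entries back to their values.
X = np.where(mask, M, 0.0)
for _ in range(100):
    Uu, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - 0.1, 0.0)        # soft-threshold singular values
    X_low = (Uu * s) @ Vt               # low-rank reconstruction
    X = np.where(mask, M, X_low)        # keep observed entries fixed

# Mean absolute error on the entries that were actually missing.
err = np.abs(X - M_true)[~mask].mean()
```

    The completed matrix `X` then feeds the multi-kernel SVM in place of the incomplete feature matrix.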