
    Automatic Autism Spectrum Disorder Detection Using Artificial Intelligence Methods with MRI Neuroimaging: A Review

    Autism spectrum disorder (ASD) is a brain condition characterized by diverse signs and symptoms that appear in early childhood. ASD is also associated with communication deficits and repetitive behavior in affected individuals. Various ASD detection methods have been developed, including neuroimaging modalities and psychological tests. Among these methods, magnetic resonance imaging (MRI) modalities are of paramount importance to physicians, who rely on them to diagnose ASD accurately. MRI modalities are non-invasive and include functional (fMRI) and structural (sMRI) neuroimaging methods. However, diagnosing ASD from fMRI and sMRI is often laborious and time-consuming for specialists; therefore, several computer-aided design systems (CADS) based on artificial intelligence (AI) have been developed to assist specialist physicians. Conventional machine learning (ML) and deep learning (DL) are the most popular AI schemes used for diagnosing ASD. This study aims to review the automated detection of ASD using AI. We review several CADS that have been developed using ML techniques for the automated diagnosis of ASD with MRI modalities. There has been very limited work on the use of DL techniques to develop automated diagnostic models for ASD; a summary of the studies developed using DL is provided in the appendix. The challenges encountered during the automated diagnosis of ASD using MRI and AI techniques are then described in detail. Additionally, a graphical comparison of studies using ML and DL to diagnose ASD automatically is discussed. We conclude by suggesting future approaches to detecting ASD using AI techniques and MRI neuroimaging.
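
As a concrete illustration of the kind of ML-based CADS surveyed in this review, the sketch below builds functional-connectivity features from ROI time series and classifies them with a linear SVM. The arrays, the 116-ROI parcellation, and the labels are hypothetical placeholders, not real ABIDE data or any specific pipeline from the reviewed studies.

```python
# Minimal sketch: rs-fMRI ROI time series -> connectivity features -> SVM classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_rois = 100, 150, 116   # 116-ROI atlas is only an assumed example
timeseries = rng.standard_normal((n_subjects, n_timepoints, n_rois))
labels = rng.integers(0, 2, size=n_subjects)       # 1 = ASD, 0 = control (placeholder)

def connectivity_features(ts):
    """Upper-triangular entries of the ROI-ROI Pearson correlation matrix."""
    corr = np.corrcoef(ts, rowvar=False)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

X = np.array([connectivity_features(ts) for ts in timeseries])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```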

    Machine Learning for the Diagnosis of Autism Spectrum Disorder

    Autism Spectrum Disorder (ASD) is a neurological disorder. It refers to a wide range of behavioral and social abnormalities and causes problems with social skills, repetitive behaviors, speech, and nonverbal communication. Even though there is no exact cure for ASD, an early diagnosis can help the patient take precautionary steps. Diagnosis of ASD has been of great interest recently, as researchers have yet to find a specific biomarker that detects the disease reliably. For the diagnosis of ASD, subjects need to go through behavioral observation and interviews, which are sometimes inaccurate. In addition, there is little dissimilarity between neuroimages of ASD subjects and healthy control (HC) subjects, which makes the use of neuroimages for diagnosis difficult. Machine learning-based approaches to diagnosing ASD are therefore becoming increasingly popular. In these approaches, features are extracted either from functional MRI images or from structural MRI images to build the models. In this study, I first created brain networks from resting-state functional MRI (rs-fMRI) images using a 264-region parcellation scheme; these 264 regions capture the functional activity of the brain more accurately than regions defined in other parcellation schemes. Next, I extracted the spectrum as a raw feature and combined it with other network-based topological centralities: assortativity, clustering coefficient, and average degree. By applying a feature selection algorithm to the extracted features, I reduced the dimension of the features to mitigate overfitting. I then used the selected features in a support vector machine (SVM), K-nearest neighbor (KNN), linear discriminant analysis (LDA), and logistic regression (LR) for the diagnosis of ASD. Using the proposed method on the Autism Brain Imaging Data Exchange (ABIDE) dataset, I achieved classification accuracies of 78.4% for LDA, 77.0% for LR, 73.5% for SVM, and 73.8% for KNN. Next, I built a deep neural network (DNN) model for classification and feature selection using an autoencoder. In this approach, I used the previously defined features to build the DNN classifier, which is pre-trained using the autoencoder; this pre-training significantly increases the performance of the DNN classifier. I also proposed an autoencoder-based feature selector, in which the latent-space representation of the autoencoder is used to create a discriminative and compressed representation of the features. To make the representation more discriminative, the autoencoder is pre-trained together with the DNN classifier. The classification accuracies of the DNN classifier and the autoencoder-based feature selector are 79.2% and 74.6%, respectively. Finally, I studied structural MRI images and proposed a convolutional autoencoder (CAE) based classification model, using T1-weighted MRI images without any pre-processing. As the effect of age is very important when studying structural images for the diagnosis of ASD, I used the ABIDE I dataset, which covers subjects with a wide range of ages. Using the proposed CAE-based diagnosis method, I achieved a classification accuracy of 96.6%, which is higher than that of any other study for the diagnosis of ASD using the ABIDE I dataset.
The results of this thesis demonstrate that the spectrum of the brain networks is an essential feature for the diagnosis of ASD and that, rather than extracting features from structural MRI images, a more efficient approach is to feed the images directly into deep learning models. The proposed studies in this thesis can help to build an early diagnosis model for ASD.
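
A rough sketch of the graph-feature stage described above follows: it thresholds an ROI correlation matrix into a brain network, extracts the adjacency spectrum together with assortativity, clustering coefficient, and average degree, and classifies with LDA. The connectivity matrices, threshold value, and labels are placeholders, not the thesis's actual ABIDE preprocessing or tuned pipeline.

```python
# Sketch: brain network from an ROI correlation matrix -> spectrum + centralities -> LDA.
import numpy as np
import networkx as nx
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_rois = 80, 264          # 264-region parcellation, as in the thesis

def graph_features(corr, threshold=0.1):
    """Adjacency spectrum plus assortativity, clustering coefficient, and average degree."""
    adj = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    G = nx.from_numpy_array(adj)
    spectrum = np.sort(np.linalg.eigvalsh(adj))            # raw spectral feature
    degrees = np.array([d for _, d in G.degree()])
    extras = np.nan_to_num([
        nx.degree_assortativity_coefficient(G),
        nx.average_clustering(G),
        degrees.mean(),                                    # average degree
    ])
    return np.concatenate([spectrum, extras])

# Placeholder connectivity data; a real pipeline would use preprocessed rs-fMRI time series.
X = np.array([graph_features(np.corrcoef(rng.standard_normal((150, n_rois)), rowvar=False))
              for _ in range(n_subjects)])
y = rng.integers(0, 2, size=n_subjects)                    # 1 = ASD, 0 = control (placeholder)

print("LDA cross-validated accuracy:",
      cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
```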

    Multimodal Data Fusion and Quantitative Analysis for Medical Applications

    Medical big data is not only enormous in size but also heterogeneous and complex in structure, which makes it difficult for conventional systems or algorithms to process. These heterogeneous medical data include imaging data (e.g., Positron Emission Tomography (PET), Computerized Tomography (CT), and Magnetic Resonance Imaging (MRI)) and non-imaging data (e.g., laboratory biomarkers, electronic medical records, and hand-written doctor notes). Multimodal data fusion is an emerging field that addresses this challenge, aiming to process and analyze complex, diverse, and heterogeneous multimodal data. Fusion algorithms hold great potential for medical data analysis by 1) taking advantage of complementary information from different sources (such as the functional-structural complementarity of PET/CT images) and 2) exploiting consensus information that reflects the intrinsic essence (such as the genetic essence underlying medical imaging and clinical symptoms). Thus, multimodal data fusion benefits a wide range of quantitative medical applications, including personalized patient care, better medical operation planning, and preventive public health. Though there has been extensive research on computational approaches to multimodal fusion, three major challenges remain in quantitative medical applications, summarized as feature-level fusion, information-level fusion, and knowledge-level fusion:
    • Feature-level fusion. The first challenge is to mine multimodal biomarkers from high-dimensional, small-sample multimodal medical datasets, whose dimensionality and limited sample sizes hinder the effective discovery of informative multimodal biomarkers. Specifically, efficient dimension reduction algorithms are required to alleviate the "curse of dimensionality" and to satisfy the criteria for discovering interpretable, relevant, non-redundant, and generalizable multimodal biomarkers (a minimal illustrative sketch follows this list).
    • Information-level fusion. The second challenge is to exploit and interpret inter-modal and intra-modal information for precise clinical decisions. Although radiomics and multi-branch deep learning have been used for implicit information fusion guided by label supervision, there is a lack of methods that explicitly explore inter-modal relationships in medical applications. Unsupervised multimodal learning can mine inter-modal relationships, reduce the reliance on labor-intensive labeled data, and explore potentially undiscovered biomarkers; however, mining discriminative information without label supervision remains an open challenge. Furthermore, interpreting complex non-linear cross-modal associations, especially in deep multimodal learning, is another critical challenge in information-level fusion, which hinders the exploration of multimodal interactions in disease mechanisms.
    • Knowledge-level fusion. The third challenge is quantitative knowledge distillation from multi-focus regions in medical imaging. Although characterizing imaging features from single lesions using either feature engineering or deep learning has been investigated in recent years, both approaches neglect the importance of inter-region spatial relationships. A topological profiling tool for multi-focus regions is therefore in high demand, yet it is missing from current feature engineering and deep learning methods. Furthermore, incorporating domain knowledge with the knowledge distilled from multi-focus regions is another challenge in knowledge-level fusion.
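
To make the feature-level fusion challenge concrete, the following minimal sketch selects a small, non-redundant feature subset from a hypothetical high-dimensional, small-sample dataset using a rank-sum filter followed by sequential forward selection. It is a generic illustration under assumed data and parameters, not the specific pipeline proposed in this work.

```python
# Sketch: univariate rank-sum filter, then wrapper-based sequential forward selection.
import numpy as np
from scipy.stats import ranksums
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 500))        # hypothetical: 60 patients, 500 multimodal features
y = rng.integers(0, 2, size=60)           # hypothetical binary outcome

# Step 1: univariate filter -- keep features whose distributions differ most between groups.
pvals = np.array([ranksums(X[y == 0, j], X[y == 1, j]).pvalue for j in range(X.shape[1])])
keep = np.argsort(pvals)[:50]

# Step 2: wrapper selection on the filtered set to prune redundant features.
sfs = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                n_features_to_select=10, direction="forward", cv=3)
sfs.fit(X[:, keep], y)
selected = keep[sfs.get_support()]
print("selected feature indices:", selected)
```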
To address the three challenges in multimodal data fusion, this thesis provides a multi-level fusion framework for multimodal biomarker mining, multimodal deep learning, and knowledge distillation from multi-focus regions. Specifically, our major contributions include:
• To address the challenges in feature-level fusion, we propose an Integrative Multimodal Biomarker Mining framework to select interpretable, relevant, non-redundant, and generalizable multimodal biomarkers from high-dimensional, small-sample imaging and non-imaging data for diagnostic and prognostic applications. The feature selection criteria of representativeness, robustness, discriminability, and non-redundancy are addressed by consensus clustering, a Wilcoxon filter, sequential forward selection, and correlation analysis, respectively. The SHapley Additive exPlanations (SHAP) method and a nomogram are employed to further enhance feature interpretability in machine learning models.
• To address the challenges in information-level fusion, we propose an Interpretable Deep Correlational Fusion framework, based on canonical correlation analysis (CCA), for 1) cohesive multimodal fusion of medical imaging and non-imaging data and 2) interpretation of complex non-linear cross-modal associations. Specifically, two novel loss functions are proposed to optimize the discovery of informative multimodal representations in both supervised and unsupervised deep learning by jointly learning inter-modal consensus and intra-modal discriminative information. An interpretation module is proposed to decipher complex non-linear cross-modal associations by leveraging interpretation methods from both deep learning and multimodal consensus learning.
• To address the challenges in knowledge-level fusion, we propose a Dynamic Topological Analysis (DTA) framework, based on persistent homology, for knowledge distillation from inter-connected multi-focus regions in medical imaging and for the incorporation of domain knowledge. Different from conventional feature engineering and deep learning, our DTA framework can explicitly quantify inter-region topological relationships, including global-level geometric structure and community-level clusters. A K-simplex Community Graph is proposed to construct the dynamic community graph representing community-level multi-scale graph structure. The constructed dynamic graph is subsequently tracked with a novel Decomposed Persistence algorithm. Domain knowledge is incorporated into an Adaptive Community Profile, which summarizes the tracked multi-scale community topology together with additional customizable, clinically important factors.
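
As a lightweight stand-in for the correlational fusion idea in the second contribution, the sketch below fuses an imaging view and a non-imaging view with classical CCA from scikit-learn. The deep, interpretable components of the proposed framework are not reproduced here, and all arrays and dimensions are hypothetical placeholders.

```python
# Sketch: classical CCA fusion of an imaging view and a non-imaging (clinical) view.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_patients = 120
X_imaging = rng.standard_normal((n_patients, 100))   # e.g., radiomic / imaging features
X_clinical = rng.standard_normal((n_patients, 30))    # e.g., lab biomarkers, record-derived features

cca = CCA(n_components=5)
Z_img, Z_clin = cca.fit_transform(X_imaging, X_clinical)

# The paired projections capture inter-modal consensus; a fused representation can be
# formed by concatenating (or averaging) them before a downstream classifier.
fused = np.concatenate([Z_img, Z_clin], axis=1)
corrs = [np.corrcoef(Z_img[:, k], Z_clin[:, k])[0, 1] for k in range(5)]
print("canonical correlations:", np.round(corrs, 3))
print("fused representation shape:", fused.shape)
```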