Multichannel Deep Attention Neural Networks for the Classification of Autism Spectrum Disorder Using Neuroimaging and Personal Characteristic Data
Autism spectrum disorder (ASD) is a developmental disorder that impacts more than 1.6% of children aged 8 across the United States. It is characterized by impairments in social interaction and communication, as well as by a restricted repertoire of activities and interests. The current standardized clinical diagnosis of ASD remains subjective, relying mainly on behavior-based tests. However, the diagnostic process for ASD is not only time-consuming but also costly, causing a tremendous financial burden for patients' families. Therefore, automated diagnosis approaches are an attractive option for earlier identification of ASD. In this work, we set out to develop a deep learning model for automated diagnosis of ASD. Specifically, a multichannel deep attention neural network (DANN) was proposed that integrates multiple layers of neural networks, an attention mechanism, and feature fusion to capture the interrelationships in multimodal data. We evaluated the proposed multichannel DANN model on the Autism Brain Imaging Data Exchange (ABIDE) repository with 809 subjects (408 ASD patients and 401 typical development controls). Our model achieved a state-of-the-art accuracy of 0.732 on ASD classification by integrating three scales of brain functional connectomes and personal characteristic data, outperforming multiple peer machine learning models in a k-fold cross-validation experiment. Additional k-fold and leave-one-site-out cross-validation experiments were conducted to test the generalizability and robustness of the proposed multichannel DANN model. The results show promise for deep learning models to aid future automated clinical diagnosis of ASD.
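The multichannel attention-and-fusion idea described in this abstract can be illustrated with a minimal numpy sketch. Everything here is a placeholder: the four channel embeddings (standing in for three connectome scales plus personal characteristic data) and the attention vector are random, whereas the actual DANN learns them end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-channel embeddings for one subject: three connectome
# scales plus personal characteristic data, each already reduced to a
# 16-dimensional feature vector by its own channel network.
channels = [rng.standard_normal(16) for _ in range(4)]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Attention scores: a scoring vector (random here, a stand-in for trained
# weights) rates each channel; softmax turns scores into fusion weights.
attn_vec = rng.standard_normal(16)
scores = np.array([c @ attn_vec for c in channels])
weights = softmax(scores)

# Feature fusion: attention-weighted sum of channel embeddings.
fused = sum(w * c for w, c in zip(weights, channels))
```

The key point is that the softmax attention yields per-channel weights summing to one, so the fused representation is a learned convex combination of the channels rather than a fixed concatenation.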
Analysis of Brain Imaging Data for the Detection of Early Age Autism Spectrum Disorder Using Transfer Learning Approaches for Internet of Things
In recent years, advanced magnetic resonance imaging (MRI) methods, including functional MRI (fMRI) and structural MRI (sMRI), have indicated an increase in the prevalence of neuropsychiatric disorders such as autism spectrum disorder (ASD), which affects one out of six children worldwide. Data-driven techniques, along with medical image analysis techniques such as computer-assisted diagnosis (CAD), benefit from deep learning. With the use of artificial intelligence (AI) and IoT-based intelligent approaches, it would be convenient to support autistic children in adapting to new environments. In this paper, we classify and represent learning tasks using powerful deep learning networks, namely a convolutional neural network (CNN) and a transfer learning algorithm, on a combination of data from the Autism Brain Imaging Data Exchange (ABIDE I and ABIDE II) datasets. Due to their four-dimensional nature (three spatial dimensions and one temporal dimension), resting-state fMRI (rs-fMRI) data can be used to develop diagnostic biomarkers for brain dysfunction. ABIDE is a collaboration of global scientists; ABIDE-I and ABIDE-II consist of 1112 rs-fMRI datasets from 573 typical control (TC) and 539 autism individuals, and 1114 rs-fMRI datasets from 521 autism and 593 typical control individuals, respectively, collected from 17 different sites. Our proposed optimized version of CNN achieved 81.56% accuracy, outperforming prior conventional approaches evaluated only on the ABIDE I datasets.
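As background to the rs-fMRI pipeline this abstract describes, the step from 4-D scans to classifier inputs is typically a functional connectivity matrix. A hedged numpy sketch, with a random toy time series standing in for atlas-extracted ROI signals from real ABIDE data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rs-fMRI signal: 120 time points for 10 regions of interest (ROIs).
# Real preprocessing would extract these from 4-D volumes via a brain atlas.
n_time, n_rois = 120, 10
ts = rng.standard_normal((n_time, n_rois))

# Functional connectivity: Pearson correlation between every ROI pair.
fc = np.corrcoef(ts, rowvar=False)   # shape (10, 10), symmetric, diagonal = 1

# Flatten the upper triangle into the feature vector a CNN or other
# classifier can consume (the diagonal and lower triangle are redundant).
iu = np.triu_indices(n_rois, k=1)
features = fc[iu]                    # 45 unique connections
```

With a real atlas of ~100-400 ROIs, the same two lines yield the several-thousand-dimensional connectivity features commonly fed to ABIDE classifiers.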
Automatic Autism Spectrum Disorder Detection Using Artificial Intelligence Methods with MRI Neuroimaging: A Review
Autism spectrum disorder (ASD) is a brain condition characterized by diverse
signs and symptoms that appear in early childhood. ASD is also associated with
communication deficits and repetitive behavior in affected individuals. Various
ASD detection methods have been developed, including neuroimaging modalities
and psychological tests. Among these methods, magnetic resonance imaging (MRI)
modalities are of paramount importance to physicians. Clinicians rely
on MRI modalities to diagnose ASD accurately. The MRI modalities are
non-invasive methods that include functional (fMRI) and structural (sMRI)
neuroimaging methods. However, the process of diagnosing ASD with fMRI and sMRI
for specialists is often laborious and time-consuming; therefore, several
computer-aided design systems (CADS) based on artificial intelligence (AI) have
been developed to assist the specialist physicians. Conventional machine
learning (ML) and deep learning (DL) are the most popular schemes of AI used
for diagnosing ASD. This study aims to review the automated detection of ASD
using AI. We review several CADS that have been developed using ML techniques
for the automated diagnosis of ASD using MRI modalities. There has been very
limited work on the use of DL techniques to develop automated diagnostic models
for ASD. A summary of the studies developed using DL is provided in the
appendix. Then, the challenges encountered during the automated diagnosis of
ASD using MRI and AI techniques are described in detail. Additionally, a
graphical comparison of studies using ML and DL to diagnose ASD automatically
is discussed. We conclude by suggesting future approaches to detecting ASD
using AI techniques and MRI neuroimaging.
Resting-State Functional MRI Adaptation with Attention Graph Convolution Network for Brain Disorder Identification
Multi-site resting-state functional magnetic resonance imaging (rs-fMRI) data can facilitate learning-based approaches to train reliable models on more data. However, significant data heterogeneity between imaging sites, caused by different scanners or protocols, can negatively impact the generalization ability of learned models. In addition, previous studies have shown that graph convolution neural networks (GCNs) are effective in mining fMRI biomarkers. However, they generally ignore the potentially different contributions of brain regions-of-interest (ROIs) to automated disease diagnosis/prognosis. In this work, we propose a multi-site rs-fMRI adaptation framework with attention GCN (A2GCN) for brain disorder identification. Specifically, the proposed A2GCN consists of three major components: (1) a node representation learning module based on GCN to extract rs-fMRI features from functional connectivity networks, (2) a node attention mechanism module to capture the contributions of ROIs, and (3) a domain adaptation module to alleviate the differences in data distribution between sites through the constraint of mean absolute error and covariance. The A2GCN not only reduces data heterogeneity across sites, but also improves the interpretability of the learning algorithm by exploring important ROIs. Experimental results on the public ABIDE database demonstrate that our method achieves remarkable performance in fMRI-based recognition of autism spectrum disorders.
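The domain adaptation term in component (3) can be sketched as a mean-plus-covariance alignment penalty. This is only an illustration under stated assumptions, not the authors' exact A2GCN loss: the two sites' feature matrices below are synthetic, with an artificial shift simulating scanner differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned embeddings from two imaging sites: 100 samples each,
# 8 feature dimensions, with a simulated site shift on the target site.
src = rng.standard_normal((100, 8))
tgt = rng.standard_normal((100, 8)) + 0.5

# Mean-absolute-error term: distance between the per-site feature means.
mae_loss = np.abs(src.mean(axis=0) - tgt.mean(axis=0)).mean()

# Covariance term (CORAL-style): Frobenius distance between the two
# sites' feature covariance matrices.
cov_loss = np.linalg.norm(np.cov(src, rowvar=False)
                          - np.cov(tgt, rowvar=False), ord="fro")

# Alignment penalty that would be added to the classification loss during
# training, pushing the two sites' feature distributions together.
da_loss = mae_loss + cov_loss
```

Minimizing such a penalty jointly with the diagnostic loss is what lets a single model generalize across heterogeneous acquisition sites.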
Learning to Fuse Multiple Brain Functional Networks for Automated Autism Identification
Functional connectivity network (FCN) has become a popular tool to identify potential biomarkers for brain dysfunction, such as autism spectrum disorder (ASD). Due to its importance, researchers have proposed many methods to estimate FCNs from resting-state functional MRI (rs-fMRI) data. However, the existing FCN estimation methods usually only capture a single relationship between brain regions of interest (ROIs), e.g., linear correlation, nonlinear correlation, or higher-order correlation, thus failing to model the complex interaction among ROIs in the brain. Additionally, such traditional methods estimate FCNs in an unsupervised way, and the estimation process is independent of the downstream tasks, which makes it difficult to guarantee the optimal performance for ASD identification. To address these issues, in this paper, we propose a multi-FCN fusion framework for rs-fMRI-based ASD classification. Specifically, for each subject, we first estimate multiple FCNs using different methods to encode rich interactions among ROIs from different perspectives. Then, we use the label information (ASD vs. healthy control (HC)) to learn a set of fusion weights for measuring the importance/discrimination of those estimated FCNs. Finally, we apply the adaptively weighted fused FCN on the ABIDE dataset to identify subjects with ASD from HCs. The proposed FCN fusion framework is straightforward to implement and can significantly improve diagnostic accuracy compared to traditional and state-of-the-art methods
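A minimal numpy sketch of the weighted-fusion step described above. The three FCN estimates are random symmetric matrices standing in for, e.g., Pearson, partial-correlation, and higher-order networks, and the fixed scores stand in for weights that the paper's framework would learn from ASD/HC labels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rois = 10

# Three hypothetical FCN estimates for one subject, computed by different
# estimation methods (random placeholders here), kept symmetric.
fcns = [rng.standard_normal((n_rois, n_rois)) for _ in range(3)]
fcns = [(m + m.T) / 2 for m in fcns]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Fusion scores would be learned from the label information (ASD vs. HC);
# fixed illustrative values are used here, normalized to sum to one.
raw_scores = np.array([0.8, 1.5, 0.2])
w = softmax(raw_scores)

# Adaptively weighted fused FCN passed to the downstream classifier.
fused_fcn = sum(wi * m for wi, m in zip(w, fcns))
```

Because the weights are supervised by the classification task, an informative estimation method (high score) dominates the fusion while uninformative ones are down-weighted.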
Application of Deep Learning to Brain Connectivity Classification in Large MRI Datasets
The use of machine learning for whole-brain classification of magnetic resonance imaging (MRI) data is of clear interest, both for understanding phenotypic differences in brain structure and function and for diagnostic applications. Developments of deep learning models in the past decade have revolutionized photographic image and speech recognition, bringing promise to do the same to other fields of science. However, there are many practical and theoretical challenges in the translation of such methods to the unique context of MRIs of the brain. This thesis presents a theoretical underpinning for whole-brain classification of extremely large datasets of multi-site MRIs, including machine learning model architecture, dataset curation methods, machine learning visualization methods, encoding of MRI data, and feature extraction. To replicate large sample sizes typically applied to deep learning models, a dataset of over 50,000 functional and structural MRIs was amassed from nine different databases, and the undertaken analyses were conducted on three covariates commonly found across these collections: sex, resting state/task, and autism spectrum disorder. I find that deep learning is not only a method that has promise for clinical application in the future, but also a powerful statistical tool for analyzing complex, nonlinear relationships in brain data where conventional statistics may fail. However, results are also dependent on factors such as dataset imbalances, confounding factors such as motion and head size, selected methods of encoding MRI data, variability of machine learning models and selected methods of visualizing the machine learning results. 
In this thesis, I present the following methodological innovations: (1) a method of balancing datasets as a means of regressing out measurable confounding factors; (2) a means of removing spatial biases from deep learning visualization methods; (3) methods of encoding functional and structural datasets as connectivity matrices; (4) the use of ensemble models and convolutional neural network architectures to improve classification accuracy and consistency; (5) adaptation of deep learning visualization methods to study brain connections utilized in the classification process. Additionally, I discuss interpretations, limitations, and future directions of this research.
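Innovation (1), balancing a dataset to regress out a measurable confound, can be sketched as cell-wise subsampling. The cohort below is synthetic and the thesis's exact balancing procedure may differ; the idea is simply to equalize every (label, confound) cell so the confound carries no information about the label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cohort: a binary label (e.g. ASD vs. control) and a binary confound
# (e.g. sex), with the confound unevenly distributed across label groups.
n = 1000
label = rng.integers(0, 2, n)
confound = (rng.random(n) < np.where(label == 1, 0.7, 0.3)).astype(int)

# Balance by subsampling every (label, confound) cell down to the size of
# the smallest cell, removing the label-confound association.
cells = [np.flatnonzero((label == l) & (confound == c))
         for l in (0, 1) for c in (0, 1)]
k = min(len(ix) for ix in cells)
keep = np.concatenate([rng.choice(ix, k, replace=False) for ix in cells])

bal_label, bal_conf = label[keep], confound[keep]
```

After balancing, the confound rate is identical (exactly 0.5 here) in both label groups, so a classifier trained on the subset cannot exploit the confound as a shortcut.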
Computational approaches to Explainable Artificial Intelligence: Advances in theory, applications and trends
Funded for open access publication: Universidad de Granada / CBUA. [Abstract]: Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances from the last few years in Artificial Intelligence (AI) and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed and discussed. In this way, we summarize the state of the art in AI methods, models and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications. The work reported here has been partially funded by many public and private bodies: by MCIN/AEI/10.13039/501100011033 and FEDER "Una manera de hacer Europa" under the RTI2018-098913-B100 project, by the Consejeria de Economia, Innovacion, Ciencia y Empleo (Junta de Andalucia) and FEDER under the CV20-45250, A-TIC-080-UGR18, B-TIC-586-UGR20 and P20-00525 projects, and by the Ministerio de Universidades under the FPU18/04902 grant given to C. Jimenez-Mesa, the Margarita Salas grant to J.E. Arco, and the Juan de la Cierva grant to D. Castillo-Barnes.
This work was supported by projects PGC2018-098813-B-C32 & RTI2018-098913-B100 (Spanish "Ministerio de Ciencia, Innovación y Universidades"), P18-RT-1624, UMA20-FEDERJA-086, CV20-45250, A-TIC-080-UGR18 and P20-00525 (Consejería de Economía y Conocimiento, Junta de Andalucía) and by European Regional Development Funds (ERDF). M.A. Formoso's work was supported by Grant PRE2019-087350 funded by MCIN/AEI/10.13039/501100011033 and "ESF Investing in your future". The work of J.E. Arco was supported by the Ministerio de Universidades, Gobierno de España, through the "Margarita Salas" grant.
The work reported here has been partially funded by Grant PID2020-115220RB-C22, funded by MCIN/AEI/10.13039/501100011033 and, as appropriate, by "ERDF A way of making Europe", by the "European Union" or by the "European Union NextGenerationEU/PRTR".
The work of Paulo Novais is financed by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia, within project DSAIPA/AI/0099/2019.
Ramiro Varela was supported by the Spanish State Agency for Research (AEI) grant PID2019-106263RB-I00.
José Santos was supported by the Xunta de Galicia and the European Union (European Regional Development Fund - Galicia 2014-2020 Program), with grants CITIC (ED431G 2019/01) and GPC ED431B 2022/33, and by the Spanish Ministry of Science and Innovation (project PID2020-116201GB-I00). The work reported here has been partially funded by Project Fondecyt 1201572 (ANID).
In [247], the project received funding from grant RTI2018-098969-B-100 of the Spanish Ministerio de Ciencia, Innovación y Universidades and grant PROMETEO/2019/119 of the Generalitat Valenciana (Spain). In [248], the research work was partially supported by the National Science Fund of Bulgaria (scientific project "Digital Accessibility for People with Special Needs: Methodology, Conceptual Models and Innovative Ecosystems", Grant Number KP-06-N42/4, 08.12.2020), by the EC for project CybSPEED (777720, H2020-MSCA-RISE-2017), and by the OP Science and Education for Smart Growth (2014-2020) for the project Competence Center "Intelligent mechatronic, eco- and energy saving systems and technologies", BG05M2OP001-1.002-0023.
The work reported here has been partially funded by the support of MICIN project PID2020-116346GB-I00.
The work reported here has been partially funded by many public and private bodies: by MCIN/AEI/10.13039/501100011033 and "ERDF A way to make Europe" under the PID2020-115220RB-C21 and EQC2019-006063-P projects; by MCIN/AEI/10.13039/501100011033 and "ESF Investing in your future" under the FPU16/03740 grant; by the CIBERSAM of the Instituto de Salud Carlos III; and by MinCiencias project 1222-852-69927, contract 495-2020.
The work is partially supported by the Autonomous Government of Andalusia (Spain) under project UMA18-FEDERJA-084, project name Detection of anomalous behavior agents by DL in low-cost video surveillance intelligent systems. Authors gratefully acknowledge the support of NVIDIA Corporation with the donation of a RTX A6000 48 Gb.
This work was conducted in the context of the Horizon Europe project PRE-ACT, and it has received funding through the European Commission Horizon Europe Program (Grant Agreement number: 101057746). In addition, this work was supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number 22 00058.
S.B. Cho was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei University)).