
    Role of deep learning in infant brain MRI analysis

    Deep learning algorithms, and in particular convolutional networks, have shown tremendous success in medical image analysis applications, though relatively few methods have been applied to infant MRI data due to numerous inherent challenges such as inhomogeneous tissue appearance across the image, considerable image intensity variability across the first year of life, and a low signal-to-noise setting. This paper presents methods addressing these challenges in two selected applications, specifically infant brain tissue segmentation at the isointense stage and presymptomatic disease prediction in neurodevelopmental disorders. Corresponding methods are reviewed and compared, and open issues are identified, namely small dataset sizes, class imbalance, and the lack of interpretability of the resulting deep learning solutions. We discuss how existing solutions can be adapted to approach these issues, as well as how generative models appear to be a particularly strong contender to address them.
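
    As a concrete illustration of the class imbalance issue raised above, the sketch below shows one common mitigation in tissue segmentation, a class-weighted soft Dice loss. This is a generic PyTorch example, not the specific method reviewed in the paper; the tensor shapes, label encoding, and weights are assumptions.

```python
import torch

def soft_dice_loss(logits, target, class_weights, eps=1e-6):
    """Class-weighted soft Dice loss for multi-class segmentation.

    logits: (batch, classes, D, H, W) raw network outputs
    target: (batch, D, H, W) integer tissue labels (e.g. 0=background, 1=CSF, 2=GM, 3=WM)
    class_weights: (classes,) tensor, larger weight for under-represented tissues
    """
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = torch.nn.functional.one_hot(target, num_classes)  # (B, D, H, W, C)
    one_hot = one_hot.permute(0, 4, 1, 2, 3).float()            # (B, C, D, H, W)

    dims = (0, 2, 3, 4)  # sum over batch and spatial axes, keep the class axis
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)

    # Weighted average of (1 - Dice) across classes
    w = class_weights / class_weights.sum()
    return (w * (1.0 - dice_per_class)).sum()

# Example usage with random data (shapes are illustrative only)
logits = torch.randn(2, 4, 32, 32, 32)
labels = torch.randint(0, 4, (2, 32, 32, 32))
weights = torch.tensor([0.1, 1.0, 1.0, 1.0])  # down-weight the dominant background class
print(soft_dice_loss(logits, labels, weights).item())
```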

    Benchmark on automatic 6-month-old infant brain segmentation algorithms: the iSeg-2017 challenge

    Accurate segmentation of infant brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is an indispensable foundation for the early study of brain growth patterns and morphological changes in neurodevelopmental disorders. Nevertheless, in the isointense phase (approximately 6-9 months of age), due to the ongoing myelination and maturation process, WM and GM exhibit similar intensity levels in both T1-weighted (T1w) and T2-weighted (T2w) MR images, making tissue segmentation very challenging. Although many efforts have been devoted to brain segmentation, only a few studies have focused on the segmentation of 6-month infant brain images. With the aim of boosting methodological development in the community, the iSeg-2017 challenge (http://iseg2017.web.unc.edu) provides a set of 6-month infant subjects with manual labels for training and testing the participating methods. Among the 21 automatic segmentation methods participating in iSeg-2017, we review the 8 top-ranked teams, in terms of Dice ratio, modified Hausdorff distance, and average surface distance, and introduce their pipelines, implementations, and source code. We further discuss limitations and possible future directions. We hope the dataset in iSeg-2017 and this review article could provide insights into methodological development for the community.
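
    For readers unfamiliar with the evaluation metrics named above, the sketch below computes the Dice ratio together with a symmetric surface distance and a percentile Hausdorff-style distance for a pair of binary label volumes. It is a minimal NumPy/SciPy illustration, not the challenge's official evaluation code, and the surface extraction via binary erosion and the 95th-percentile choice are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def dice_ratio(seg, gt):
    """Dice overlap between two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def surface_points(mask, spacing=(1.0, 1.0, 1.0)):
    """Coordinates (in mm) of the mask's one-voxel boundary layer."""
    m = mask.astype(bool)
    border = m & ~binary_erosion(m)
    return np.argwhere(border) * np.asarray(spacing)

def surface_distances(seg, gt, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance and 95th-percentile Hausdorff distance."""
    s, g = surface_points(seg, spacing), surface_points(gt, spacing)
    d = cdist(s, g)                       # pairwise distances between boundary points
    d_sg, d_gs = d.min(axis=1), d.min(axis=0)
    assd = (d_sg.mean() + d_gs.mean()) / 2.0
    hd95 = max(np.percentile(d_sg, 95), np.percentile(d_gs, 95))
    return assd, hd95

# Toy example: two overlapping cubes in a small volume
a = np.zeros((32, 32, 32), bool); a[8:20, 8:20, 8:20] = True
b = np.zeros((32, 32, 32), bool); b[10:22, 10:22, 10:22] = True
print(dice_ratio(a, b), surface_distances(a, b))
```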

    A COMPUTATIONAL FRAMEWORK FOR NEONATAL BRAIN MRI STRUCTURE SEGMENTATION AND CLASSIFICATION

    Deep Learning is increasingly being used in both supervised and unsupervised learning to derive complex patterns from data. However, the successful implementation of deep learning using medical imaging requires careful consideration of the quality and availability of data. Infants diagnosed with congenital heart disease (CHD) are at a higher risk for neurodevelopmental impairment. Many of these deficits may be attenuated by early detection and intervention. However, we currently lack effective diagnostic tools for the reliable detection of these disorders in the neonatal period. We believe that the structural correlates of the cognitive deficits associated with developmental abnormalities can be measured within the first few months of life. Based on this assumption, we hypothesize that an atlas-registration-based structural segmentation pipeline can sufficiently reduce the search space of neonatal structural brain MRI to make convolutional neural networks viable for dysplasia classification. Secondly, we hypothesize that convolutional neural networks can successfully identify morphological biomarkers capable of detecting structurally abnormal brain substructures. In this study, we develop a computational framework for the automated classification of dysplastic substructures from neonatal MRI. We validate our implementation on a dataset of neonates born with CHD, as this is a population vulnerable to structural dysmaturation. We chose the cerebellum as the initial test substructure because of its relatively simple structure and known vulnerability to structural dysplasia in infants born with CHD. We then apply the same method to the hippocampus, a more challenging substructure due to its complex morphological properties. We attempt to overcome the limited availability of clinical data in neonatal populations by first extracting each brain substructure of interest and individually registering it into a standard space. This greatly reduces the search space required to learn the subtle abnormalities associated with a given pathology, making it feasible to implement a 3D CNN as the classification algorithm. We achieved excellent classification accuracy in detecting dysplastic cerebella, and demonstrate a viable computational framework for search space reduction using limited clinical datasets. All methods developed in this work are designed to be extensible, reproducible, and generalizable diagnostic tools for future neuroimaging problems.
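
    To make the final step of such a pipeline concrete, the sketch below shows what a compact 3D CNN binary classifier over a registered, cropped substructure volume might look like in PyTorch. The layer sizes, input resolution, and class count are illustrative assumptions, not the framework's actual architecture.

```python
import torch
import torch.nn as nn

class SubstructureClassifier(nn.Module):
    """Small 3D CNN for normal-vs-dysplastic classification of a cropped,
    atlas-registered substructure volume (e.g. cerebellum or hippocampus)."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),                      # 64^3 -> 32^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),                      # 32^3 -> 16^3
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),              # global pooling keeps the parameter count low
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                         # x: (batch, 1, D, H, W)
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Illustrative forward pass on a 64^3 crop of the registered substructure
model = SubstructureClassifier()
volume = torch.randn(4, 1, 64, 64, 64)
logits = model(volume)                            # (4, 2) class scores
print(logits.shape)
```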

    Developing Deep-Learning Methods for Diagnosis and Prognosis of Pediatric Progressive Diseases Using Modern Imaging Techniques

    Purpose and Rationale. Central nervous system manifestations form a significant burden of disease in young children. There have been efforts to correlate the neurological disease state in tuberous sclerosis complex (TSC) with imaging findings, and such correlation is a standard part of patient care. However, such analysis of neuroimaging is time- and labor-intensive. Automated approaches to these tasks are needed to improve speed, accuracy, and availability. Automated medical image analysis tools based on 3D/2D deep learning algorithms can help improve the quality and consistency of image diagnosis and interpretation for cognitive disorders in infants. We propose to automate neuroimaging analysis with artificial intelligence algorithms. This novel approach can be used to improve the accuracy of TSC diagnosis and treatment. Deep learning (DL) is among the most successful types of machine learning and utilizes deep artificial neural networks (ANNs), which can determine efficient feature representations of input data. DL algorithms have created new opportunities in medical image analysis. Applications of DL, specifically convolutional neural networks (CNNs), in medical image analysis cover a broad spectrum of tasks, including risk prediction/estimation with machine learning systems trained on classification tasks. Study population. We reviewed an NIMH Data Archive (NDA) dataset that was collected in 2010. We also reviewed imaging data from patients and normal cases from birth to 8 years of age acquired at Le Bonheur Children’s Hospital from 2014 to 2020. The University of Tennessee Health Science Center Institutional Review Board (IRB) approved this study. Research Design and Study Procedures. Following IRB approval, this thesis: 1) presents the first 2D/3D fusion CNN models to estimate the age of infants from birth to 3 years of age; and 2) presents the first work to look at the whole-brain network to automatically distinguish TSC brain structural pathology from normal cases using a 3D CNN model. Conclusions. The study findings indicate that deep neural networks can automatically tackle the early prediction of cognitive and neurodevelopmental disorders and structural brain pathology from MRI in children with TSC. It is the hope of the author that analysis of MRI images via methods of deep learning will have a positive impact on healthcare for infants and children at risk of rare diseases.
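
    As a rough illustration of what a 2D/3D fusion model for infant age estimation could look like, the hedged PyTorch sketch below fuses features from a 2D slice branch and a 3D volume branch before a regression head. The branch architectures and the concatenation-based fusion are assumptions for illustration, not the thesis's actual models.

```python
import torch
import torch.nn as nn

def conv_branch(conv, norm, pool, in_ch):
    """Tiny two-stage convolutional feature extractor (same pattern reused for 2D and 3D)."""
    return nn.Sequential(
        conv(in_ch, 16, 3, padding=1), norm(16), nn.ReLU(), pool(2),
        conv(16, 32, 3, padding=1), norm(32), nn.ReLU(),
    )

class Fusion2D3DAgeRegressor(nn.Module):
    """Fuses 2D (single-slice) and 3D (whole-volume) features to predict age in months."""

    def __init__(self):
        super().__init__()
        self.branch2d = conv_branch(nn.Conv2d, nn.BatchNorm2d, nn.MaxPool2d, in_ch=1)
        self.branch3d = conv_branch(nn.Conv3d, nn.BatchNorm3d, nn.MaxPool3d, in_ch=1)
        self.pool2d = nn.AdaptiveAvgPool2d(1)
        self.pool3d = nn.AdaptiveAvgPool3d(1)
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, slice2d, volume3d):
        f2 = self.pool2d(self.branch2d(slice2d)).flatten(1)   # (B, 32) slice features
        f3 = self.pool3d(self.branch3d(volume3d)).flatten(1)  # (B, 32) volume features
        return self.head(torch.cat([f2, f3], dim=1))          # (B, 1) predicted age

# Illustrative shapes: a 96x96 axial slice and a 64^3 volume per subject
model = Fusion2D3DAgeRegressor()
age = model(torch.randn(2, 1, 96, 96), torch.randn(2, 1, 64, 64, 64))
print(age.shape)
```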

    Non-Euclidean, convolutional learning on cortical brain surfaces

    In recent years there have been many studies indicating that multiple cortical features, extracted at each surface vertex, are promising in the detection of various neurodevelopmental and neurodegenerative diseases. However, with limited datasets, it is challenging to train stable classifiers with such high-dimensional surface data. This necessitates a feature reduction that is commonly accomplished via regional volumetric morphometry from standard brain atlases. However, current regional summaries are not specific to the age or pathology being studied, which runs the risk of losing relevant information that can be critical in the classification process. To solve this issue, this paper proposes a novel data-driven approach by extending convolutional neural networks (CNNs) for use on non-Euclidean manifolds such as cortical surfaces. The proposed network learns the most powerful features and brain regions from the extracted high-dimensional feature space, thus creating a new feature space in which the dimensionality is reduced and feature distributions are better separated. We demonstrate the usability of the proposed surface-CNN framework in an example study classifying Alzheimer's disease patients versus normal controls. The high performance in the cross-validation diagnostic results shows the potential of our proposed prediction system.
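
    To give a sense of how convolution can be generalized to a cortical surface mesh, the sketch below implements a basic graph-convolution layer (a Kipf & Welling style propagation rule with symmetric normalization) operating on per-vertex features and a vertex adjacency matrix. This is a generic illustration in plain PyTorch, not the specific surface convolution proposed in the paper; the toy mesh and feature count are assumptions.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W),
    where A is the vertex adjacency of the cortical surface mesh."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x, adj):
        # Add self-loops and symmetrically normalize the adjacency matrix
        a_hat = adj + torch.eye(adj.shape[0], device=adj.device)
        deg_inv_sqrt = a_hat.sum(dim=1).rsqrt()
        a_norm = deg_inv_sqrt[:, None] * a_hat * deg_inv_sqrt[None, :]
        return torch.relu(a_norm @ self.linear(x))

# Toy mesh: 5 vertices in a ring, 4 cortical features per vertex (e.g. thickness, area, ...)
adj = torch.zeros(5, 5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
features = torch.randn(5, 4)
layer = GraphConv(4, 8)
print(layer(features, adj).shape)   # (5, 8) per-vertex output features
```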

    Learning from Complex Neuroimaging Datasets

    Advancements in Magnetic Resonance Imaging (MRI) have allowed for the early diagnosis of neurodevelopmental disorders and neurodegenerative diseases. Neuroanatomical abnormalities in the cerebral cortex are often investigated by examining group-level differences of brain morphometric measures extracted from highly-sampled cortical surfaces. However, group-level differences do not allow for the individual-level outcome prediction critical for application to clinical practice. Despite the success of MRI-based deep learning frameworks, critical issues have been identified: (1) extracting accurate and reliable local features from the cortical surface, (2) determining a parsimonious subset of cortical features for correct disease diagnosis, (3) learning directly from a non-Euclidean high-dimensional feature space, (4) improving the robustness of multi-task multi-modal models, and (5) identifying anomalies in imbalanced and heterogeneous settings. This dissertation describes novel methodological contributions to tackle the challenges above. First, I introduce a Laplacian-based method for quantifying local Extra-Axial Cerebrospinal Fluid (EA-CSF) from structural MRI. Next, I describe a deep learning approach for combining local EA-CSF with other morphometric cortical measures for early disease detection. Then, I propose a data-driven approach for extending convolutional learning to non-Euclidean manifolds such as cortical surfaces. I also present a unified framework for robust multi-task learning from imaging and non-imaging information. Finally, I propose a semi-supervised generative approach for the detection of samples from untrained classes in imbalanced and heterogeneous developmental datasets. The proposed methodological contributions are evaluated by applying them to the early detection of Autism Spectrum Disorder (ASD) in the first year of the infant’s life. The aging human brain is also examined in the context of studying different stages of Alzheimer’s Disease (AD).
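
    As a hedged illustration of the Laplacian idea behind the first contribution, the sketch below solves Laplace's equation by Jacobi relaxation on a small 2D grid between an inner boundary (held at 0) and an outer boundary (held at 1), producing the kind of smooth potential field one can follow to associate outer-boundary locations with cortical locations and measure fluid along the way. The toy geometry and iteration count are illustrative assumptions, not the dissertation's actual EA-CSF pipeline.

```python
import numpy as np

def solve_laplace(inner, outer, n_iter=500):
    """Jacobi relaxation of Laplace's equation on a 2D grid.
    inner/outer: boolean masks held at potential 0 and 1 respectively."""
    phi = np.where(outer, 1.0, 0.0)
    free = ~(inner | outer)                         # grid points where the PDE is solved
    for _ in range(n_iter):
        neighbors = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                     np.roll(phi, 1, 1) + np.roll(phi, -1, 1)) / 4.0
        phi = np.where(free, neighbors, phi)        # keep boundary values fixed
    return phi

# Toy geometry: inner "cortical" disk and outer "dural" ring on a 64x64 grid
y, x = np.mgrid[:64, :64]
r = np.hypot(x - 32, y - 32)
inner = r < 10          # inner surface, potential 0
outer = r > 25          # outer surface, potential 1
phi = solve_laplace(inner, outer)
print(phi.min(), phi.max())   # the potential varies smoothly from 0 to 1 between the surfaces
```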