
    Deep Neural Networks for Anatomical Brain Segmentation

    We present a novel approach to automatically segment magnetic resonance (MR) images of the human brain into anatomical regions. Our methodology is based on a deep artificial neural network that assigns each voxel in an MR image of the brain to its corresponding anatomical region. The inputs of the network capture information at different scales around the voxel of interest: 3D and orthogonal 2D intensity patches capture the local spatial context, while large, compressed 2D orthogonal patches and distances to the regional centroids enforce global spatial consistency. Unlike commonly used segmentation methods, our technique does not require any non-linear registration of the MR images. To benchmark our model, we used the dataset provided for the MICCAI 2012 challenge on multi-atlas labelling, which consists of 35 manually segmented MR images of the brain. We obtained competitive results (mean Dice coefficient 0.725, error rate 0.163), showing the potential of our approach. To our knowledge, our technique is the first to tackle anatomical segmentation of the whole brain using deep neural networks.
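    The abstract describes a per-voxel classifier fed by several inputs at different scales. Below is a minimal PyTorch sketch of such a multi-scale, multi-input voxel classifier; the patch sizes, layer widths and the number of anatomical regions (134) are illustrative assumptions, not the values or architecture used in the paper.

```python
# Hedged sketch: a multi-scale, multi-input voxel classifier in PyTorch.
# All sizes below are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class MultiScaleVoxelNet(nn.Module):
    def __init__(self, n_regions=134, n_centroid_feats=134):
        super().__init__()
        # Local context: small 3D intensity patch around the voxel.
        self.local3d = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Local context: three orthogonal 2D patches stacked as channels.
        self.local2d = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Global context: large but compressed orthogonal 2D patches.
        self.global2d = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Global context: distances from the voxel to each regional centroid.
        self.centroid = nn.Sequential(nn.Linear(n_centroid_feats, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(32 + 32 + 16 + 32, 128), nn.ReLU(),
            nn.Linear(128, n_regions),   # one score per anatomical region
        )

    def forward(self, patch3d, patch2d, patch2d_big, centroid_dists):
        feats = torch.cat([
            self.local3d(patch3d),
            self.local2d(patch2d),
            self.global2d(patch2d_big),
            self.centroid(centroid_dists),
        ], dim=1)
        return self.head(feats)

# Example: classify a batch of 8 voxels into one of 134 regions.
net = MultiScaleVoxelNet()
logits = net(torch.randn(8, 1, 13, 13, 13),   # 3D intensity patch
             torch.randn(8, 3, 29, 29),       # orthogonal 2D patches
             torch.randn(8, 3, 29, 29),       # compressed large 2D patches
             torch.randn(8, 134))             # centroid distances
print(logits.shape)  # torch.Size([8, 134])
```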

    Multi-Kernel Capsule Network for Schizophrenia Identification

    Schizophrenia seriously affects quality of life. To date, both simple (e.g., linear discriminant analysis) and complex (e.g., deep neural network) machine learning methods have been used to identify schizophrenia from functional connectivity features. The simple methods require two separate steps (feature extraction and classification), which precludes jointly tuning the feature extraction and the classifier. The complex methods integrate the two steps and can be tuned end to end for optimal performance, but they require much larger amounts of training data. To overcome these drawbacks, we proposed a multi-kernel capsule network (MKCapsnet) designed around the brain's anatomical structure: kernel sizes were matched to the partition sizes of the anatomical parcellation so that inter-regional connectivity is captured at varying scales. Inspired by the widely used dropout strategy in deep learning, we also developed capsule dropout in the capsule layer to prevent overfitting. The comparison results showed that the proposed method outperformed state-of-the-art methods. In addition, we compared performance under different parameter settings and illustrated the routing process to reveal the characteristics of the proposed method. MKCapsnet is promising for schizophrenia identification. To our knowledge, this is the first study to apply a capsule neural network to functional connectivity derived from magnetic resonance imaging (MRI), and the proposed multi-kernel capsule structure, built with the brain's anatomical parcellation in mind, could offer a new way to reveal brain mechanisms. We also provide useful information on parameter settings, which should be informative for further studies applying capsule networks to other neurophysiological signal classification tasks.
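    A minimal sketch of the two ingredients named in this abstract: convolution branches with several kernel sizes applied to a functional-connectivity matrix, and a capsule dropout that zeroes whole capsule vectors rather than individual units. The kernel sizes, capsule dimension, region count, and the omission of dynamic routing are illustrative simplifications, not details of MKCapsnet itself.

```python
# Hedged sketch of multi-kernel primary capsules plus capsule dropout.
# All hyperparameters are assumptions; dynamic routing is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Standard capsule squashing non-linearity."""
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

def capsule_dropout(caps, p, training):
    """Zero out entire capsule vectors with probability p (assumed form)."""
    if not training or p == 0.0:
        return caps
    keep = (torch.rand(caps.shape[:-1], device=caps.device) > p).float()
    return caps * keep.unsqueeze(-1) / (1.0 - p)

class MultiKernelCapsuleEncoder(nn.Module):
    def __init__(self, caps_dim=8, kernel_sizes=(3, 5, 7), p_drop=0.2):
        super().__init__()
        self.p_drop = p_drop
        self.caps_dim = caps_dim
        # One conv branch per kernel size; in the actual model the sizes
        # would be matched to the partition sizes of the brain atlas.
        self.branches = nn.ModuleList([
            nn.Conv2d(1, 4 * caps_dim, kernel_size=k, padding=k // 2)
            for k in kernel_sizes
        ])

    def forward(self, fc_matrix):                  # (batch, 1, n_rois, n_rois)
        all_caps = []
        for conv in self.branches:
            x = F.relu(conv(fc_matrix))            # (batch, 4*D, H, W)
            b, c, h, w = x.shape
            caps = x.view(b, c // self.caps_dim, self.caps_dim, h * w)
            caps = caps.permute(0, 1, 3, 2).reshape(b, -1, self.caps_dim)
            all_caps.append(squash(caps))
        caps = torch.cat(all_caps, dim=1)          # capsules from every kernel
        return capsule_dropout(caps, self.p_drop, self.training)

# Example: a batch of 4 connectivity matrices over 90 regions.
enc = MultiKernelCapsuleEncoder()
caps = enc(torch.randn(4, 1, 90, 90))
print(caps.shape)  # torch.Size([4, 97200, 8])
```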

    Diagnosis of Autism Spectrum Disorders Using Temporally Distinct Resting-State Functional Connectivity Networks

    Resting-state functional magnetic resonance imaging (R-fMRI) is dynamic in nature: neural activity constantly changes over time and is dominated by repeating brief activations and deactivations involving many brain regions. Each region participates in multiple brain functions and is part of various functionally distinct but spatially overlapping networks. Functional connectivity computed as correlations over the entire time series overlooks inter-region interactions that occur repeatedly and dynamically in time, limiting its usefulness for disease diagnosis.
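    For contrast with the static functional connectivity criticised above, the sketch below computes both a whole-series correlation matrix and a simple sliding-window variant that captures time-varying interactions. The windowing scheme is a generic illustration, not necessarily the approach taken in this paper, and the data are synthetic.

```python
# Hedged sketch: static vs. sliding-window functional connectivity.
import numpy as np

def static_fc(ts):
    """ts: (n_timepoints, n_regions) -> (n_regions, n_regions) correlations."""
    return np.corrcoef(ts, rowvar=False)

def sliding_window_fc(ts, window=30, step=5):
    """Stack of windowed correlation matrices over time (assumed scheme)."""
    n_t, _ = ts.shape
    mats = [np.corrcoef(ts[start:start + window], rowvar=False)
            for start in range(0, n_t - window + 1, step)]
    return np.stack(mats)           # (n_windows, n_regions, n_regions)

# Example with synthetic R-fMRI-like data: 200 timepoints, 90 regions.
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 90))
print(static_fc(ts).shape)          # (90, 90)
print(sliding_window_fc(ts).shape)  # (35, 90, 90)
```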

    Multi-Kernel Learning with Dartel Improves Combined MRI-PET Classification of Alzheimer’s Disease in AIBL Data: Group and Individual Analyses

    Magnetic resonance imaging (MRI) and positron emission tomography (PET) are neuroimaging modalities typically used for evaluating brain changes in Alzheimer’s disease (AD). Because they are complementary, their combination can provide more accurate AD diagnosis or prognosis. In this work, we applied a multi-modal imaging machine-learning framework to subject-matched gray matter MRI and Pittsburgh compound B (PiB)-PET data from 58 AD, 108 mild cognitive impairment (MCI) and 120 healthy elderly (HE) subjects in the Australian Imaging, Biomarkers and Lifestyle (AIBL) dataset to enhance AD classification and prediction of diagnosis. Specifically, we combined the Dartel algorithm, to improve anatomical registration, with a multi-kernel learning (MKL) technique, yielding on average >95% accuracy for three binary classification problems: AD-vs.-HE, MCI-vs.-HE and AD-vs.-MCI, a considerable improvement over single-modality approaches. Consistent with t-contrasts, the MKL weight maps highlighted brain regions known to be associated with AD, i.e., the (para)hippocampus, posterior cingulate cortex and bilateral temporal gyrus. Importantly, MKL regression analysis provided excellent prediction of diagnosis for individuals, with r2 = 0.86. In addition, we found significant correlations between the MKL classification outputs and delayed memory recall scores, with r2 = 0.62 (p < 0.01). Interestingly, outliers in the regression model for diagnosis were mainly converter samples, with a higher likelihood of converting to the adjacent diagnostic category. Overall, our work demonstrates the successful application of MKL with Dartel to combined neuromarkers from different neuroimaging modalities in the AIBL data, lending further support to machine learning approaches for improving the diagnosis and risk prediction of AD.
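    A minimal sketch of the multi-modal kernel idea underlying this work: one base kernel per modality (gray-matter MRI and PiB-PET features), combined into a single kernel for an SVM. In true multi-kernel learning the modality weights are optimised jointly with the classifier; here they are fixed constants and the data are synthetic, purely for illustration.

```python
# Hedged sketch: combining per-modality kernels for a precomputed-kernel SVM.
import numpy as np
from sklearn.metrics.pairwise import linear_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects = 120
X_mri = rng.standard_normal((n_subjects, 500))   # e.g. gray-matter features
X_pet = rng.standard_normal((n_subjects, 500))   # e.g. PiB-PET features
y = rng.integers(0, 2, n_subjects)               # AD vs. HE labels (synthetic)

# One base kernel per modality.
K_mri = linear_kernel(X_mri)
K_pet = linear_kernel(X_pet)

# Fixed illustrative kernel weights; MKL would learn these from the data.
w_mri, w_pet = 0.6, 0.4
K_combined = w_mri * K_mri + w_pet * K_pet

clf = SVC(kernel="precomputed").fit(K_combined, y)
print("training accuracy:", clf.score(K_combined, y))
```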

    White matter differences between healthy young ApoE4 carriers and non-carriers identified with tractography and support vector machines.

    The apolipoprotein E4 (ApoE4) allele is an established risk factor for Alzheimer's disease (AD). Previous work has shown that this allele is associated with functional (fMRI) changes as well as structural grey matter (GM) changes in healthy young, middle-aged and older subjects. Here, we assess the diffusion characteristics and the white matter (WM) tracts of healthy young (20-38 years) ApoE4 carriers and non-carriers. No significant differences in diffusion indices were found between young carriers (ApoE4+) and non-carriers (ApoE4-). There were also no significant differences between the groups in terms of normalised GM or WM volume. A feature selection algorithm (ReliefF) was used to select the most salient voxels from the diffusion data for subsequent classification with support vector machines (SVMs). SVMs were capable of classifying the ApoE4 carrier and non-carrier groups with an extremely high level of accuracy. The top 500 voxels selected by ReliefF were then used as seeds for tractography, which identified a WM network that included regions of the parietal lobe, the cingulum bundle and the dorsolateral frontal lobe. There was a non-significant decrease in the volume of this WM network in the ApoE4 carrier group. Our results indicate that there are subtle WM differences between healthy young ApoE4 carriers and non-carriers, and that the identified WM network may be particularly vulnerable to further degeneration in ApoE4 carriers as they enter middle and old age.
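    A hedged sketch of the analysis pipeline described here: ReliefF feature selection over voxel-wise diffusion features followed by SVM classification of carriers versus non-carriers. The skrebate ReliefF implementation, the choice of 500 features, the neighbour count and the synthetic data are assumptions for illustration only; the paper does not specify an implementation.

```python
# Hedged sketch: ReliefF voxel selection + linear SVM classification.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from skrebate import ReliefF   # pip install skrebate (assumed implementation)

rng = np.random.default_rng(0)
n_subjects, n_voxels = 60, 5000
X = rng.standard_normal((n_subjects, n_voxels))   # e.g. FA values per voxel
y = rng.integers(0, 2, n_subjects)                # ApoE4+ vs. ApoE4- labels

pipe = Pipeline([
    # Rank voxels by ReliefF relevance and keep the top 500.
    ("relieff", ReliefF(n_features_to_select=500, n_neighbors=10)),
    ("svm", SVC(kernel="linear")),
])

# Cross-validated accuracy; with real data the selected voxels could also
# be mapped back to the brain and used as tractography seeds.
scores = cross_val_score(pipe, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```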

    Deep Learning in Medical Image Analysis

    Computer-assisted analysis for better interpretation of images has been a longstanding issue in the medical imaging field. On the image-understanding front, recent advances in machine learning, especially deep learning, have made a big leap toward helping identify, classify, and quantify patterns in medical images. Specifically, exploiting hierarchical feature representations learned solely from data, instead of handcrafted features designed largely from domain-specific knowledge, lies at the core of these advances. In this way, deep learning is rapidly proving to be the state-of-the-art foundation, achieving enhanced performance in various medical applications. In this article, we introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, and computer-aided disease diagnosis or prognosis, among other applications. We conclude by raising research issues and suggesting future directions for further improvements.