Role of deep learning in infant brain MRI analysis
Deep learning algorithms, and in particular convolutional networks, have shown tremendous success in medical image analysis applications, though relatively few methods have been applied to infant MRI data due to numerous inherent challenges such as inhomogeneous tissue appearance across the image, considerable image intensity variability across the first year of life, and low signal-to-noise ratios. This paper presents methods addressing these challenges in two selected applications, specifically infant brain tissue segmentation at the isointense stage and presymptomatic disease prediction in neurodevelopmental disorders. Corresponding methods are reviewed and compared, and open issues are identified, namely low data size restrictions, class imbalance problems, and lack of interpretability of the resulting deep learning solutions. We discuss how existing solutions can be adapted to approach these issues, and why generative models appear to be a particularly strong contender to address them.
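One standard remedy for the class-imbalance problem raised above is to reweight the loss by inverse class frequency. The sketch below uses median frequency balancing in numpy; all names are illustrative and not taken from the paper.

```python
import numpy as np

def median_frequency_weights(label_volume, num_classes):
    """Per-class loss weights via median frequency balancing:
    weight_c = median(freq) / freq_c, so rare classes get larger weights."""
    counts = np.bincount(label_volume.ravel(), minlength=num_classes).astype(float)
    freqs = counts / counts.sum()
    # Classes absent from this volume get weight 0 instead of a division by zero.
    freqs[freqs == 0] = np.nan
    weights = np.nanmedian(freqs) / freqs
    return np.nan_to_num(weights, nan=0.0)

# Toy 3-class label map dominated by background (class 0).
labels = np.array([0] * 90 + [1] * 8 + [2] * 2)
w = median_frequency_weights(labels, 3)
```

These weights would typically multiply the per-class terms of a cross-entropy or Dice loss so that the minority tissue classes contribute more to the gradient.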
A 3D Fully Convolutional Neural Network With Top-Down Attention-Guided Refinement for Accurate and Robust Automatic Segmentation of Amygdala and Its Subnuclei
Recent advances in deep learning have improved the segmentation accuracy of subcortical brain structures, which would be useful in neuroimaging studies of many neurological disorders. However, most existing deep learning based approaches in neuroimaging do not investigate the specific difficulties that exist in segmenting extremely small but important brain regions such as the subnuclei of the amygdala. To tackle this challenging task, we developed a dual-branch dilated residual 3D fully convolutional network with parallel convolutions to extract more global context and alleviate the class imbalance issue by maintaining a small receptive field that is just the size of the regions of interest (ROIs). We also conduct multi-scale feature fusion in both parallel and series to compensate for the potential information loss during convolutions, which has been shown to be important for small objects. The serial feature fusion enabled by residual connections is further enhanced by a proposed top-down attention-guided refinement unit, where the high-resolution low-level spatial details are selectively integrated to complement the high-level but coarse semantic information, enriching the final feature representations. As a result, the segmentations resulting from our method are more accurate both volumetrically and morphologically, compared with other deep learning based approaches. To the best of our knowledge, this work is the first deep learning-based approach that targets the subregions of the amygdala. We also demonstrated the feasibility of using a cycle-consistent generative adversarial network (CycleGAN) to harmonize multi-site MRI data, and show that our method generalizes well to challenging traumatic brain injury (TBI) datasets collected from multiple centers. This appears to be a promising strategy for image segmentation in multi-site studies and in the presence of increased morphological variability from significant brain pathology.
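The trade-off described above, between gathering global context and keeping the receptive field close to the ROI size, can be made concrete: for stride-1 convolutions, each layer enlarges the receptive field by (kernel_size - 1) * dilation. The kernel sizes and dilation rates below are illustrative, not the paper's actual configuration.

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolution layers.
    Each layer is (kernel_size, dilation); RF grows by (k - 1) * d per layer."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# A plain stack of four 3x3x3 convolutions vs. a dilated stack of the
# same depth: the dilated branch sees far more context per voxel.
plain = receptive_field([(3, 1)] * 4)                         # 9 voxels
dilated = receptive_field([(3, 1), (3, 2), (3, 4), (3, 8)])   # 31 voxels
```

A dual-branch design can thus pair a small-receptive-field branch matched to the ROI with a dilated branch that supplies surrounding anatomical context.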
NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications including segmentation, regression, image generation and
representation learning applications. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.
Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge
Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6
figures; update includes additional applications, updated author list and
formatting for journal submission.
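NiftyNet applications are driven by a configuration file rather than hand-written training loops. The fragment below is purely illustrative of that config-driven style; the section and key names are an assumption for this sketch and should be checked against the NiftyNet documentation rather than copied.

```ini
; Illustrative NiftyNet-style configuration (names are assumptions)
[ct]
path_to_search = ./data/ct_volumes
filename_contains = CT

[NETWORK]
name = highres3dnet
batch_size = 1

[TRAINING]
loss_type = Dice
max_iter = 10000

[SEGMENTATION]
image = ct
num_classes = 9
```

A configuration like this would then be passed to the platform's command-line entry point, so swapping networks, losses, or data sources requires editing the file rather than the code.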
Improved Segmentation of the Intracranial and Ventricular Volumes in Populations with Cerebrovascular Lesions and Atrophy Using 3D CNNs
Successful segmentation of the total intracranial vault (ICV) and ventricles is of critical importance when studying neurodegeneration through neuroimaging. We present iCVMapper and VentMapper, robust algorithms that use a convolutional neural network (CNN) to segment the ICV and ventricles from both single and multi-contrast MRI data. Our models were trained on a large dataset from two multi-site studies (N = 528 subjects for ICV, N = 501 for ventricular segmentation) consisting of older adults with varying degrees of cerebrovascular lesions and atrophy, which pose significant challenges for most segmentation approaches. The models were tested on 238 participants, including subjects with vascular cognitive impairment and high white matter hyperintensity burden. Two of the three test sets came from studies not used in the training dataset. We assessed our algorithms relative to five state-of-the-art ICV extraction methods (MONSTR, BET, Deep Extraction, FreeSurfer, DeepMedic), as well as two ventricular segmentation tools (FreeSurfer, DeepMedic). Our multi-contrast models outperformed other methods across many of the evaluation metrics, with average Dice coefficients of 0.98 and 0.96 for ICV and ventricular segmentation, respectively. Both models were also the most time-efficient, segmenting the structures orders of magnitude faster than some of the other available methods. Our networks showed increased accuracy with the use of a conditional random field (CRF) as a post-processing step. We further validated both segmentation models, highlighting their robustness to images with lower resolution and signal-to-noise ratio, compared to the tested techniques. The pipeline and models are available at https://icvmapp3r.readthedocs.io and https://ventmapp3r.readthedocs.io to enable further investigation of the roles of the ICV and ventricles in relation to normal aging and neurodegeneration in large multi-site studies.
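The Dice coefficients reported above measure the overlap of two binary masks: Dice = 2 * |intersection| / (|A| + |B|). A minimal numpy version (not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice overlap of two binary masks: 2 * |A & B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Two toy 1D "masks": 3 voxels overlap out of 4 predicted and 4 true.
p = np.array([1, 1, 1, 1, 0, 0])
t = np.array([0, 1, 1, 1, 1, 0])
score = dice_coefficient(p, t)  # 2*3 / (4+4) = 0.75
```

A Dice of 0.98 on a structure as large as the ICV therefore means the predicted and reference masks are nearly voxel-for-voxel identical.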
Complex Grey Matter Structure Segmentation in Brains via Deep Learning: Example of the Claustrum
Segmentation and parcellation of the brain have been widely performed on brain
MRI using atlas-based methods. However, segmentation of the claustrum, a thin,
sheet-like structure between the insular cortex and putamen, has not been
amenable to automated segmentation, thus limiting its investigation in larger
imaging cohorts. Recently, deep-learning based approaches have been introduced
for automated segmentation of brain structures, yielding great potential to
overcome preexisting limitations. In the following, we present a multi-view
deep-learning based approach to segment the claustrum in T1-weighted MRI scans.
We trained and evaluated the proposed method on 181 manual bilateral claustrum
annotations by an expert neuroradiologist serving as reference standard.
Cross-validation experiments yielded a median volumetric similarity, robust
Hausdorff distance, and Dice score of 93.3%, 1.41 mm, and 71.8%, respectively, which
represents equal or superior segmentation performance compared to human
intra-rater reliability. Leave-one-scanner-out evaluation showed good
transferability of the algorithm to images from unseen scanners, albeit with
slightly lower performance. Furthermore, we found that AI-based claustrum
segmentation benefits from multi-view information and requires sample sizes of
around 75 MRI scans in the training set. In conclusion, the developed algorithm
has great potential for use in independent study cohorts and for facilitating
MRI-based research of the human claustrum through automated segmentation. The
software and models of our method are made publicly available.
Comment: submitted to a journal.
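The multi-view strategy described above, running 2D models on different slice orientations and fusing their predictions, can be sketched as a simple average of per-view probability maps. This is a conceptual illustration, not the authors' code.

```python
import numpy as np

def fuse_views(prob_maps, threshold=0.5):
    """Average foreground probability maps from several views, then threshold.
    prob_maps: list of arrays of identical shape with values in [0, 1]."""
    fused = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (fused >= threshold).astype(np.uint8)

# Three toy per-view probability maps over 4 voxels (e.g. axial,
# coronal, sagittal predictions resampled to a common grid).
axial = np.array([0.9, 0.6, 0.2, 0.1])
coronal = np.array([0.8, 0.4, 0.3, 0.0])
sagittal = np.array([0.7, 0.7, 0.1, 0.2])
mask = fuse_views([axial, coronal, sagittal])  # [1, 1, 0, 0]
```

For a thin, sheet-like structure such as the claustrum, averaging across orientations damps the slice-direction errors any single 2D view makes on its own.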
ICAM-reg: Interpretable Classification and Regression with Feature Attribution for Mapping Neurological Phenotypes in Individual Scans
An important goal of medical imaging is to be able to precisely detect
patterns of disease specific to individual scans; however, this is challenged
in brain imaging by the degree of heterogeneity of shape and appearance.
Traditional methods, based on image registration to a global template,
historically fail to detect variable features of disease, as they utilise
population-based analyses, suited primarily to studying group-average effects.
In this paper we therefore take advantage of recent developments in generative
deep learning to develop a method for simultaneous classification, or
regression, and feature attribution (FA). Specifically, we explore the use of a
VAE-GAN translation network called ICAM, to explicitly disentangle class
relevant features from background confounds for improved interpretability and
regression of neurological phenotypes. We validate our method on the tasks of
Mini-Mental State Examination (MMSE) cognitive test score prediction for the
Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort, as well as brain age
prediction, for both neurodevelopment and neurodegeneration, using the
developing Human Connectome Project (dHCP) and UK Biobank datasets. We show
that the generated FA maps can be used to explain outlier predictions and
demonstrate that the inclusion of a regression module improves the
disentanglement of the latent space. Our code is freely available on GitHub:
https://github.com/CherBass/ICAM
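Conceptually, a feature-attribution map from a translation network is the voxelwise difference between an input scan and its class-translated counterfactual: voxels the network had to change to switch class are the ones attributed to the phenotype. A toy numpy illustration follows; the quantile thresholding step is our assumption for display purposes, not part of ICAM.

```python
import numpy as np

def attribution_map(original, translated, top_fraction=0.1):
    """Voxelwise |translated - original|, keeping only the strongest changes."""
    diff = np.abs(translated - original)
    cutoff = np.quantile(diff, 1.0 - top_fraction)
    return np.where(diff >= cutoff, diff, 0.0)

# Toy "scan": the class translation alters mostly one region (index 2).
orig = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
trans = np.array([0.1, 0.2, 0.9, 0.4, 0.5])
fa = attribution_map(orig, trans, top_fraction=0.2)
```

The resulting map is sparse and subject-specific, which is what lets such approaches explain outlier predictions on individual scans rather than only group averages.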