Anatomical Priors in Convolutional Networks for Unsupervised Biomedical Segmentation
We consider the problem of segmenting a biomedical image into anatomical
regions of interest. We specifically address the frequent scenario where we
have no paired training data that contains images and their manual
segmentations. Instead, we employ unpaired segmentation images to build an
anatomical prior. Critically, these segmentations can be derived from imaging
data from a different dataset and imaging modality than the current task. We
introduce a generative probabilistic model that employs the learned prior
through a convolutional neural network to compute segmentations in an
unsupervised setting. We conducted an empirical analysis of the proposed
approach in the context of structural brain MRI segmentation, using a
multi-study dataset of more than 14,000 scans. Our results show that an
anatomical prior can enable fast unsupervised segmentation, which is typically
not possible using standard convolutional networks. The integration of
anatomical priors can facilitate CNN-based anatomical segmentation in a range
of novel clinical problems, where few or no annotations are available and thus
standard networks are not trainable. The code is freely available at
http://github.com/adalca/neuron.
Comment: Presented at CVPR 2018. IEEE CVPR proceedings pp. 9290-929
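The core idea, fusing a network's per-voxel label probabilities with an anatomical prior via Bayes' rule, can be sketched as follows. This is a minimal illustration, not the paper's actual generative model; the array shapes, label count, and random probability maps are all made up.

```python
import numpy as np

# Illustrative sizes only: 6 voxels, 3 anatomical labels.
rng = np.random.default_rng(0)
n_voxels, n_labels = 6, 3

# Stand-in for the CNN output: per-voxel label likelihoods (rows sum to 1).
likelihood = rng.dirichlet(np.ones(n_labels), size=n_voxels)

# Stand-in for the anatomical prior built from unpaired segmentations.
prior = rng.dirichlet(np.ones(n_labels), size=n_voxels)

# Posterior is proportional to likelihood x prior, renormalised per voxel.
posterior = likelihood * prior
posterior /= posterior.sum(axis=1, keepdims=True)

# Hard segmentation: most probable label at each voxel.
labels = posterior.argmax(axis=1)
```

The prior here acts as a soft constraint: labels that are anatomically implausible at a voxel are down-weighted even when the network assigns them high likelihood.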
Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging
Many analyses of neuroimaging data involve studying one or more regions of
interest (ROIs) in a brain image. In order to do so, each ROI must first be
identified. Since every brain is unique, the location, size, and shape of each
ROI varies across subjects. Thus, each ROI in a brain image must either be
manually identified or (semi-) automatically delineated, a task referred to as
segmentation. Automatic segmentation often involves mapping a previously
manually segmented image to a new brain image and propagating the labels to
obtain an estimate of where each ROI is located in the new image. A more recent
approach to this problem is to propagate labels from multiple manually
segmented atlases and combine the results using a process known as label
fusion. To date, most label fusion algorithms either employ voting procedures
or impose prior structure and subsequently find the maximum a posteriori
estimator (i.e., the posterior mode) through optimization. We propose using a
fully Bayesian spatial regression model for label fusion that facilitates
direct incorporation of covariate information while making accessible the
entire posterior distribution. We discuss the implementation of our model via
Markov chain Monte Carlo and illustrate the procedure through both simulation
and application to segmentation of the hippocampus, an anatomical structure
known to be associated with Alzheimer's disease.
Comment: 24 pages, 10 figures
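The voting baseline that this Bayesian model improves on is simple to state in code. The sketch below shows plain majority-vote label fusion for a binary ROI; the atlas count, image size, and propagated labels are illustrative, not from the paper.

```python
import numpy as np

# Illustrative sizes: 5 propagated atlases, 8 voxels.
rng = np.random.default_rng(1)
n_atlases, n_voxels = 5, 8

# Each row: one atlas's propagated label per voxel (0 = background, 1 = ROI).
propagated = rng.integers(0, 2, size=(n_atlases, n_voxels))

# Majority vote: label a voxel as ROI when more than half the atlases agree.
votes = propagated.sum(axis=0)
fused = (votes > n_atlases / 2).astype(int)
```

Unlike this point estimate, the fully Bayesian model in the abstract yields an entire posterior distribution over segmentations, so uncertainty at each voxel can be quantified.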
Automatic 3D bi-ventricular segmentation of cardiac images by a shape-refined multi-task deep learning approach
Deep learning approaches have achieved state-of-the-art performance in
cardiac magnetic resonance (CMR) image segmentation. However, most approaches
have focused on learning image intensity features for segmentation, whereas the
incorporation of anatomical shape priors has received less attention. In this
paper, we combine a multi-task deep learning approach with atlas propagation to
develop a shape-constrained bi-ventricular segmentation pipeline for short-axis
CMR volumetric images. The pipeline first employs a fully convolutional network
(FCN) that learns segmentation and landmark localisation tasks simultaneously.
The architecture of the proposed FCN uses a 2.5D representation, thus combining
the computational advantage of 2D FCNs and the capability of
addressing 3D spatial consistency without compromising segmentation accuracy.
Moreover, the refinement step is designed to explicitly enforce a shape
constraint and improve segmentation quality. This step is effective for
overcoming image artefacts (e.g. due to different breath-hold positions and
large slice thickness), which preclude the creation of anatomically meaningful
3D cardiac shapes. The proposed pipeline is fully automated, due to the network's
ability to infer landmarks, which are then used downstream in the pipeline to
initialise atlas propagation. We validate the pipeline on 1831 healthy subjects
and 649 subjects with pulmonary hypertension. Extensive numerical experiments
on the two datasets demonstrate that our proposed method is robust and capable
of producing accurate, high-resolution and anatomically smooth bi-ventricular
3D models, despite the artefacts in input CMR volumes.
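The 2.5D representation mentioned above can be illustrated with a small sketch: each 2D slice is presented to the network together with its neighbouring slices as extra channels, giving some through-plane context at 2D cost. The volume dimensions and context size below are made up for illustration; the paper's actual architecture is not reproduced here.

```python
import numpy as np

# Toy CMR volume: (slices, height, width), dimensions illustrative only.
volume = np.random.rand(12, 64, 64)
context = 1  # neighbouring slices taken on each side

# Pad along the slice axis so edge slices also get a full neighbourhood.
padded = np.pad(volume, ((context, context), (0, 0), (0, 0)), mode="edge")

# Stack each slice with its neighbours: (slices, 2*context+1, H, W),
# i.e. a multi-channel 2D input per slice rather than a full 3D patch.
stacked = np.stack(
    [padded[i : i + 2 * context + 1] for i in range(volume.shape[0])]
)
```

Each `stacked[i]` can then be fed to a 2D FCN whose input has `2*context+1` channels, which is one common way to trade off 3D spatial consistency against 3D compute cost.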
Robust whole-brain segmentation: Application to traumatic brain injury
We propose a framework for the robust and fully-automatic segmentation of magnetic resonance (MR) brain images called "Multi-Atlas Label Propagation with Expectation-Maximisation based refinement" (MALP-EM). The presented approach is based on a robust registration approach (MAPER), a highly performant label fusion method (joint label fusion) and intensity-based label refinement using EM. We further adapt this framework to make it applicable to the segmentation of brain images with gross changes in anatomy. We propose to account for consistent registration errors by relaxing the anatomical priors obtained by multi-atlas propagation and by a weighting scheme that locally combines anatomical atlas priors and intensity-refined posterior probabilities. The method is evaluated on a benchmark dataset used in a recent MICCAI segmentation challenge. In this context we show that MALP-EM is competitive for the segmentation of MR brain scans of healthy adults when compared to state-of-the-art automatic labelling techniques. To demonstrate the versatility of the proposed approach, we employed MALP-EM to segment 125 MR brain images into 134 regions from subjects who had sustained traumatic brain injury (TBI). We employ a protocol to assess segmentation quality when no manual reference labels are available. Based on this protocol, three independent, blinded raters confirmed on 13 MR brain scans with pathology that MALP-EM is superior to established label fusion techniques. We visually confirm the robustness of our segmentation approach on the full cohort and investigate the potential of derived symmetry-based imaging biomarkers that correlate with and predict clinically relevant variables in TBI, such as the Marshall Classification (MC) or the Glasgow Outcome Score (GOS). Specifically, we show that we are able to separate TBI patients with favourable outcomes from those with non-favourable outcomes with 64.7% accuracy using acute-phase MR images and 66.8% accuracy using follow-up MR images.
Furthermore, we are able to differentiate subjects with the presence of a mass lesion or midline shift from those with diffuse brain injury with 76.0% accuracy. The thalamus, putamen, pallidum and hippocampus are particularly affected. Their involvement predicts TBI disease progression.
This work was partially funded under the 7th Framework Programme by the European Commission (http://cordis.europa.eu/ist/, TBIcare: http://www.tbicare.eu/, last accessed: 8 December 2014). The research was further supported by the National Institute for Health Research (NIHR) Biomedical Research Centre (BRC) based at Imperial College Healthcare NHS Trust and Imperial College London. AH is supported by the Department of Health via the NIHR comprehensive BRC award to Guy’s & St Thomas’ NHS Foundation Trust in partnership with King’s College London and Kings College Hospital NHS Foundation Trust. This work was further supported by a Medical Research Council (UK) Program Grant (Acute brain injury: heterogeneity of mechanisms, therapeutic targets and outcome effects [G9439390 ID 65883]), the UK National Institute of Health Research Biomedical Research Centre at Cambridge, the Technology Platform funding provided by the UK Department of Health and an EPSRC Pathways to Impact award. VFJN is supported by a Health Foundation/Academy of Medical Sciences Clinician Scientist Fellowship. DKM is supported by an NIHR Senior Investigator Award. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health. The funders had no role in study design, data collection and analyses, decision to publish, or preparation of the manuscript.
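The locally weighted combination of atlas priors and intensity-refined posteriors described in the abstract can be sketched as a per-voxel convex blend. The weight map, label count, and probability values below are all invented for illustration; MALP-EM's actual weighting scheme is more involved.

```python
import numpy as np

# Illustrative sizes: 6 voxels, 4 anatomical labels.
rng = np.random.default_rng(2)
n_voxels, n_labels = 6, 4

# Stand-ins for the atlas prior and the EM intensity-refined posterior.
atlas_prior = rng.dirichlet(np.ones(n_labels), size=n_voxels)
intensity_posterior = rng.dirichlet(np.ones(n_labels), size=n_voxels)

# Local weight in [0, 1]: high where registration is trusted, low near
# pathology, where the relaxed prior should defer to intensity evidence.
w = rng.uniform(0, 1, size=(n_voxels, 1))

# Convex combination keeps each voxel's label distribution normalised.
combined = w * atlas_prior + (1 - w) * intensity_posterior
labels = combined.argmax(axis=1)
```

The key property is that in regions of gross anatomical change (e.g. a TBI lesion), the atlas prior is down-weighted rather than imposed, which is what makes the framework robust to consistent registration errors.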
Registration of low-SNR high-resolution diffusion-weighted images
This paper introduces a novel, high-speed scheme for intrasubject registration and segmentation of high-resolution multi-shot diffusion-weighted images.
Compared to single-shot sequences, multi-shot sequences offer improved spatial resolution and reduced eddy-current and susceptibility artifacts.
However, they have prolonged scan times, increasing the risk of subject motion, and a lower signal-to-noise ratio (SNR) due to smaller voxel volumes.
The proposed registration algorithm comprises a hybrid thresholding expectation-maximization segmentation method that can cope with the low SNR, and registers diffusion-weighted to B0 images through fast detection and matching of features found in edge images derived from the floating and reference images.
We validated the entire pipeline, including assessment of visual appearance by experts, consistency-error computations, and analysis of the segmentation, using volunteer images, and found its performance to be comparable with, or exceeding, that of established solutions.
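The expectation-maximization side of a segmentation step like the one described can be sketched as a two-component Gaussian mixture fitted to voxel intensities. This is a generic EM illustration under assumed synthetic data, not the paper's hybrid thresholding/EM method; the intensity distributions and initialisation are made up.

```python
import numpy as np

# Synthetic 1D intensities: a dark "background" class and a bright "tissue"
# class, stand-ins for a low-SNR diffusion-weighted image histogram.
rng = np.random.default_rng(3)
intensities = np.concatenate([rng.normal(20, 5, 500), rng.normal(80, 10, 500)])

# Initialise the two Gaussian classes from the intensity extremes.
mu = np.array([intensities.min(), intensities.max()], dtype=float)
sigma = np.array([10.0, 10.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibility of each class for each voxel.
    dens = pi / (sigma * np.sqrt(2 * np.pi)) * np.exp(
        -0.5 * ((intensities[:, None] - mu) / sigma) ** 2
    )
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture weights, means, and standard deviations.
    nk = resp.sum(axis=0)
    mu = (resp * intensities[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (intensities[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(intensities)

# Hard segmentation: 0 = background, 1 = tissue.
segmentation = resp.argmax(axis=1)
```

A simple threshold could initialise such a model (the "hybrid thresholding" aspect), after which EM refines the class parameters even when noise blurs the intensity histogram.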