
    Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies

    Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited by the time the labelling task requires. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on the expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common to many existing approaches and formulate its solution as a Markov Random Field (MRF) energy minimisation problem on a graph connecting the atlases and the target image, which provides a unifying framework for multi-atlas segmentation. We then show how modifications to the graph configuration of the proposed framework enable the use of partially annotated atlas images, and we investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed that aimed at (1) recreating existing segmentation techniques within the proposed framework and (2) demonstrating the potential of sparsely annotated atlas data for multi-atlas segmentation.
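In its simplest graph configuration, the labelling problem described above reduces to per-voxel label fusion. The following minimal sketch shows plain majority voting, the baseline that the MRF formulation generalises; all names and the toy data are illustrative, not taken from the paper:

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Fuse propagated atlas labels by per-voxel majority vote.

    atlas_labels: array of shape (n_atlases, *image_shape) with integer labels.
    Returns an array of shape image_shape with the most frequent label per voxel.
    """
    atlas_labels = np.asarray(atlas_labels)
    n_labels = atlas_labels.max() + 1
    # Count the votes each label receives at every voxel.
    votes = np.stack([(atlas_labels == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy "atlases" propagating binary labels onto a 2x2 target image.
labels = np.array([[[0, 1], [1, 1]],
                   [[0, 1], [0, 1]],
                   [[1, 1], [0, 0]]])
fused = majority_vote_fusion(labels)
print(fused)  # majority label at each voxel
```

The paper's contribution can be read as replacing this independent per-voxel vote with a joint MRF energy over a graph connecting atlases and target.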

    Automatic Optimum Atlas Selection for Multi-Atlas Image Segmentation using Joint Label Fusion

    Multi-atlas image segmentation using label fusion is among the most accurate state-of-the-art image segmentation techniques available for biomedical imaging applications. Motivated by the goals of higher segmentation accuracy and lower computational cost in the face of continuously growing atlas datasets, a robust framework for the optimum selection of atlases for label fusion is vital. Although some works regard atlas selection as non-critical for weighted label fusion techniques (Sabuncu, M. R. et al., 2010 [1]), others have shown that appropriate atlas selection has several merits and can improve multi-atlas segmentation accuracy (Aljabar et al., 2009 [2]; Van de Velde et al., 2016 [27]). This thesis proposes an automatic Optimum Atlas Selection (OAS) framework, applied as a pre-label-fusion step, that improves the Dice similarity scores obtained with the Joint Label Fusion (JLF) implementation of Wang et al., 2013 [3, 26]. A selection criterion based on an image similarity score against a global majority-voting fusion output is employed to select an optimum subset of the available atlases for the label fusion step. In leave-one-out validation tests on magnetic resonance (MR) images of aphasic stroke patients' heads, the OAS framework significantly improved segmentation accuracy by 1.79% (p = 0.005520) and 0.5% (p = 0.000656) using sets of 7 homogeneous stroke and 19 inhomogeneous atlas datasets, respectively. Further, using a comparatively small atlas dataset (19 atlases) composed of normal and stroke head MR images, t-tests showed no statistically significant difference in Dice scores between the proposed OAS protocol and a known automatic protocol combining Statistical Parametric Mapping (SPM) with a touch-up algorithm [4] (p = 0.49417).
    We conclude that the proposed OAS framework is an effective and suitable atlas selection protocol for multi-atlas image segmentation: it improves brain MR image segmentation accuracy, is comparable in performance to known image segmentation algorithms, and can reduce computation costs on large atlas datasets. As future work, increasing the atlas dataset size and adopting a more robust approach for determining the optimum selection threshold (and hence the number of atlases entering the label fusion process) could further enhance overall segmentation accuracy. Furthermore, an unbiased performance comparison of the proposed OAS framework against other segmentation algorithms requires truly manually segmented ground-truth MR images and labels.
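The selection step described above, scoring each atlas against a global majority-voting fusion output and keeping only the best-scoring subset before the costlier joint label fusion, can be sketched roughly as follows. The Dice-based score and the fixed `top_k` cut-off are illustrative assumptions, not the thesis's exact criterion:

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def select_atlases(atlas_labels, top_k):
    """Rank binary atlas label maps by agreement with the majority-vote
    consensus and return the indices of the top_k best-agreeing atlases."""
    atlas_labels = np.asarray(atlas_labels)
    # Global majority-voting fusion output used as the comparison reference.
    consensus = atlas_labels.sum(axis=0) > atlas_labels.shape[0] / 2
    scores = [dice(lab, consensus) for lab in atlas_labels]
    order = np.argsort(scores)[::-1]        # best agreement first
    return sorted(order[:top_k].tolist())

labels = np.array([[[1, 1], [0, 0]],
                   [[1, 1], [0, 1]],
                   [[0, 0], [1, 1]]])
selected = select_atlases(labels, top_k=2)
print(selected)  # indices of the two atlases closest to the consensus
```

Only the selected atlases would then be passed to the expensive JLF step, which is where the computational saving on large atlas libraries comes from.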

    Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging

    Many analyses of neuroimaging data involve studying one or more regions of interest (ROIs) in a brain image. In order to do so, each ROI must first be identified. Since every brain is unique, the location, size, and shape of each ROI varies across subjects. Thus, each ROI in a brain image must either be manually identified or (semi-) automatically delineated, a task referred to as segmentation. Automatic segmentation often involves mapping a previously manually segmented image to a new brain image and propagating the labels to obtain an estimate of where each ROI is located in the new image. A more recent approach to this problem is to propagate labels from multiple manually segmented atlases and combine the results using a process known as label fusion. To date, most label fusion algorithms either employ voting procedures or impose prior structure and subsequently find the maximum a posteriori estimator (i.e., the posterior mode) through optimization. We propose using a fully Bayesian spatial regression model for label fusion that facilitates direct incorporation of covariate information while making accessible the entire posterior distribution. We discuss the implementation of our model via Markov chain Monte Carlo and illustrate the procedure through both simulation and application to segmentation of the hippocampus, an anatomical structure known to be associated with Alzheimer's disease. (Comment: 24 pages, 10 figures)
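The contrast drawn above, access to a full posterior rather than a single MAP point estimate, can be illustrated with a deliberately simplified, non-spatial Beta-Bernoulli model for one voxel. The actual model is a spatial regression fit by MCMC; everything below is a toy assumption:

```python
def label_posterior(votes, a=1.0, b=1.0):
    """Beta posterior for the probability that one voxel carries the ROI label.

    votes: binary labels propagated from the atlases for a single voxel.
    With a Beta(a, b) prior and k positive votes out of n, the posterior is
    Beta(a + k, b + n - k).  Returning its mean *and* variance keeps the
    uncertainty that a MAP-only (posterior-mode) approach would discard.
    """
    k, n = sum(votes), len(votes)
    alpha, beta = a + k, b + n - k        # posterior parameters
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1.0))
    return mean, var

mean, var = label_posterior([1, 1, 1, 0, 1])  # 4 of 5 atlases vote "ROI"
print(mean)  # 5/7 under a uniform Beta(1, 1) prior
```

The proposed model goes much further, coupling neighbouring voxels spatially and admitting covariates, but the payoff is the same: credible intervals and other posterior summaries, not just a mode.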

    Fast and Sequence-Adaptive Whole-Brain Segmentation Using Parametric Bayesian Modeling

    Quantitative analysis of magnetic resonance imaging (MRI) scans of the brain requires accurate automated segmentation of anatomical structures. A desirable feature for such segmentation methods is to be robust against changes in acquisition platform and imaging protocol. In this paper we validate the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four different datasets acquired with different scanners, field strengths and pulse sequences, demonstrating comparable accuracy to state-of-the-art methods on T1-weighted scans while being one to two orders of magnitude faster. The proposed algorithm is also shown to be robust against small training datasets, and readily handles images with different MRI contrast as well as multi-contrast data.
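The generative idea referred to above, modelling each tissue class by a parametric intensity distribution whose parameters are re-estimated from the data (so no fixed MRI contrast is assumed), can be sketched as a one-dimensional Gaussian classifier. This is illustrative only; the actual method uses far richer models with probabilistic atlas priors:

```python
import numpy as np

def fit_and_classify(intensities, labels, target):
    """Fit a Gaussian intensity model per tissue class, then label a target scan.

    intensities, labels: training voxel intensities and their class labels.
    target: intensities of the scan to segment.
    Because the class means and variances are estimated rather than fixed,
    the same code adapts to a different intensity scale or pulse sequence.
    """
    classes = np.unique(labels)
    mus = np.array([intensities[labels == c].mean() for c in classes])
    sigmas = np.array([intensities[labels == c].std() + 1e-6 for c in classes])
    # Gaussian log-likelihood of each target voxel under each class model.
    ll = -0.5 * ((target[:, None] - mus) / sigmas) ** 2 - np.log(sigmas)
    return classes[ll.argmax(axis=1)]

rng = np.random.default_rng(0)
train = np.concatenate([rng.normal(30, 3, 100), rng.normal(80, 3, 100)])
lab = np.repeat([0, 1], 100)
pred = fit_and_classify(train, lab, np.array([28.0, 82.0]))
print(pred)
```

Sequence adaptivity in the validated algorithm comes from re-estimating exactly these kinds of class-conditional intensity parameters on each new scan rather than learning them once from a fixed training contrast.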

    Automatic segmentation of the hippocampus for preterm neonates from early-in-life to term-equivalent age.

    INTRODUCTION: The hippocampus, a medial temporal lobe structure central to learning and memory, is particularly vulnerable in preterm-born neonates. To date, segmentation of the hippocampus for preterm-born neonates has not yet been performed early-in-life (shortly after birth, when clinically stable). The present study focuses on the development and validation of an automatic segmentation protocol, based on the MAGeT-Brain (Multiple Automatically Generated Templates) algorithm, to delineate the hippocampi of preterm neonates on brain MRIs acquired not only at term-equivalent age but also early-in-life. METHODS: First, we present a three-step manual segmentation protocol to delineate the hippocampus for preterm neonates and apply this protocol to 22 early-in-life and 22 term images. These manual segmentations are considered the gold standard in assessing the automatic segmentations. MAGeT-Brain, an automatic hippocampal segmentation pipeline, requires only a small number of input atlases and reduces registration and resampling errors by employing an intermediate template library. We assess the segmentation accuracy of MAGeT-Brain in three validation studies, evaluate hippocampal growth from early-in-life to term-equivalent age, and study the effect of preterm birth on hippocampal volume. The first experiment thoroughly validates MAGeT-Brain segmentation in three sets of 10-fold Monte Carlo cross-validation (MCCV) analyses with 187 different groups of input atlases and templates. The second experiment segments the neonatal hippocampi on 168 early-in-life and 154 term images and evaluates the hippocampal growth rate of 125 infants from early-in-life to term-equivalent age. The third experiment analyzes the effect of gestational age (GA) at birth on the average hippocampal volume at early-in-life and term-equivalent age using linear regression.
RESULTS: The final segmentations demonstrate that MAGeT-Brain consistently provides accurate segmentations in comparison to the manually derived gold standards (mean Dice's Kappa > 0.79 and Euclidean distance …). CONCLUSIONS: MAGeT-Brain is capable of segmenting hippocampi accurately in preterm neonates, even early-in-life. Hippocampal asymmetry with a larger right side is demonstrated on early-in-life images, suggesting that this phenomenon has its onset in the third trimester of gestation. Hippocampal volume assessed at early-in-life and term-equivalent age is linearly associated with GA at birth, whereby smaller volumes are associated with earlier birth.
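The third experiment's analysis, regressing average hippocampal volume on gestational age at birth, amounts to an ordinary least-squares fit. The sketch below uses synthetic numbers, not the study's data:

```python
import numpy as np

def fit_volume_on_ga(ga_weeks, volume_mm3):
    """Least-squares line: volume = intercept + slope * GA at birth."""
    slope, intercept = np.polyfit(ga_weeks, volume_mm3, deg=1)
    return slope, intercept

# Synthetic illustration of the reported direction of the association:
# earlier birth (smaller GA) goes with smaller hippocampal volume.
ga = np.array([24.0, 26.0, 28.0, 30.0, 32.0])
vol = 40.0 * ga + 100.0            # exactly linear toy data
slope, intercept = fit_volume_on_ga(ga, vol)
print(slope, intercept)
```

A positive fitted slope is what the abstract's conclusion corresponds to: volume increases with GA at birth.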

    Manual-protocol inspired technique for improving automated MR image segmentation during label fusion

    Recent advances in multi-atlas based algorithms address many of the previous limitations in model-based and probabilistic segmentation methods. However, at the label fusion stage, a majority of algorithms focus primarily on optimizing weight-maps associated with the atlas library based on a theoretical objective function that approximates the segmentation error. In contrast, we propose a novel method-Autocorrecting Walks over Localized Markov Random Fields (AWoL-MRF)-that aims at mimicking the sequential process of manual segmentation, which is the gold-standard for virtually all the segmentation methods. AWoL-MRF begins with a set of candidate labels generated by a multi-atlas segmentation pipeline as an initial label distribution and refines low confidence regions based on a localized Markov random field (L-MRF) model using a novel sequential inference process (walks). We show that AWoL-MRF produces state-of-the-art results with superior accuracy and robustness with a small atlas library compared to existing methods. We validate the proposed approach by performing hippocampal segmentations on three independent datasets: (1) Alzheimer's Disease Neuroimaging Database (ADNI); (2) First Episode Psychosis patient cohort; and (3) a cohort of preterm neonates scanned early in life and at term-equivalent age. We assess the improvement in the performance qualitatively as well as quantitatively by comparing AWoL-MRF with majority vote, STAPLE, and Joint Label Fusion methods. AWoL-MRF reaches a maximum accuracy of 0.881 (dataset 1), 0.897 (dataset 2), and 0.807 (dataset 3) based on the Dice similarity coefficient metric, offering significant performance improvements with a smaller atlas library (< 10) over compared methods. We also evaluate the diagnostic utility of AWoL-MRF by analyzing the volume differences per disease category in the ADNI1: Complete Screening dataset. We have made the source code for AWoL-MRF public at: https://github.com/CobraLab/AWoL-MRF
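AWoL-MRF's starting point, flagging low-confidence voxels in the candidate label distribution for subsequent sequential refinement, can be sketched as a simple confidence map. The 75% vote threshold and the toy data are illustrative assumptions; the real method then re-labels the flagged voxels with localized MRF walks:

```python
import numpy as np

def low_confidence_mask(atlas_labels, threshold=0.75):
    """Flag voxels where the candidate labels disagree.

    atlas_labels: (n_atlases, *shape) integer label maps from the pipeline.
    A voxel is low-confidence when its most frequent candidate label wins
    less than `threshold` of the votes; those voxels would be refined by the
    localized MRF, while high-confidence voxels keep the majority label.
    """
    atlas_labels = np.asarray(atlas_labels)
    n = atlas_labels.shape[0]
    n_labels = atlas_labels.max() + 1
    votes = np.stack([(atlas_labels == l).sum(axis=0) for l in range(n_labels)])
    return votes.max(axis=0) / n < threshold

# Four candidate label maps for a 2x2 image; only the last voxel is split 2-2.
labels = np.array([[[0, 1], [1, 1]],
                   [[0, 1], [0, 1]],
                   [[0, 1], [1, 0]],
                   [[0, 0], [1, 0]]])
mask = low_confidence_mask(labels)
print(mask)
```

Restricting the expensive refinement to this (typically thin, boundary-hugging) mask is what lets a sequential, manual-protocol-style pass stay tractable on full-size images.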