5,059 research outputs found

    Automatic Optimum Atlas Selection for Multi-Atlas Image Segmentation using Joint Label Fusion

    Multi-atlas image segmentation using label fusion is one of the most accurate state-of-the-art image segmentation techniques available for biomedical imaging applications. Motivated by the goals of higher segmentation accuracy, reduced computational cost, and continuously growing atlas data sizes, a robust framework for optimum selection of atlases for label fusion is vital. Although some works consider atlas selection not critical for weighted label fusion techniques (Sabuncu, M. R. et al., 2010 [1]), others have shown that appropriate atlas selection has several merits and can improve multi-atlas image segmentation accuracy (Aljabar et al., 2009 [2]; Van de Velde et al., 2016 [27]). This thesis proposed an automatic Optimum Atlas Selection (OAS) framework, a pre-label-fusion step that improved Dice similarity scores for image segmentation using the Joint Label Fusion (JLF) implementation by Wang et al., 2013 [3, 26]. A selection criterion based on an image similarity comparison score against the global majority-voting fusion output was employed to select an optimum number of atlases, out of all available atlases, for the label fusion step. In leave-one-out validation tests, the OAS framework led to significant improvements in segmentation accuracy for aphasia-stroke head magnetic resonance (MR) images of 1.79% (p = 0.005520) and 0.5% (p = 0.000656), using sets of 7 homogeneous stroke and 19 inhomogeneous atlas datasets, respectively. Further, using a comparatively limited atlas data size (19 atlases) composed of normal and stroke head MR images, t-tests showed no statistically significant difference in segmentation Dice scores between the proposed OAS protocol and a known automatic Statistical Parametric Mapping (SPM) plus touch-up algorithm protocol [4] (p = 0.49417).
This leads to the conclusion that the proposed OAS framework is an effective and suitable atlas selection protocol for multi-atlas image segmentation that improves brain MR image segmentation accuracy. It is comparable in performance to known image segmentation algorithms and can reduce computational costs on large atlas datasets. As future work, increasing the atlas data size and using a more robust approach for determining the optimum selection threshold value, and the corresponding number of atlases used in the label fusion process, could be explored to enhance overall segmentation accuracy. Furthermore, an unbiased performance comparison of the proposed OAS framework against other image segmentation algorithms requires truly manually segmented atlas ground-truth MR images and labels
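The selection criterion described in this abstract (ranking atlases by similarity to a global majority-voting fusion) can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation: the function names (`dice`, `majority_vote`, `select_atlases`) and the threshold value are assumptions for illustration, and binary NumPy masks stand in for real MR label volumes.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def majority_vote(candidate_labels):
    """Global majority-voting fusion of stacked binary label maps."""
    return (np.mean(candidate_labels, axis=0) >= 0.5).astype(np.uint8)

def select_atlases(candidate_labels, threshold=0.7):
    """Keep atlases whose propagated labels agree with the majority-vote
    fusion above a similarity threshold (hypothetical OAS-style rule);
    only the selected subset would then enter the JLF step."""
    fused = majority_vote(candidate_labels)
    scores = [dice(lab, fused) for lab in candidate_labels]
    return [i for i, s in enumerate(scores) if s >= threshold]
```

In practice the threshold (and hence the number of retained atlases) is exactly the tuning knob the abstract's future-work discussion proposes to choose more robustly.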

    Towards image-guided pancreas and biliary endoscopy: Automatic multi-organ segmentation on abdominal CT with dense dilated networks

    Segmentation of anatomy on abdominal CT enables patient-specific image guidance in clinical endoscopic procedures and in endoscopy training. Because robust interpatient registration of abdominal images is necessary for existing multi-atlas- and statistical-shape-model-based segmentations, but remains challenging, there is a need for automated multi-organ segmentation that does not rely on registration. We present a deep-learning-based algorithm for segmenting the liver, pancreas, stomach, and esophagus using dilated convolution units with dense skip connections and a new spatial prior. The algorithm was evaluated with an 8-fold cross-validation and compared to a joint-label-fusion-based segmentation based on Dice scores and boundary distances. The proposed algorithm yielded more accurate segmentations than the joint-label-fusion-based algorithm for the pancreas (median Dice scores 66 vs 37), stomach (83 vs 72), and esophagus (73 vs 54), and marginally less accurate segmentation for the liver (92 vs 93). We conclude that dilated convolutional networks with dense skip connections can segment the liver, pancreas, stomach, and esophagus from abdominal CT without image registration and have the potential to support image-guided navigation in gastrointestinal endoscopy procedures.
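The dilated-convolution building block this network stacks can be illustrated in one dimension. This is a toy NumPy sketch, not the paper's architecture (which would be built in a deep-learning framework): it only shows how a dilation factor d lets a kernel of size k cover d*(k-1)+1 input samples, enlarging the receptive field without adding parameters.

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Valid-mode 1D cross-correlation with a dilated kernel.
    With dilation d, a kernel of size k spans d*(k-1)+1 input samples."""
    k = len(w)
    span = dilation * (k - 1) + 1  # effective receptive field
    out_len = len(x) - span + 1
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])
```

Stacking such units with increasing dilation rates (and dense skip connections between them) is what lets the network aggregate multi-scale context without pooling away spatial resolution.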

    Manual-protocol inspired technique for improving automated MR image segmentation during label fusion

    Recent advances in multi-atlas based algorithms address many of the previous limitations in model-based and probabilistic segmentation methods. However, at the label fusion stage, a majority of algorithms focus primarily on optimizing weight-maps associated with the atlas library based on a theoretical objective function that approximates the segmentation error. In contrast, we propose a novel method-Autocorrecting Walks over Localized Markov Random Fields (AWoL-MRF)-that aims at mimicking the sequential process of manual segmentation, which is the gold-standard for virtually all the segmentation methods. AWoL-MRF begins with a set of candidate labels generated by a multi-atlas segmentation pipeline as an initial label distribution and refines low confidence regions based on a localized Markov random field (L-MRF) model using a novel sequential inference process (walks). We show that AWoL-MRF produces state-of-the-art results with superior accuracy and robustness with a small atlas library compared to existing methods. We validate the proposed approach by performing hippocampal segmentations on three independent datasets: (1) Alzheimer's Disease Neuroimaging Database (ADNI); (2) First Episode Psychosis patient cohort; and (3) a cohort of preterm neonates scanned early in life and at term-equivalent age. We assess the improvement in the performance qualitatively as well as quantitatively by comparing AWoL-MRF with majority vote, STAPLE, and Joint Label Fusion methods. AWoL-MRF reaches a maximum accuracy of 0.881 (dataset 1), 0.897 (dataset 2), and 0.807 (dataset 3) based on Dice similarity coefficient metric, offering significant performance improvements with a smaller atlas library (< 10) over compared methods. We also evaluate the diagnostic utility of AWoL-MRF by analyzing the volume differences per disease category in the ADNI1: Complete Screening dataset. We have made the source code for AWoL-MRF public at: https://github.com/CobraLab/AWoL-MRF
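The initialization step AWoL-MRF starts from (flagging low-confidence voxels in the candidate labels) can be sketched as follows. This illustrates only that first step, not the L-MRF walks themselves; the function names and the agreement threshold `tau` are assumptions for illustration, with stacked binary label maps standing in for real candidate segmentations.

```python
import numpy as np

def vote_confidence(candidate_labels):
    """Fraction of atlases agreeing with the per-voxel majority label."""
    votes = np.mean(candidate_labels, axis=0)
    return np.maximum(votes, 1.0 - votes)

def low_confidence_mask(candidate_labels, tau=0.75):
    """Voxels whose inter-atlas agreement falls below tau; these are the
    regions a method like AWoL-MRF would revisit with localized MRF
    inference rather than trusting the initial fusion."""
    return vote_confidence(candidate_labels) < tau
```

Restricting refinement to this mask is what keeps the sequential inference tractable even with a small atlas library.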

    Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging

    Many analyses of neuroimaging data involve studying one or more regions of interest (ROIs) in a brain image. In order to do so, each ROI must first be identified. Since every brain is unique, the location, size, and shape of each ROI varies across subjects. Thus, each ROI in a brain image must either be manually identified or (semi-)automatically delineated, a task referred to as segmentation. Automatic segmentation often involves mapping a previously manually segmented image to a new brain image and propagating the labels to obtain an estimate of where each ROI is located in the new image. A more recent approach to this problem is to propagate labels from multiple manually segmented atlases and combine the results using a process known as label fusion. To date, most label fusion algorithms either employ voting procedures or impose prior structure and subsequently find the maximum a posteriori estimator (i.e., the posterior mode) through optimization. We propose using a fully Bayesian spatial regression model for label fusion that facilitates direct incorporation of covariate information while making accessible the entire posterior distribution. We discuss the implementation of our model via Markov chain Monte Carlo and illustrate the procedure through both simulation and application to segmentation of the hippocampus, an anatomical structure known to be associated with Alzheimer's disease.
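The key contrast the abstract draws (accessing the entire posterior via MCMC rather than only its mode) can be illustrated with a toy sampler. This is not the paper's spatial model: it is a minimal random-walk Metropolis sketch for a single Bernoulli probability with a flat prior on the logit scale, and the function name and step size are assumptions for illustration.

```python
import numpy as np

def metropolis_bernoulli(y, n_iter=4000, step=0.3, seed=0):
    """Random-walk Metropolis on theta = logit(p) for Bernoulli data y,
    with a flat prior on theta; returns posterior draws of p. A toy
    stand-in for the paper's spatial binary-regression MCMC."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    n, s = len(y), y.sum()
    def loglik(theta):
        p = 1.0 / (1.0 + np.exp(-theta))
        return s * np.log(p) + (n - s) * np.log(1.0 - p)
    theta, draws = 0.0, []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        # Accept with probability min(1, exp(loglik(prop) - loglik(theta)))
        if np.log(rng.uniform()) < loglik(prop) - loglik(theta):
            theta = prop
        draws.append(1.0 / (1.0 + np.exp(-theta)))
    return np.array(draws)
```

Unlike an optimization that returns only the posterior mode, the draws (after burn-in) support credible intervals and any other posterior summary, which is the practical payoff the abstract points to.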