58 research outputs found

    Supervised Nonparametric Image Parcellation

    Author manuscript, 2010 August 25. 12th International Conference, London, UK, September 20-24, 2009, Proceedings, Part II.
    Segmentation of medical images is commonly formulated as a supervised learning problem, where manually labeled training data are summarized using a parametric atlas. Summarizing the data alleviates the computational burden at the expense of possibly losing valuable information on inter-subject variability. This paper presents a novel framework for Supervised Nonparametric Image Parcellation (SNIP). SNIP models the intensity and label images as samples of a joint distribution estimated from the training data in a non-parametric fashion. By capitalizing on recently developed fast and robust pairwise image alignment tools, SNIP employs the entire training data set to segment a new image via Expectation Maximization. The use of multiple registrations increases robustness to occasional registration failures. We report experiments on 39 volumetric brain MRI scans with manual labels for the white matter, cortex, and subcortical structures. SNIP yields better segmentation than state-of-the-art algorithms in multiple regions of interest.
    Funding: NAMIC (NIH NIBIB U54-EB005149); NAC (NIH NCRR P41-RR13218); mBIRN (NIH NCRR U24-RR021382); NIH NINDS (Grant R01-NS051826); National Science Foundation (U.S.) (CAREER Grant 0642971); NCRR (P41-RR14075); NCRR (R01 RR16594-01A1); NIBIB (R01 EB001550); NIBIB (R01 EB006758); NINDS (R01 NS052585-01); Mind Research Institute; Ellison Medical Foundation; Singapore Agency for Science, Technology and Research.
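    The sketch below illustrates the flavor of this approach with a minimal EM-style label fusion over pre-registered atlases, assuming a Gaussian intensity-noise model; the function name and parameters are illustrative, and the published SNIP model is considerably richer.

```python
import numpy as np

def em_label_fusion(target, atlas_imgs, atlas_labels, n_labels, sigma=10.0, n_iter=10):
    """target: (H, W) intensity image; atlas_imgs: (N, H, W) atlas intensities already
    registered to the target; atlas_labels: (N, H, W) label maps warped the same way."""
    resid = atlas_imgs - target[None]
    for _ in range(n_iter):
        # E-step: weight each atlas voxelwise by how well its intensity explains
        # the target under a Gaussian noise model with the current sigma.
        lik = np.exp(-0.5 * (resid / sigma) ** 2) + 1e-12
        w = lik / lik.sum(axis=0, keepdims=True)
        # M-step: re-estimate the noise level from the weighted residuals.
        sigma = np.sqrt((w * resid ** 2).sum() / w.sum())
    # Fuse: turn per-atlas weights into per-label votes and take the arg-max.
    votes = np.stack([(w * (atlas_labels == k)).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)
```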

    Automated Segmentation of Hippocampal Subfields From Ultra-High Resolution In Vivo MRI

    Recent developments in MRI data acquisition technology are starting to yield images that show anatomical features of the hippocampal formation at an unprecedented level of detail, providing the basis for hippocampal subfield measurement. However, a fundamental bottleneck in MRI studies of the hippocampus at the subfield level is that they currently depend on manual segmentation, a laborious process that severely limits the amount of data that can be analyzed. In this article, we present a computational method for segmenting the hippocampal subfields in ultra-high resolution MRI data in a fully automated fashion. Using Bayesian inference with a statistical model of image formation around the hippocampal area, we obtain automated segmentations. We validate the proposed technique by comparing its segmentations to corresponding manual delineations in ultra-high resolution MRI scans of 10 individuals, and show that automated volume measurements of the larger subfields correlate well with manual volume estimates. Unlike manual segmentation, our automated technique is fully reproducible, and fast enough to enable routine analysis of the hippocampal subfields in large imaging studies.
    Funding: National Institutes of Health (U.S.) (NIH NCRR; Grant number: P41-RR14075); National Institutes of Health (U.S.) (Grant R01 RR16594-01A1); National Institutes of Health (U.S.) (Grant NAC P41-RR13218); Biomedical Informatics Research Network (BIRN002); Biomedical Informatics Research Network (U24 RR021382); National Institute of Biomedical Imaging and Bioengineering (U.S.) (R01 EB001550); National Institute of Biomedical Imaging and Bioengineering (U.S.) (R01 EB006758); National Institute of Biomedical Imaging and Bioengineering (U.S.) (NAMIC U54-EB005149); National Institute of Neurological Disorders and Stroke (U.S.) (R01 NS052585-01); National Institute of Neurological Disorders and Stroke (U.S.) (R01 NS051826); Mental Illness and Neuroscience Discovery (MIND) Institute; Ellison Medical Foundation (Autism & Dyslexia Project).
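    As a rough illustration of the Bayesian step described above, the toy sketch below combines a probabilistic atlas prior with per-label Gaussian intensity likelihoods to produce voxelwise posteriors; the generative model in the paper is far more detailed, and all names and parameters here are illustrative.

```python
import numpy as np

def bayesian_posteriors(image, atlas_prior, means, variances):
    """image: (H, W); atlas_prior: (K, H, W) prior probability of each of K labels;
    means, variances: per-label Gaussian intensity parameters (length-K arrays)."""
    K = atlas_prior.shape[0]
    log_post = np.empty_like(atlas_prior, dtype=float)
    for k in range(K):
        # Log-likelihood of the observed intensity under label k, plus the log prior.
        log_lik = -0.5 * ((image - means[k]) ** 2 / variances[k]
                          + np.log(2.0 * np.pi * variances[k]))
        log_post[k] = log_lik + np.log(atlas_prior[k] + 1e-12)
    # Normalize in the log domain for numerical stability.
    log_post -= log_post.max(axis=0, keepdims=True)
    post = np.exp(log_post)
    return post / post.sum(axis=0, keepdims=True)

# hard_seg = bayesian_posteriors(img, prior, mu, var).argmax(axis=0)
```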

    PSACNN: Pulse Sequence Adaptive Fast Whole Brain Segmentation

    With the advent of convolutional neural networks (CNNs), supervised learning methods are increasingly being used for whole brain segmentation. However, the large, manually annotated training datasets of labeled brain images required to train such supervised methods are frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol. CNNs trained on such datasets are unable to generalize to test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MRI acquisition parameters (scanners, field strengths, receive coils, etc.) that affect image contrast. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of the pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences yields a wide variety of augmented training examples that help build an image-contrast-invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts using only T1-weighted training data. The segmentations generated are highly accurate (state-of-the-art overall Dice overlap of 0.94), fast (run time of approximately 45 seconds), and consistent across a wide range of acquisition protocols.
    Comment: Typo in author name corrected (Greves -> Greve).
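    The augmentation idea can be sketched as follows, assuming tissue parameter maps (PD, T1, T2) are available and using a textbook spin-echo signal equation as a stand-in for the paper's approximate forward models; the parameter ranges are illustrative, not the authors'.

```python
import numpy as np

def spin_echo(pd, t1, t2, tr, te):
    """Classic spin-echo signal equation; pd, t1, t2 are voxelwise maps (t1, t2 in ms)."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

def augment(pd, t1, t2, rng=np.random):
    # Sample acquisition parameters over a broad (illustrative) range so the
    # synthesized images span T1-weighted through T2-weighted contrasts.
    tr = rng.uniform(400.0, 6000.0)   # repetition time, ms
    te = rng.uniform(8.0, 120.0)      # echo time, ms
    img = spin_echo(pd, t1, t2, tr, te)
    return img / img.max()            # normalize before feeding the segmentation CNN
```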

    Active Mean Fields for Probabilistic Image Segmentation: Connections with Chan-Vese and Rudin-Osher-Fatemi Models

    Segmentation is a fundamental task for extracting semantically meaningful regions from an image. The goal of segmentation algorithms is to accurately assign object labels to each image location. However, image noise, shortcomings of algorithms, and image ambiguities cause uncertainty in label assignment. Estimating the uncertainty in label assignment is important in multiple application domains, such as segmenting tumors from medical images for radiation treatment planning. One way to estimate these uncertainties is through the computation of posteriors of Bayesian models, which is computationally prohibitive for many practical applications. On the other hand, most computationally efficient methods fail to estimate label uncertainty. We therefore propose the Active Mean Fields (AMF) approach, a technique based on Bayesian modeling that uses a mean-field approximation to efficiently compute a segmentation and its corresponding uncertainty. Based on a variational formulation, the resulting convex model combines any label-likelihood measure with a prior on the length of the segmentation boundary. A specific implementation of that model is the Chan-Vese segmentation model (CV), in which the binary segmentation task is defined by a Gaussian likelihood and a prior regularizing the length of the segmentation boundary. Furthermore, the Euler-Lagrange equations derived from the AMF model are equivalent to those of the popular Rudin-Osher-Fatemi (ROF) model for image denoising. Solutions to the AMF model can thus be obtained by directly applying highly efficient ROF solvers to log-likelihood ratio fields. We qualitatively assess the approach on synthetic data as well as on real natural and medical images. For a quantitative evaluation, we apply our approach to the icgbench dataset.
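    A minimal sketch of the "ROF on a log-likelihood ratio field" idea, assuming a two-class Gaussian likelihood and using scikit-image's TV denoiser as a stand-in for the authors' solver; the sigmoid mapping back to probabilities and all parameters are illustrative.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def amf_like_segmentation(image, mu_fg, mu_bg, sigma=0.1, smooth=0.2):
    """Soft binary segmentation of a grayscale image in [0, 1]."""
    # Log-likelihood ratio of foreground vs. background under two Gaussians.
    llr = ((image - mu_bg) ** 2 - (image - mu_fg) ** 2) / (2.0 * sigma ** 2)
    # ROF/TV smoothing of the ratio field penalizes boundary length.
    llr_smooth = denoise_tv_chambolle(llr, weight=smooth)
    prob_fg = 1.0 / (1.0 + np.exp(-llr_smooth))   # soft labels carry the uncertainty
    return prob_fg, prob_fg > 0.5
```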

    Scalable joint segmentation and registration framework for infant brain images

    The first year of life is the most dynamic and perhaps the most critical phase of postnatal brain development. The ability to accurately measure structural changes is critical in early brain development studies, which rely heavily on the performance of image segmentation and registration techniques. However, infant image segmentation and registration, if deployed independently, each encounter far more challenges than their adult-brain counterparts because of the rapid appearance changes that accompany early brain development. In fact, segmentation and registration of infant images can assist each other in overcoming these challenges by using growth trajectories (i.e., temporal correspondences) learned from a large set of training subjects with complete longitudinal data. Specifically, a one-year-old image with ground-truth tissue segmentation is first set as the reference domain. Then, to register the infant image of a new subject at an earlier age, we estimate its tissue probability maps with a sparse patch-based multi-atlas label fusion technique, where only the training images at the corresponding age serve as atlases since they have similar image appearance. Next, these probability maps are fused into a good initialization to guide the level-set segmentation. Image registration between the new infant image and the reference image is thus freed from the difficulty of appearance change, since correspondences are established on the reasonably segmented images. Importantly, the segmentation of the new infant image can be further enhanced by propagating the more reliable label fusion results in the reference domain to the corresponding locations in the new infant image via the learned growth trajectories, so that segmentation and registration assist each other. Our joint segmentation and registration framework can also handle the registration of any two infant images, even with a significant age gap within the first year of life, by linking their joint segmentation and registration through the reference domain. The proposed method is therefore scalable to various registration tasks in early brain development studies. Promising segmentation and registration results have been achieved for infant brain MR images aged from 2 weeks to 1 year, indicating the applicability of our method to early brain development studies.
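    The patch-based label fusion step mentioned above can be sketched as follows, assuming age-matched atlases roughly aligned to the new infant image; the patch size, bandwidth, and Gaussian weighting are illustrative choices, and the sparsity-constrained variant used in the paper is not reproduced.

```python
import numpy as np

def patch_fusion(target, atlas_imgs, atlas_probs, patch=3, h=0.05):
    """target: (H, W); atlas_imgs: (N, H, W) roughly aligned, age-matched atlases;
    atlas_probs: (N, K, H, W) tissue probability maps of the atlases."""
    r = patch // 2
    n, k = atlas_probs.shape[0], atlas_probs.shape[1]
    H, W = target.shape
    fused = np.zeros((k, H, W))
    for y in range(r, H - r):
        for x in range(r, W - r):
            tp = target[y - r:y + r + 1, x - r:x + r + 1]
            w = np.empty(n)
            for i in range(n):
                ap = atlas_imgs[i, y - r:y + r + 1, x - r:x + r + 1]
                # Gaussian weight on the patchwise intensity difference.
                w[i] = np.exp(-np.sum((tp - ap) ** 2) / (h * patch * patch))
            w /= w.sum() + 1e-12
            fused[:, y, x] = (w[:, None] * atlas_probs[:, :, y, x]).sum(axis=0)
    return fused   # fused tissue maps, used to initialize the level-set segmentation
```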

    A new label fusion method using graph cuts: application to hippocampus segmentation

    The aim of this paper is to develop a probabilistic modeling framework for the segmentation of structures of interest from a collection of atlases. Given a subset of atlases registered to the target image for a particular region of interest (ROI), a statistical model of appearance and shape is computed for fusing the labels. Segmentations are obtained by minimizing an energy function associated with the proposed model using a graph-cut technique. We test different label fusion methods on publicly available MR images of human brains.
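    A compact sketch of binary label fusion by graph cuts in the spirit described above: unary costs come from a fused atlas probability map, the pairwise term is a simple Potts smoothness penalty, and PyMaxflow stands in for whichever solver the authors used; the appearance and shape model of the paper is not reproduced.

```python
import numpy as np
import maxflow  # PyMaxflow

def graphcut_fusion(prob_fg, smoothness=0.5):
    """prob_fg: (H, W) fused probability of the structure at each voxel."""
    eps = 1e-6
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(prob_fg.shape)
    # Pairwise (Potts) smoothness term between 4-connected neighbours.
    g.add_grid_edges(nodes, smoothness)
    # Unary terms as negative log-probabilities: the source capacity is the cost of
    # labelling a voxel foreground, the sink capacity the cost of labelling it background.
    g.add_grid_tedges(nodes, -np.log(prob_fg + eps), -np.log(1.0 - prob_fg + eps))
    g.maxflow()
    return g.get_grid_segments(nodes)   # boolean mask; True on the foreground side
```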