
    Fast and Sequence-Adaptive Whole-Brain Segmentation Using Parametric Bayesian Modeling

    Quantitative analysis of magnetic resonance imaging (MRI) scans of the brain requires accurate automated segmentation of anatomical structures. A desirable feature of such segmentation methods is robustness against changes in acquisition platform and imaging protocol. In this paper we validate the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four different datasets acquired with different scanners, field strengths and pulse sequences, demonstrating comparable accuracy to state-of-the-art methods on T1-weighted scans while being one to two orders of magnitude faster. The proposed algorithm is also shown to be robust against small training datasets, and readily handles images with different MRI contrast as well as multi-contrast data.
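    As a rough illustration of the generative-parametric idea described in this abstract, the sketch below fits a simple Gaussian mixture to voxel intensities with EM and assigns each voxel its most probable tissue class. This is a minimal stand-in under strong assumptions (single-channel intensities, no atlas prior, no bias-field model), not the paper's actual algorithm; all names are hypothetical.

```python
import numpy as np

def fit_gmm_em(intensities, n_classes=3, n_iter=50, seed=0):
    """Toy EM for a 1-D Gaussian mixture over voxel intensities.

    A simplified stand-in for a generative parametric tissue model:
    no atlas prior, no bias-field correction, single contrast only.
    """
    rng = np.random.default_rng(seed)
    x = intensities.ravel().astype(float)
    # Initialise means from random voxels, shared variance, uniform weights.
    mu = rng.choice(x, n_classes)
    var = np.full(n_classes, x.var())
    pi = np.full(n_classes, 1.0 / n_classes)

    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel.
        lik = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * lik
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate mixture weights, means and variances.
        nk = resp.sum(axis=0) + 1e-12
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

    labels = resp.argmax(axis=1).reshape(intensities.shape)
    return labels, mu, var, pi
```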

    Development of a tool for automatic segmentation of the cerebellum in MR images of children

    The human cerebellar cortex is a highly foliated structure that supports both motor and complex cognitive functions in humans. Magnetic Resonance Imaging (MRI) is commonly used to explore structural alterations in patients with psychiatric and neurological diseases. The ability to detect regional structural differences in cerebellar lobules may provide valuable insights into disease biology, progression and response to treatment, but has been hampered by the lack of appropriate tools for automated structural cerebellar segmentation and morphometry. In this thesis, time-intensive manual tracings by an expert neuroanatomist of 16 cerebellar regions on high-resolution T1-weighted MR images of 18 children aged 9-13 years were used to generate the Cape Town Pediatric Cerebellar Atlas (CAPCA18) in the age-appropriate National Institutes of Health Pediatric Database (NIHPD) asymmetric template space. An automated pipeline was developed to process the MR images and generate lobule-wise segmentations, together with a measure of the uncertainty of the label assignments. Validation in an independent group of children of similar age to those used in constructing the atlas yielded spatial overlaps with manual segmentations greater than 70% in all lobules except lobules VIIb and X. Average spatial overlap of the whole cerebellar cortex was 86%, compared to 78% using the alternative Spatially Unbiased Infra-tentorial Template (SUIT), which was developed using adult images.
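    The spatial-overlap figures quoted above are typically reported as Dice coefficients between the automated and manual label maps. The short sketch below shows the per-lobule computation, assuming integer label volumes of identical shape; the function and argument names are hypothetical.

```python
import numpy as np

def dice_per_label(auto_labels, manual_labels, labels):
    """Dice overlap between an automated and a manual segmentation.

    auto_labels, manual_labels: integer label volumes of the same shape.
    labels: iterable of label values (e.g. the 16 cerebellar regions).
    """
    scores = {}
    for lab in labels:
        a = auto_labels == lab
        m = manual_labels == lab
        denom = a.sum() + m.sum()
        scores[lab] = 2.0 * np.logical_and(a, m).sum() / denom if denom else np.nan
    return scores
```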

    Multiatlas segmentation as nonparametric regression

    This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior, which characterizes the expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems.
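    A minimal sketch of label fusion cast as nonparametric (kernel) regression over image patches, in the spirit of the framework above: each target patch is labelled by a kernel-weighted vote over the atlas patches, with the bandwidth h as a free parameter. The Gaussian kernel and all names here are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def kernel_label_fusion(target_patch, atlas_patches, atlas_labels, h=1.0):
    """Nadaraya-Watson style label fusion over image patches.

    target_patch : (d,) flattened intensity patch to label.
    atlas_patches: (n, d) flattened patches from the atlas database.
    atlas_labels : (n,) label of the centre voxel of each atlas patch.
    h            : kernel bandwidth controlling how sharply weights decay.
    """
    sq_dist = ((atlas_patches - target_patch) ** 2).sum(axis=1)
    w = np.exp(-sq_dist / (2.0 * h ** 2))          # Gaussian kernel weights
    w /= w.sum() + 1e-12
    # Weighted vote: accumulate kernel weight per candidate label.
    votes = {}
    for lab, wi in zip(atlas_labels, w):
        votes[lab] = votes.get(lab, 0.0) + wi
    return max(votes, key=votes.get)
```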

    Learning Deep Similarity Metric for 3D MR-TRUS Registration

    Purpose: The fusion of transrectal ultrasound (TRUS) and magnetic resonance (MR) images for guiding targeted prostate biopsy has significantly improved the biopsy yield of aggressive cancers. A key component of MR-TRUS fusion is image registration. However, it is very challenging to obtain a robust automatic MR-TRUS registration due to the large appearance difference between the two imaging modalities. The work presented in this paper aims to tackle this problem by addressing two challenges: (i) the definition of a suitable similarity metric and (ii) the determination of a suitable optimization strategy. Methods: This work proposes the use of a deep convolutional neural network to learn a similarity metric for MR-TRUS registration. We also use a composite optimization strategy that explores the solution space in order to find a suitable initialization for the second-order optimization of the learned metric. Further, a multi-pass approach is used to smooth the metric for optimization. Results: The learned similarity metric outperforms classical mutual information as well as the state-of-the-art MIND feature-based methods. The results indicate that the overall registration framework has a large capture range. The proposed deep similarity metric based approach obtained a mean TRE of 3.86 mm (with an initial TRE of 16 mm) for this challenging problem. Conclusion: A similarity metric learned using a deep neural network can be used to assess the quality of any given image registration, and can be used in conjunction with the aforementioned optimization framework to perform automatic registration that is robust to poor initialization.
    Comment: To appear in IJCAR
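    To make the learned-metric idea concrete, the sketch below defines a small 3-D CNN that takes an MR patch and a TRUS patch and outputs an alignment score, which an outer optimizer could then maximize over transformation parameters. The architecture, patch size and naming are assumptions for illustration only, not the network described in the paper.

```python
import torch
import torch.nn as nn

class PairSimilarityNet(nn.Module):
    """Toy 3-D CNN that scores how well an MR patch and a TRUS patch align.

    Illustrates learning a similarity metric from aligned/misaligned patch
    pairs; architecture and training details are assumed, not the paper's.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, mr_patch, trus_patch):
        # Stack the two modalities as channels: (B, 2, D, H, W).
        x = torch.cat([mr_patch, trus_patch], dim=1)
        x = self.features(x).flatten(1)
        return self.head(x)  # higher score = better alignment (after training)

# Hypothetical usage: score a randomly initialised pair of 32^3 patches.
if __name__ == "__main__":
    net = PairSimilarityNet()
    mr = torch.randn(1, 1, 32, 32, 32)
    trus = torch.randn(1, 1, 32, 32, 32)
    print(net(mr, trus).item())
```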