Coupled non-parametric shape and moment-based inter-shape pose priors for multiple basal ganglia structure segmentation
This paper presents a new active contour-based, statistical method for simultaneous volumetric segmentation of multiple subcortical structures in the brain. In biological tissues, such as the human brain, neighboring structures exhibit co-dependencies which can aid in segmentation, if properly analyzed and modeled. Motivated by this observation, we formulate the segmentation problem as a maximum a posteriori estimation problem, in which we incorporate statistical prior models on the shapes and inter-shape (relative) poses of the structures of interest. This provides a principled mechanism to bring high level information about the shapes and the relationships of anatomical structures into the segmentation problem. For learning the prior densities we use a nonparametric multivariate kernel density estimation framework. We combine these priors with data in a variational framework and develop an active contour-based iterative segmentation algorithm.
We test our method on the problem of volumetric segmentation of basal ganglia structures in magnetic resonance (MR) images.
We present a set of 2D and 3D experiments as well as a quantitative performance analysis. In addition, we compare our approach to several existing segmentation methods and demonstrate the improvements it provides in terms of segmentation accuracy.
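The nonparametric prior learning described above can be sketched as a Parzen (kernel) density estimate over training shape or pose parameter vectors. Everything below, including the function name, the Gaussian kernel choice, and the fixed bandwidth, is illustrative and not the authors' implementation:

```python
import numpy as np

def kde_log_prior(x, training_samples, bandwidth=1.0):
    """Parzen (kernel density) log-prior for a shape/pose parameter
    vector x, estimated from training samples, as a sketch of the
    nonparametric multivariate kernel density framework.
    All names here are illustrative assumptions."""
    diffs = training_samples - x                      # (N, d) residuals
    sq = np.sum(diffs ** 2, axis=1) / (2 * bandwidth ** 2)
    d = training_samples.shape[1]
    norm = (2 * np.pi * bandwidth ** 2) ** (d / 2)    # Gaussian normalizer
    # log of the average of Gaussian kernels centred at each sample
    return np.log(np.mean(np.exp(-sq) / norm))
```

In a MAP formulation, a log-prior of this form would be added to a data-fidelity term and the sum maximized iteratively, e.g. by the active contour evolution the paper describes.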
Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review
The medical image analysis field has traditionally been focused on the
development of organ- and disease-specific methods. Recently, interest in
the development of more comprehensive computational anatomical models has
grown, leading to the creation of multi-organ models. Multi-organ approaches,
unlike traditional organ-specific strategies, incorporate inter-organ relations
into the model, thus leading to a more accurate representation of the complex
human anatomy. Inter-organ relations are not only spatial, but also functional
and physiological. Over the years, the strategies proposed to efficiently
model multi-organ structures have evolved from simple global modeling to
more sophisticated approaches such as sequential, hierarchical, or machine
learning-based models. In this paper, we present a review of the state of the
art on multi-organ analysis and the associated computational anatomy methodology. The
manuscript follows a methodology-based classification of the different
techniques available for the analysis of multiple organs and anatomical
structures, from techniques using point distribution models to the most recent
deep learning-based approaches. With more than 300 papers included in this
review, we reflect on the trends and challenges of the field of computational
anatomy, the particularities of each anatomical region, and the potential of
multi-organ analysis to increase the impact of medical imaging applications
on the future of healthcare.
Comment: Paper under review
Segmentation-by-Detection: A Cascade Network for Volumetric Medical Image Segmentation
We propose an attention mechanism for 3D medical image segmentation. The
method, named segmentation-by-detection, is a cascade of a detection module
followed by a segmentation module. The detection module enables a region of
interest to come to attention and produces a set of object region candidates
which are further used as an attention model. Rather than dealing with the
entire volume, the segmentation module distills the information from the
potential region. This scheme is an efficient solution for volumetric data as
it reduces the influence of surrounding noise, which is especially important
for medical data with a low signal-to-noise ratio. Experimental results on 3D
ultrasound data of the femoral head show the superiority of the proposed method
when compared with a standard fully convolutional network such as the U-Net.
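The cascade idea is straightforward to sketch: a detection stage proposes a region of interest, and the segmentation stage only ever sees the cropped volume. The toy thresholding detector and the `segment_fn` callback below are placeholders (assumptions) standing in for the paper's learned modules:

```python
import numpy as np

def detect_roi(volume, threshold=0.5):
    """Toy detection stage: bounding box of voxels above a threshold.
    Stands in for the detection network described above (illustrative)."""
    idx = np.argwhere(volume > threshold)
    lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
    return tuple(slice(a, b) for a, b in zip(lo, hi))

def segment_by_detection(volume, segment_fn, threshold=0.5):
    """Cascade sketch: segment only the detected region, then paste the
    result back into a full-size mask, so the surrounding noise never
    reaches the segmentation stage."""
    roi = detect_roi(volume, threshold)
    mask = np.zeros(volume.shape, dtype=bool)
    mask[roi] = segment_fn(volume[roi])   # segmentation sees only the crop
    return mask
```

The design point is that the segmentation module's receptive field is restricted to the candidate region, which is what makes the scheme efficient on large, noisy volumes.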
Brain MR Image Segmentation: From Multi-Atlas Method To Deep Learning Models
Quantitative analysis of brain structures on magnetic resonance (MR) images plays a crucial role in examining brain development and abnormality, as well as in aiding treatment planning. Although manual delineation is commonly considered the gold standard, it suffers from low efficiency and inter-rater variability. Therefore, developing automatic anatomical segmentation of the human brain is important for providing a tool for quantitative analysis (e.g., volume measurement, shape analysis, cortical surface mapping). Despite a large number of existing techniques, the automatic segmentation of brain MR images remains challenging due to the complexity of the brain's anatomical structures and the great inter- and intra-individual variability among them. To address these challenges, four methods are proposed in this thesis. The first proposes a novel label fusion scheme for multi-atlas segmentation: a two-stage majority voting scheme is developed to address the over-segmentation problem in hippocampus segmentation of brain MR images. The second develops a supervoxel graphical model for whole-brain segmentation, in order to relieve the dependence on complicated pairwise registration in multi-atlas segmentation methods. Based on the assumption that voxels within a supervoxel share the same label, the proposed method converts the voxel labeling problem into a supervoxel labeling problem, which is solved by maximum a posteriori (MAP) inference in a Markov random field (MRF) defined on supervoxels. The third incorporates an attention mechanism into convolutional neural networks (CNNs), aiming to learn the spatial dependencies between the shallow and deep layers of a CNN and to aggregate the attended local features with high-level features for more precise segmentation results.
The fourth method takes advantage of the success of CNNs in computer vision, combines the strength of graphical models with CNNs, and integrates them into an end-to-end trainable network. The proposed methods are evaluated on public MR image datasets, such as MICCAI2012, LPBA40, and IBSR. Extensive experiments demonstrate the effectiveness and superior performance of the four proposed methods compared with other state-of-the-art methods.
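The base operation underlying the first method's label fusion is plain majority voting over propagated atlas labels. The sketch below shows only that base vote; the thesis's two-stage refinement for curbing over-segmentation is not reproduced here:

```python
import numpy as np

def majority_vote(atlas_labels):
    """Plain majority-vote label fusion: each voxel takes the most
    frequent label among the labels propagated from the registered
    atlases. A sketch of the baseline that the thesis's two-stage
    scheme builds on; names are illustrative."""
    labels = np.asarray(atlas_labels)            # (n_atlases, *image_shape)
    flat = labels.reshape(labels.shape[0], -1)   # one column per voxel
    fused = np.empty(flat.shape[1], dtype=labels.dtype)
    for j in range(flat.shape[1]):
        vals, counts = np.unique(flat[:, j], return_counts=True)
        fused[j] = vals[np.argmax(counts)]       # most frequent label wins
    return fused.reshape(labels.shape[1:])
```

With weighting by local atlas-to-target similarity this becomes weighted voting, the usual next step up from the baseline shown here.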
Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images
We present a novel kernel regression framework for smoothing scalar surface
data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel
constructed from the eigenfunctions, we formulate a new bivariate kernel
regression framework as a weighted eigenfunction expansion with the heat kernel
as the weights. The new kernel regression is mathematically equivalent to
isotropic heat diffusion, kernel smoothing, and the recently popular diffusion
wavelets. Unlike many previous partial differential equation-based approaches
involving diffusion, our approach represents the solution of the diffusion
analytically, avoiding numerical inaccuracy and slow convergence. The numerical
implementation is validated on a unit sphere using spherical harmonics. As an
illustration, we have applied the method in characterizing the localized growth
pattern of mandible surfaces obtained in CT images from subjects between ages 0
and 20 years by regressing the length of displacement vectors with respect to
the template surface.
Comment: Accepted in Medical Image Analysis
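The analytic representation of diffusion described above can be sketched in a discrete setting: expand the signal in the eigenfunctions of a Laplacian and damp the i-th coefficient by exp(-lambda_i * t). Using a symmetric graph Laplacian as a stand-in for the Laplace-Beltrami operator on a surface mesh is an assumption of this sketch:

```python
import numpy as np

def heat_kernel_smooth(f, laplacian, t):
    """Heat-kernel regression sketch: expand the signal f in the
    eigenfunctions of a discrete Laplacian and damp each spectral
    coefficient by exp(-lambda * t), which gives the diffusion
    solution analytically instead of by time-stepping.
    'laplacian' is any symmetric graph-Laplacian approximation of
    the Laplace-Beltrami operator (an assumption of this sketch)."""
    lam, psi = np.linalg.eigh(laplacian)   # eigenvalues, eigenfunctions
    coeffs = psi.T @ f                     # spectral coefficients <f, psi_i>
    return psi @ (np.exp(-lam * t) * coeffs)
```

At t = 0 the expansion reproduces f exactly; as t grows, high-frequency components are damped first, so the bandwidth of the equivalent kernel smoother is controlled by the single diffusion-time parameter t.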