Integrating semi-supervised label propagation and random forests for multi-atlas based hippocampus segmentation
A novel multi-atlas based image segmentation method is proposed that integrates a semi-supervised label propagation method and a supervised random forests method in a pattern-recognition-based label fusion framework. The semi-supervised label propagation method takes into account the local and global appearance of the images to be segmented, and segments them by propagating reliable segmentation results obtained by the supervised random forests method. In particular, the random forests method is used to train a regression model, based on image patches of atlas images, for each voxel of the images to be segmented. The regression model yields reliable segmentation results that guide the label propagation. The proposed method has been compared with state-of-the-art multi-atlas based image segmentation methods for segmenting the hippocampus in MR images. The experimental results demonstrate that our method obtains superior segmentation performance.
Comment: Accepted paper in IEEE International Symposium on Biomedical Imaging (ISBI), 201
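The coupling described above can be sketched in miniature: a classifier's high-confidence predictions act as fixed seeds, and the remaining voxels iteratively adopt the labels of their neighbours. The function name, the 0.8 reliability threshold, and the toy chain graph below are all illustrative, not the paper's actual formulation:

```python
import numpy as np

def propagate_labels(confidence, seed_labels, adjacency, n_iter=10):
    """Semi-supervised label propagation: voxels whose classifier
    confidence exceeds a threshold act as fixed seeds; the remaining
    voxels repeatedly adopt the neighbourhood average of the labels
    (adjacency rows are assumed to be normalised to sum to one)."""
    labels = seed_labels.astype(float).copy()
    fixed = confidence >= 0.8            # hypothetical reliability threshold
    for _ in range(n_iter):
        labels = adjacency @ labels      # diffuse labels to neighbours
        labels[fixed] = seed_labels[fixed]  # clamp the reliable seeds
    return (labels > 0.5).astype(int)    # binarise the soft labels
```

On a five-node chain with reliable foreground and background seeds at the two ends, the interior nodes settle to the nearer seed's label.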
Simultaneous segmentation and grading of anatomical structures for patient's classification: application to Alzheimer's Disease
Data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (www.loni.ucla.edu/ADNI). In this paper, we propose an innovative approach to robustly and accurately detect Alzheimer's disease (AD) based on the distinction of specific atrophic patterns of anatomical structures such as the hippocampus (HC) and entorhinal cortex (EC). The proposed method simultaneously performs segmentation and grading of structures to efficiently capture the anatomical alterations caused by AD. Known as SNIPE (Scoring by Non-local Image Patch Estimator), the novel proposed grading measure is based on a nonlocal patch-based framework and estimates the similarity of the patch surrounding the voxel under study to all the patches present in different training populations. In this study, the training library was composed of two populations: 50 cognitively normal (CN) subjects and 50 patients with AD, randomly selected from the ADNI database. During our experiments, the classification accuracy of patients (CN vs. AD) using several biomarkers was compared: HC and EC volumes, the grade of these structures, and finally the combination of their volume and their grade. Tests were completed in a leave-one-out framework using discriminant analysis. First, we showed that biomarkers based on HC provide better classification accuracy than biomarkers based on EC. Second, we demonstrated that structure grading is a more powerful measure than structure volume for distinguishing the two populations, with a classification accuracy of 90%. Finally, by adding the ages of subjects in order to better separate age-related structural changes from disease-related anatomical alterations, SNIPE obtained a classification accuracy of 93%.
Data collection and sharing for this project were funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: Abbott, AstraZeneca AB, Bayer Schering Pharma AG, Bristol-Myers Squibb, Eisai Global Clinical Development, Elan Corporation, Genentech, GE Healthcare, GlaxoSmithKline, Innogenetics, Johnson and Johnson, Eli Lilly and Co., Medpace, Inc., Merck and Co., Inc., Novartis AG, Pfizer Inc, F. Hoffman-La Roche, Schering-Plough, Synarc, Inc., as well as non-profit partners the Alzheimer's Association and Alzheimer's Drug Discovery Foundation, with participation from the U.S. Food and Drug Administration. Private sector contributions to ADNI are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of California, Los Angeles. This research was also supported by NIH grants P30AG010129, K01 AG030514, and the Dana Foundation.
Coupé, P.; Eskildsen, S.F.; Manjón Herrera, J.V.; Fonov, V.S.; Collins, D.L.; Alzheimer's Disease Neuroimaging Initiative (2012). Simultaneous segmentation and grading of anatomical structures for patient's classification: application to Alzheimer's Disease. NeuroImage 59(4):3736-3747. https://doi.org/10.1016/j.neuroimage.2011.10.080
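The grading measure lends itself to a compact sketch. Following the nonlocal patch-based weighting the abstract describes (Gaussian weights on patch distances, with votes of +1 and -1 for the two training populations), a hypothetical `snipe_grade` could look like this; the parameter names and the bandwidth `h` are illustrative:

```python
import numpy as np

def snipe_grade(target_patch, library_patches, library_status, h=1.0):
    """Nonlocal patch-based grading in the spirit of SNIPE: compare the
    target patch with every patch in a two-population training library;
    similarity-weighted votes (+1 for one population, -1 for the other)
    yield a grade in [-1, 1]."""
    d2 = np.sum((library_patches - target_patch) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)     # nonlocal-means style Gaussian weights
    return float(np.sum(w * library_status) / np.sum(w))
```

A patch resembling one population grades near +1 or -1; a patch equally similar to both grades near zero.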
Patch-based segmentation with spatial context for medical image analysis
Accurate segmentations in medical imaging play a crucial role in many applications, from patient diagnosis to population studies. As the amount of data generated from medical images increases, the ability to perform this task without human intervention becomes ever more desirable. One approach, known broadly as atlas-based segmentation, is to propagate labels from images which have already been manually labelled by clinical experts. Methods using this approach have been shown to be effective in many applications, demonstrating great potential for automatic labelling of large datasets. However, these methods usually require the use of image registration and are dependent on its outcome; any registration errors are propagated to the segmentation process and are likely to have an adverse effect on segmentation accuracy. Recently, patch-based methods have been shown to allow a relaxation of the required image alignment whilst achieving similar results. In general, these methods label each voxel of a target image by comparing the image patch centred on the voxel with neighbouring patches from an atlas library and assigning the most likely label according to the closest matches. The main contributions of this thesis focus on this approach, providing accurate segmentation results whilst minimising the dependency on registration quality. In particular, this thesis proposes a novel kNN patch-based segmentation framework, which utilises both intensity and spatial information, and explores the use of spatial context in a diverse range of applications. The proposed methods extend the potential for patch-based segmentation to tolerate registration errors by redefining the "locality" for patch selection and comparison, whilst also allowing similar-looking patches from different anatomical structures to be differentiated. The methods are evaluated on a wide variety of image datasets, ranging from the brain to the knees, demonstrating their potential with results which are competitive with state-of-the-art techniques.
Open Access
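The voxel-labelling rule described above, comparing the target patch against an atlas library and taking the majority label of the closest matches, can be sketched as follows. Concatenating (possibly weighted) voxel coordinates onto the patch intensity vector is one simple way to combine intensity and spatial information; all names here are illustrative, not the thesis's actual framework:

```python
import numpy as np

def knn_patch_label(target_feat, atlas_feats, atlas_labels, k=3):
    """k-nearest-neighbour patch label fusion: match the target feature
    vector (patch intensities augmented with spatial coordinates)
    against the atlas library and return the majority label of the
    k closest patches."""
    d = np.linalg.norm(atlas_feats - target_feat, axis=1)
    nearest = atlas_labels[np.argsort(d)[:k]]        # labels of k closest
    values, counts = np.unique(nearest, return_counts=True)
    return int(values[np.argmax(counts)])            # majority vote
```

Including coordinates in the feature vector lets two patches with similar intensities but distant anatomical locations be told apart, which is the role spatial context plays in the thesis.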
Progressive multi-atlas label fusion by dictionary evolution
Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results on the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary.
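The single-layer baseline the abstract starts from, representing the target patch over an atlas patch dictionary and then reusing the coefficients on the label dictionary, can be sketched with a ridge-regularised least-squares solve. The paper's actual coding scheme and its multi-layer dynamic extension are not reproduced here; the regulariser, the non-negativity clip, and all names are assumptions made for the sake of a runnable sketch:

```python
import numpy as np

def dictionary_label_fusion(target_patch, image_dict, label_dict, reg=0.01):
    """Single-layer patch-based label fusion: represent the target patch
    as a linear combination of atlas patches (columns of image_dict) via
    ridge-regularised least squares, then reuse the same coefficients to
    fuse the per-atlas labels in label_dict into a soft label."""
    D = image_dict
    # coefficients alpha minimising ||D a - p||^2 + reg ||a||^2
    A = D.T @ D + reg * np.eye(D.shape[1])
    alpha = np.linalg.solve(A, D.T @ target_patch)
    alpha = np.clip(alpha, 0, None)      # keep non-negative fusion weights
    alpha /= alpha.sum()                 # normalise to a convex combination
    return float(label_dict @ alpha)     # fused (soft) label
```

The "gap" the abstract refers to is visible in this sketch: `alpha` is fitted purely in the image domain, yet it is applied unchanged in the label domain.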
Hierarchical multi-atlas label fusion with multi-scale feature representation and label-specific patch partition
Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images; after registration, a label is assigned to each target point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately captures the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure, which may severely affect the fidelity of the patch similarity measurement and thus fail to adequately capture the complex tissue appearance patterns of the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information, increasing the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible.
Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical, coarse-to-fine iterative approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0-tesla MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than those of several well-known state-of-the-art label fusion methods.
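The first contribution, a multi-scale patch descriptor mixing raw local intensities with summaries of larger neighbourhoods, might be sketched like this. A 1-D window stands in for a 3-D cube for brevity, and the function name and radii are illustrative, not the paper's exact construction:

```python
import numpy as np

def multiscale_feature(image, center, radii=(1, 2)):
    """Multi-scale patch descriptor: extract the window around the voxel
    at each radius, keeping raw intensities at the finest scale and
    summarising each coarser scale by its mean, so the feature vector
    encodes both local and semi-local image information."""
    feats = []
    for i, r in enumerate(radii):
        lo, hi = center - r, center + r + 1
        window = image[max(lo, 0):hi]            # clip at the boundary
        if i == 0:
            feats.extend(window.tolist())        # finest scale: raw values
        else:
            feats.append(float(window.mean()))   # coarser scale: summary
    return np.array(feats)
```

Distances between such vectors weigh exact local appearance and broader context at the same time, which is what makes the patch similarity measurement more discriminative.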
Brain segmentation based on multi-atlas guided 3D fully convolutional network ensembles
In this study, we proposed and validated a multi-atlas guided 3D fully
convolutional network (FCN) ensemble model (M-FCN) for segmenting brain regions
of interest (ROIs) from structural magnetic resonance images (MRIs). One major
limitation of existing state-of-the-art 3D FCN segmentation models is that they
often apply image patches of fixed size throughout training and testing, which
may miss some complex tissue appearance patterns of different brain ROIs. To
address this limitation, we trained a 3D FCN model for each ROI using patches
of adaptive size and embedded outputs of the convolutional layers in the
deconvolutional layers to further capture the local and global context
patterns. In addition, with the introduction of multi-atlas based guidance in M-FCN, our segmentation was generated by combining image and label information, which makes it highly robust. To reduce over-fitting of the FCN model on
the training data, we adopted an ensemble strategy in the learning procedure.
Evaluation was performed on two brain MRI datasets, aiming respectively at
segmenting 14 subcortical and ventricular structures and 54 brain ROIs. The
segmentation results of the proposed method were compared with those of a
state-of-the-art multi-atlas based segmentation method and an existing 3D FCN
segmentation model. Our results suggested that the proposed method had superior segmentation performance.
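The ensemble strategy can be illustrated independently of any particular network architecture: a common fusion rule is to average the per-model class-probability maps and take the voxel-wise argmax, which damps the idiosyncratic errors of any single model. The function below is a generic sketch with toy shapes, not the paper's M-FCN:

```python
import numpy as np

def ensemble_segmentation(prob_maps):
    """Ensemble fusion of segmentation models: prob_maps has shape
    (n_models, n_classes, ...); averaging over models and taking the
    voxel-wise argmax over classes yields the fused label map."""
    mean_probs = np.mean(prob_maps, axis=0)   # average over the ensemble
    return np.argmax(mean_probs, axis=0)      # most probable class per voxel
```

With two 2-class models over three voxels, a voxel that one model mislabels can still be recovered when the other model is confidently correct.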