16 research outputs found

    A generative probability model of joint label fusion for multi-atlas based brain segmentation

    Automated labeling of anatomical structures in medical images is important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate possible misalignment when registering atlases to the target image. However, the weights used to fuse labels from the registered atlases are generally computed independently, and thus cannot prevent ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated from simple patch similarity alone, which does not necessarily yield an optimal solution for label fusion. To address these limitations, we propose a generative probability model that describes the label fusion procedure in a multi-atlas scenario, with the goal of labeling each point in the target image using the most representative atlas patches that also show the greatest unanimity in labeling the underlying point correctly. Specifically, a sparsity constraint is imposed upon the label fusion weights in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risk of including misleading atlas patches. Labeling unanimity among atlas patches is achieved by exploring their dependencies: we model these dependencies as the joint probability of each pair of atlas patches correctly predicting the labels, by analyzing the correlation of their morphological error patterns as well as the labeling consensus among atlases. The patch dependencies are further recursively updated based on the latest labeling results to correct possible labeling errors, which naturally fits the Expectation-Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved in comparison with the conventional patch-based labeling method, indicating the potential of the proposed method for future clinical studies.
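
    As a rough illustration of the sparsity-constrained fusion step described above (a simplification, not the authors' full generative EM model), the Python sketch below selects a few atlas patches via an L1-penalized regression and fuses their labels by accumulated weight; the solver choice, penalty value, and toy data are all assumptions.

        # Minimal sketch of sparsity-constrained patch-based label fusion.
        # Assumes atlas patches are rows of A (n_patches x patch_dim), one
        # label per patch, and a target patch t. Not the paper's full model.
        import numpy as np
        from sklearn.linear_model import Lasso

        def sparse_label_fusion(A, labels, t, alpha=0.01):
            # Estimate sparse, non-negative reconstruction weights for t.
            lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False)
            lasso.fit(A.T, t)            # columns of A.T are atlas patches
            w = lasso.coef_              # one weight per atlas patch
            votes = {}                   # accumulate weight mass per label
            for wi, li in zip(w, labels):
                votes[li] = votes.get(li, 0.0) + wi
            return max(votes, key=votes.get)

        # Toy usage: 50 atlas patches of dimension 27 (3x3x3 voxels).
        rng = np.random.default_rng(0)
        A = rng.normal(size=(50, 27))
        labels = rng.integers(0, 2, size=50)
        t = A[3] + 0.05 * rng.normal(size=27)   # target near atlas patch 3
        print(sparse_label_fusion(A, labels, t))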

    Robust multi-atlas label propagation by deep sparse representation

    Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in current state-of-the-art approaches is that the image patch at a target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images; the label at the target point can therefore be determined by fusing the labels of atlas image patches with similar anatomical structures. However, this assumption does not always hold in label fusion, since (1) the image content within the patch may be corrupted by noise and artifacts, and (2) the distribution of morphometric patterns among atlas patches might be unbalanced, such that the majority patterns dominate the label fusion result over minority patterns. Violating these basic assumptions can significantly undermine label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches sharing each label. We then replace the conventional flat, shallow dictionary with a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patch-wise residual information at different scales. Label fusion then follows the representation consensus across the representative dictionaries. The representation of the target patch in each group is iteratively optimized by using the representative atlas patches in that label-specific dictionary exclusively to match the principal patterns, while using all residual patterns across groups collaboratively to handle the case where some groups lack certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as the basal ganglia and brainstem structures, compared to counterpart label fusion methods.
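
    A minimal sketch of the label-specific grouping idea, under the assumption that each label's atlas patches form their own dictionary and the target patch takes the label of the group with the smallest reconstruction residual; the paper's deep residual layers and collaborative terms are omitted, and ridge regression stands in for sparse coding.

        # Compete label-specific dictionaries by reconstruction residual.
        import numpy as np

        def label_by_group_residual(groups, t, lam=0.1):
            # groups: dict mapping label -> (patch_dim x n_patches) matrix
            best_label, best_res = None, np.inf
            for label, D in groups.items():
                # Ridge-regularized least squares as a cheap stand-in
                # for the sparse coding used in the actual method.
                G = D.T @ D + lam * np.eye(D.shape[1])
                w = np.linalg.solve(G, D.T @ t)
                res = np.linalg.norm(t - D @ w)
                if res < best_res:
                    best_label, best_res = label, res
            return best_label

        rng = np.random.default_rng(1)
        groups = {0: rng.normal(size=(27, 20)), 1: rng.normal(size=(27, 20))}
        t = groups[1][:, 4] + 0.05 * rng.normal(size=27)
        print(label_by_group_residual(groups, t))   # expected: 1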

    Progressive multi-atlas label fusion by dictionary evolution

    Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and the latent label of the input image patch is then predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between patch appearance in the image domain and patch structure in the label domain, the representation coefficients estimated in the image domain may not be optimal for the final label fusion, reducing labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of the representation coefficients from the image domain to the label domain. Our multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. Experimental results show that our progressive label fusion method achieves more accurate hippocampal segmentation on the ADNI dataset than counterpart methods using only a single-layer static dictionary.
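
    The layer-by-layer transition can be pictured with the following hedged sketch: representation coefficients are re-estimated against intermediate dictionaries that interpolate from an image-domain dictionary toward a label-domain dictionary. The linear blending schedule and ridge solver below are illustrative assumptions, not the paper's construction.

        # Progressively steer coefficients from image to label domain.
        import numpy as np

        def progressive_fusion(D_img, D_lab, t_img, n_layers=4, lam=0.1):
            recon, w = t_img, None
            for k in range(n_layers):
                a = k / max(n_layers - 1, 1)     # 0 -> image, 1 -> label
                D = (1 - a) * D_img + a * D_lab  # intermediate dictionary
                G = D.T @ D + lam * np.eye(D.shape[1])
                w = np.linalg.solve(G, D.T @ recon)
                recon = D @ w                    # carry to the next layer
            return D_lab @ w                     # fused label-domain patch

        rng = np.random.default_rng(2)
        D_img, D_lab = rng.normal(size=(27, 30)), rng.normal(size=(27, 30))
        print(progressive_fusion(D_img, D_lab, D_img[:, 7]).shape)  # (27,)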

    PSACNN: Pulse Sequence Adaptive Fast Whole Brain Segmentation

    With the advent of convolutional neural networks (CNNs), supervised learning methods are increasingly being used for whole brain segmentation. However, the large, manually annotated training datasets of labeled brain images required to train such supervised methods are frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol, and CNNs trained on such datasets fail to generalize to test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MRI parameters (across scanners, field strengths, receive coils, etc.) that affect image contrast. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of the pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences yields a wide variety of augmented training examples that help build an image-contrast-invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts using only T1-weighted training data. The segmentations generated are highly accurate, with state-of-the-art results (overall Dice overlap = 0.94), fast (approximately 45 seconds of run time), and consistent across a wide range of acquisition protocols.
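
    To make the forward-model idea concrete, here is a hedged sketch that synthesizes a training image from proton density, T1, and T2* parameter maps using the standard spoiled gradient-echo (FLASH) steady-state signal equation; PSACNN's actual per-sequence approximate models differ, and the sampled parameter ranges here are placeholders.

        # Synthesize contrast-varied training images from parameter maps.
        import numpy as np

        def flash_signal(pd, t1, t2s, tr, te, flip_deg):
            a = np.deg2rad(flip_deg)
            e1 = np.exp(-tr / t1)
            return (pd * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1)
                    * np.exp(-te / t2s))

        def synth_training_image(pd_map, t1_map, t2s_map, rng):
            tr = rng.uniform(10e-3, 40e-3)   # repetition time, seconds
            te = rng.uniform(2e-3, 15e-3)    # echo time, seconds
            flip = rng.uniform(10, 40)       # flip angle, degrees
            return flash_signal(pd_map, t1_map, t2s_map, tr, te, flip)

        rng = np.random.default_rng(3)
        pd = np.full((4, 4), 0.8)            # toy 2D parameter maps
        t1 = np.full((4, 4), 1.0)            # seconds
        t2s = np.full((4, 4), 0.06)          # seconds
        img = synth_training_image(pd, t1, t2s, rng)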

    Concatenated spatially-localized random forests for hippocampus labeling in adult and infant MR brain images

    Automatic labeling of the hippocampus in brain MR images is in high demand, as it plays an important role in imaging-based brain studies. However, accurate labeling of the hippocampus is still challenging, partially due to the ambiguous intensity boundary between the hippocampus and the surrounding anatomies. In this paper, we propose a concatenated set of spatially-localized random forests for multi-atlas-based hippocampus labeling of adult and infant brain MR images. The contribution of our work is two-fold. First, each forest classifier is trained to label just a specific sub-region of the hippocampus, thus enhancing labeling accuracy. Second, a novel forest selection strategy is proposed, such that each voxel in the test image can automatically select a set of optimal forests and then dynamically fuse their respective outputs to determine the final label. Furthermore, we enhance the spatially-localized random forests with the auto-context strategy, so that our learning framework can gradually refine the tentative labeling result for better performance. Experiments on large datasets of both adult and infant brain MR images show that our method scales well, segmenting the hippocampus accurately and efficiently.
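
    A hedged sketch of one auto-context round for a single sub-region's forest appears below: the previous round's class-probability map is appended to the appearance features before retraining. The feature construction and forest settings are simplifications, not the paper's configuration.

        # One auto-context round with a spatially-localized forest.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def autocontext_round(X, y, prev_prob=None, n_trees=50, seed=0):
            # X: (n_voxels, n_features) appearance features, one sub-region.
            feats = X if prev_prob is None else np.hstack([X, prev_prob])
            rf = RandomForestClassifier(n_estimators=n_trees,
                                        random_state=seed)
            rf.fit(feats, y)
            return rf, rf.predict_proba(feats)  # probs feed the next round

        rng = np.random.default_rng(4)
        X = rng.normal(size=(200, 10))
        y = (X[:, 0] > 0).astype(int)
        rf1, p1 = autocontext_round(X, y)
        rf2, p2 = autocontext_round(X, y, prev_prob=p1)  # context round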

    Multiatlas-Based Segmentation Editing With Interaction-Guided Patch Selection and Label Fusion

    We propose a novel multi-atlas based segmentation method to address the segmentation-editing scenario, where an incomplete segmentation is given along with a set of existing reference label images (used as atlases). Unlike previous multi-atlas based methods, which depend solely on appearance features, we incorporate interaction-guided constraints to find appropriate atlas label patches in the reference label set and derive their weights for label fusion. Specifically, user interactions provided on the erroneous parts are first divided into multiple local combinations. For each combination, the atlas label patches that match both the interactions and the previous segmentation are identified. The segmentation is then updated through voxel-wise fusion of the selected atlas label patches, with weights derived from the distance of each underlying voxel to the interactions. Since atlas label patches matched to different local combinations are used in the fusion step, our method can accommodate various local shape variations during the segmentation update, even with only limited atlas label images and user interactions. Moreover, since our method depends neither on image appearance nor on sophisticated learning steps, it can easily be applied to general editing problems. To demonstrate its generality, we apply it to editing segmentations of the CT prostate, CT brainstem, and MR hippocampus. Experimental results show that our method outperforms existing editing methods on all three data sets.
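
    The distance-derived weighting can be sketched as follows, under the assumption that each selected atlas label patch is tied to one interaction and weighted by a Gaussian kernel of the voxel-to-interaction distance; the kernel form and sigma are illustrative, not the paper's exact definition.

        # Weight atlas label patches by proximity to user interactions.
        import numpy as np

        def interaction_weights(voxel_xyz, interaction_xyz, sigma=5.0):
            d = np.linalg.norm(interaction_xyz - voxel_xyz, axis=1)
            return np.exp(-(d ** 2) / (2 * sigma ** 2))

        def fuse_labels(patch_labels, weights):
            # patch_labels: candidate labels at this voxel, one per patch.
            votes = np.bincount(patch_labels, weights=weights)
            return int(np.argmax(votes))

        voxel = np.array([10.0, 12.0, 8.0])
        clicks = np.array([[11.0, 12.0, 8.0], [30.0, 5.0, 2.0]])
        w = interaction_weights(voxel, clicks)
        print(fuse_labels(np.array([1, 0]), w))   # nearby click wins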

    Automatic Optimum Atlas Selection for Multi-Atlas Image Segmentation using Joint Label Fusion

    Multi-atlas image segmentation using label fusion is one of the most accurate state-of-the-art segmentation techniques available for biomedical imaging applications. Motivated by the goals of higher segmentation accuracy and lower computational cost in the face of continuously growing atlas data sizes, a robust framework for optimally selecting atlases for label fusion is vital. Although some works hold that atlas selection is not critical for weighted label fusion techniques (Sabuncu, M. R. et al., 2010 [1]), others have shown that appropriate atlas selection has several merits and can improve multi-atlas segmentation accuracy (Aljabar et al., 2009 [2]; Van de Velde et al., 2016 [27]). This thesis proposes an automatic Optimum Atlas Selection (OAS) framework, applied before the label fusion step, that improves segmentation Dice similarity scores with the Joint Label Fusion (JLF) implementation of Wang et al., 2013 [3, 26]. A selection criterion based on image similarity to a global majority-voting fusion output is employed to select an optimum number of atlases, out of all available atlases, for the label fusion step. In leave-one-out validation tests, the OAS framework led to significant improvements in segmentation accuracy on MR head images of aphasic stroke patients: 1.79% (p = 0.005520) using a set of 7 homogeneous stroke atlases and 0.5% (p = 0.000656) using 19 inhomogeneous atlases. Further, using a comparatively limited atlas data size (19 atlases) composed of normal and stroke head MR images, t-tests showed no statistically significant difference in segmentation Dice scores between the proposed OAS protocol and a known automatic protocol combining Statistical Parametric Mapping (SPM) with a touch-up algorithm [4] (p = 0.49417). We conclude that the proposed OAS framework is an effective and suitable atlas selection protocol for multi-atlas image segmentation: it improves brain MR image segmentation accuracy, is comparable in performance to known segmentation algorithms, and can reduce computational costs on large atlas data sets. As future work, increasing the atlas data size and adopting a more robust approach for determining the optimum selection threshold (and the corresponding number of atlases used in label fusion) could further enhance overall segmentation accuracy. Furthermore, an unbiased comparison of the proposed OAS framework against other segmentation algorithms requires truly manually segmented ground-truth MR images and labels.
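
    A hedged sketch of the selection criterion follows: atlases are ranked by the agreement of their propagated labels with a global majority-voting fusion, and the top-ranked ones are kept for the joint label fusion step. The Dice-based score and fixed cutoff here are assumptions standing in for the thesis's fusion-output similarity comparison and threshold.

        # Rank atlases against a majority-vote reference and keep the best.
        import numpy as np

        def majority_vote(label_maps):
            stacked = np.stack(label_maps)          # (n_atlases, ...) binary
            return (stacked.mean(axis=0) >= 0.5).astype(int)

        def dice(a, b):
            inter = np.logical_and(a == 1, b == 1).sum()
            return 2.0 * inter / ((a == 1).sum() + (b == 1).sum() + 1e-9)

        def select_atlases(label_maps, n_keep):
            ref = majority_vote(label_maps)
            scores = [dice(m, ref) for m in label_maps]
            return np.argsort(scores)[::-1][:n_keep]  # kept atlas indices

        rng = np.random.default_rng(5)
        maps = [(rng.random((8, 8)) > 0.4).astype(int) for _ in range(6)]
        print(select_atlases(maps, n_keep=3))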

    Multi-Granularity Whole-Brain Segmentation Based Functional Network Analysis Using Resting-State fMRI

    In this work, we systematically analyzed the effects of various nodal definitions, as determined by a multi-granularity whole-brain segmentation scheme, on the topological architecture of the human brain functional network, using resting-state functional magnetic resonance imaging data from 19 healthy young subjects. A number of functional networks were created with their nodes defined according to two types of anatomical definitions (Type I and Type II), each consisting of five granularity levels of whole-brain segmentation, with the levels linked through ontology-based, hierarchical structural relationships. Topological properties were computed for each network and then compared across levels within the same segmentation type, as well as between Type I and Type II. Several network architecture patterns were observed: (1) as the granularity changes, the absolute values of each node's nodal degree and nodal betweenness change accordingly, but the relative values within a single network do not change considerably; (2) the average nodal degree is generally affected by the sparsity level of the network, whereas the other topological properties are more specifically affected by the nodal definitions; (3) within the same ontology relationship type, the network becomes more efficient at information propagation as the granularity decreases; (4) the small-worldness we observe is an intrinsic property of the brain's resting-state functional network, independent of ontology type and granularity level. Furthermore, we validated these conclusions and measured the reproducibility of this multi-granularity network analysis pipeline using another dataset of 49 healthy young subjects who had been scanned twice.
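
    For readers unfamiliar with this kind of pipeline, a hedged sketch of the nodal-metric computation is given below: a regional correlation matrix is thresholded to a target sparsity, then nodal degree and betweenness are computed with networkx. The node count, sparsity level, and random data are placeholders, not the study's settings.

        # Build a sparsity-thresholded functional network and score nodes.
        import numpy as np
        import networkx as nx

        def build_network(ts, sparsity=0.2):
            # ts: (n_timepoints, n_nodes) regional mean time series.
            corr = np.corrcoef(ts.T)
            np.fill_diagonal(corr, 0.0)
            n = corr.shape[0]
            n_edges = int(sparsity * n * (n - 1) / 2)
            iu = np.triu_indices(n, k=1)
            cut = np.sort(corr[iu])[::-1][n_edges - 1]  # strongest edges
            G = nx.Graph()
            G.add_nodes_from(range(n))
            for i, j in zip(*iu):
                if corr[i, j] >= cut:
                    G.add_edge(i, j)
            return G

        rng = np.random.default_rng(6)
        G = build_network(rng.normal(size=(120, 30)))
        degree = dict(G.degree())
        betweenness = nx.betweenness_centrality(G)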

    Multi-Atlas Segmentation of Biomedical Images: A Survey

    Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing …