
    Multi-Atlas based Segmentation of Multi-Modal Brain Images

    Brain image analysis plays a fundamental role in clinical and population-based epidemiological studies. Many brain-disorder studies involve quantitative interpretation of brain scans and, in particular, require accurate measurement and delineation of tissue volumes in the scans. Automatic segmentation methods have been proposed to provide reliable, accurate labelling through an automated procedure. Taking advantage of prior information about the brain's anatomy, provided by an atlas used as a reference model, can simplify the labelling process. Atlas-based segmentation becomes problematic if the atlas and the target image are not accurately aligned, or if the atlas does not appropriately represent the anatomical structure or region. Segmentation accuracy can be improved by utilising a group of atlases, but employing multiple atlases raises considerable issues when segmenting a new subject's brain image. Registering multiple atlases to the target scan, and fusing labels from the registered atlases, are challenging tasks for a population obtained from different modalities: image-intensity comparisons may no longer be valid, since image brightness can have highly differing meanings in different modalities. The focus here is on the problem of multi-modality, and methods are designed and developed to deal with this issue specifically in image registration and label fusion. To handle multi-modal image registration, two independent approaches are followed. First, a similarity measure is proposed based on comparing the self-similarity of each of the images to be aligned. Second, two methods are proposed to reduce the multi-modal problem to a mono-modal one by constructing representations that do not rely on image intensities. These structural representations use an un-decimated complex wavelet representation in one method, and a modified entropy-based approach in the other.
    To handle cross-modality label fusion, a method is proposed that weights atlases based on atlas-target similarity. This similarity is measured by a scale-based comparison that takes advantage of structural features captured from un-decimated complex wavelet coefficients. The proposed methods are assessed using simulated and real brain data from computed tomography images and different modes of magnetic resonance images. Experimental results reflect the superiority of the proposed methods over classical and state-of-the-art methods.
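
The weighting idea can be illustrated with a toy sketch. This is not the abstract's wavelet-based measure: here the per-atlas similarity scores are simply given numbers, the atlases are assumed already registered to the target, and the function name is hypothetical.

```python
import numpy as np

def weighted_label_fusion(atlas_labels, similarities):
    """Fuse per-atlas label maps using atlas-target similarity weights.

    atlas_labels : (n_atlases, H, W) integer label maps, assumed already
                   registered to the target image.
    similarities : (n_atlases,) non-negative similarity scores; the work
                   above derives these from wavelet-based structural
                   features, here they are just given numbers.
    """
    atlas_labels = np.asarray(atlas_labels)
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()                           # normalise the weights
    labels = np.unique(atlas_labels)
    # accumulate weighted votes per label, then pick argmax per voxel
    votes = np.stack([((atlas_labels == l).astype(float)
                       * w[:, None, None]).sum(axis=0) for l in labels])
    return labels[np.argmax(votes, axis=0)]

# toy example: three 2x2 "atlases"; the first is most similar to the target
a = np.array([[[0, 1], [1, 1]],
              [[0, 1], [1, 0]],
              [[1, 0], [0, 0]]])
fused = weighted_label_fusion(a, similarities=[0.6, 0.3, 0.1])
```

Majority voting is the special case of equal similarities; the point of the weighting is that a single dissimilar atlas cannot outvote more trustworthy ones.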

    Segmentation of image ensembles via latent atlases

    Spatial priors, such as probabilistic atlases, play an important role in MRI segmentation. However, the availability of comprehensive, reliable and suitable manual segmentations for atlas construction is limited. We therefore propose a method for joint segmentation of corresponding regions of interest in a collection of aligned images that does not require labeled training data. Instead, a latent atlas, initialized by at most a single manual segmentation, is inferred from the evolving segmentations of the ensemble. The algorithm is based on probabilistic principles but is solved using partial differential equations (PDEs) and energy minimization criteria. We evaluate the method on two datasets, segmenting subcortical and cortical structures in a multi-subject study and extracting brain tumors in a single-subject multi-modal longitudinal experiment. We compare the segmentation results to manual segmentations, when those exist, and to the results of a state-of-the-art atlas-based segmentation method. The quality of the results supports the latent atlas as a promising alternative when existing atlases are not compatible with the images to be segmented.
    Funding: National Institutes of Health (U.S.) (National Institute for Biomedical Imaging and Bioengineering/National Alliance for Medical Image Computing, U54-EB005149); National Institutes of Health (U.S.) (National Center for Research Resources/Neuroimaging Analysis Center, P41-RR13218); National Institutes of Health (U.S.) (National Institute of Neurological Disorders and Stroke, R01-NS051826); National Institutes of Health (U.S.) (National Center for Research Resources/Biomedical Informatics Research Network, U24-RR021382); National Science Foundation (U.S.) (CAREER Award 0642971); German Academy of Sciences Leopoldina (Fellowship LPDS 2009-10); Academy of Finland (Grant 133611).
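
The paper solves the model with level sets and PDE-based energy minimization; purely to illustrate the alternating idea (segment each image given the current atlas, then re-estimate the latent atlas from the evolving segmentations), a crude EM-like toy under strong assumptions (aligned scalar images in [0, 1], foreground brighter than background, hypothetical function name and threshold) might look like:

```python
import numpy as np

def latent_atlas_segment(images, n_iter=5):
    """Toy alternating scheme in the spirit of a latent-atlas model.

    images : (n, H, W) aligned scalar images in [0, 1], with the
             structure of interest assumed brighter than background.
    Alternates: (a) per-image segmentation = prior * intensity,
    thresholded; (b) latent atlas = mean of current segmentations.
    """
    images = np.asarray(images, dtype=float)
    atlas = np.full(images.shape[1:], 0.5)        # uninformative prior
    for _ in range(n_iter):
        seg = (atlas[None] * images) > 0.25       # E-like step
        atlas = seg.mean(axis=0)                  # M-like step
    return atlas, seg

# toy ensemble: two images, a bright structure at voxel (0, 0)
imgs = np.full((2, 2, 2), 0.05)
imgs[:, 0, 0] = 0.95
atlas, seg = latent_atlas_segment(imgs)
```

The key property mirrored here is that no pre-built atlas is needed: the spatial prior emerges from the ensemble itself.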

    HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation

    Recently, dense connections have attracted substantial attention in computer vision because they facilitate gradient flow and implicit deep supervision during training. In particular, DenseNet, which connects each layer to every other layer in a feed-forward fashion, has shown impressive performance in natural image classification tasks. We propose HyperDenseNet, a 3D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems. Each imaging modality has a path, and dense connections occur not only between pairs of layers within the same path, but also between layers across different paths. This contrasts with existing multi-modal CNN approaches, in which modeling several modalities relies entirely on a single joint layer (or level of abstraction) for fusion, typically either at the input or at the output of the network. The proposed network therefore has total freedom to learn more complex combinations between the modalities, within and between all levels of abstraction, which significantly increases its representational power. We report extensive evaluations over two different and highly competitive multi-modal brain tissue segmentation challenges, iSEG 2017 and MRBrainS 2013, with the former focusing on 6-month infant data and the latter on adult images. HyperDenseNet yielded significant improvements over many state-of-the-art segmentation networks, ranking at the top on both benchmarks. We further provide a comprehensive experimental analysis of feature re-use, which confirms the importance of hyper-dense connections in multi-modal representation learning. Our code is publicly available at https://www.github.com/josedolz/HyperDenseNet.
    Comment: Paper accepted at IEEE TMI in October 2018. The latest version updates the reference to the IEEE TMI paper which compares the submissions to the iSEG 2017 MICCAI Challenge.
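
The connectivity pattern can be sketched in a few lines. This is not the actual 3D network: layers are stand-in random linear maps with ReLU, and the function name and widths are hypothetical. What it does show is the defining property: at every depth, each modality path consumes the concatenation of all features produced so far in both paths, so the fused input grows with depth.

```python
import numpy as np

def hyper_dense_forward(x_a, x_b, n_layers=3, width=4, rng=None):
    """Sketch of hyper-dense connectivity between two modality paths.

    At layer l, each path takes the concatenation of ALL feature
    vectors produced so far in BOTH paths (plus the raw inputs),
    unlike a standard two-stream net where each path only sees its
    own history. Layers are toy random linear maps + ReLU.
    """
    rng = np.random.default_rng(rng)
    feats_a, feats_b = [np.asarray(x_a)], [np.asarray(x_b)]
    in_dims = []
    for _ in range(n_layers):
        shared = np.concatenate(feats_a + feats_b)   # hyper-dense input
        in_dims.append(shared.size)
        w_a = rng.standard_normal((width, shared.size))
        w_b = rng.standard_normal((width, shared.size))
        feats_a.append(np.maximum(w_a @ shared, 0))  # ReLU
        feats_b.append(np.maximum(w_b @ shared, 0))
    return feats_a[-1], feats_b[-1], in_dims

out_a, out_b, dims = hyper_dense_forward(np.ones(4), np.ones(4),
                                         n_layers=3, width=4, rng=0)
```

With 4-dimensional inputs and width-4 layers, the fused input grows 8, 16, 24 across the three layers, which is exactly the cross-path feature re-use the abstract describes.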

    Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data, and their validity for neonatal brain extraction, which presents age-specific challenges, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods, providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
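
The intuition behind picking few, well-spread atlases can be sketched with farthest-point sampling; ALFA's actual sparsity-based strategy differs, and the 2-D coordinates below merely stand in for the low-dimensional data space (the function name and starting rule are assumptions).

```python
import numpy as np

def select_atlases(atlas_coords, target_coord, k):
    """Pick k atlases 'uniformly' spread in a low-dimensional space
    (a stand-in for ALFA's sparsity-based selection): start from the
    atlas closest to the target, then greedily add the atlas farthest
    from those already chosen (farthest-point sampling)."""
    X = np.asarray(atlas_coords, dtype=float)
    t = np.asarray(target_coord, dtype=float)
    chosen = [int(np.argmin(((X - t) ** 2).sum(axis=1)))]
    while len(chosen) < k:
        # squared distance from each atlas to the nearest chosen atlas
        d = np.min(((X[:, None] - X[chosen]) ** 2).sum(axis=2), axis=1)
        d[chosen] = -1.0          # never re-pick an atlas
        chosen.append(int(np.argmax(d)))
    return chosen

# four atlases on a line; two cluster near the target, two lie far away
coords = [[0, 0], [0.1, 0], [5, 0], [10, 0]]
picked = select_atlases(coords, target_coord=[0, 0.2], k=2)
```

Note how the second pick is the far atlas rather than the near-duplicate neighbour: spreading the few atlases out is what lets a small library cover the data space.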

    HeMIS: Hetero-Modal Image Segmentation

    We introduce a deep learning image segmentation framework that is extremely robust to missing imaging modalities. Instead of attempting to impute or synthesize missing data, the proposed approach learns, for each modality, an embedding of the input image into a single latent vector space for which arithmetic operations (such as taking the mean) are well defined. Points in that space, which are averaged over the modalities available at inference time, can then be further processed to yield the desired segmentation. As such, any combinatorial subset of available modalities can be provided as input, without having to learn a combinatorial number of imputation models. Evaluated on two neurological MRI datasets (brain tumors and MS lesions), the approach yields state-of-the-art segmentation results when provided with all modalities; moreover, its performance degrades remarkably gracefully when modalities are removed, significantly more so than alternative mean-filling or other synthesis approaches.
    Comment: Accepted as an oral presentation at MICCAI 2016.
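
The core fusion arithmetic is simple enough to sketch directly. Here the per-modality embeddings are just given vectors rather than CNN outputs, and the function name is hypothetical; the point is that the same operation works for any subset of modalities, so nothing needs to be imputed.

```python
import numpy as np

def hetero_modal_fuse(embeddings):
    """Fuse whatever modality embeddings are available by arithmetic
    in a shared latent space: compute their mean (and variance) across
    modalities. Any non-empty subset of modalities yields a valid
    fused vector; a downstream head would consume these moments."""
    E = np.stack([np.asarray(e, dtype=float) for e in embeddings])
    return E.mean(axis=0), E.var(axis=0)

t1 = np.array([1.0, 0.0])        # e.g. a T1-weighted embedding
t2 = np.array([3.0, 2.0])        # e.g. a T2-weighted embedding
mean_all, var_all = hetero_modal_fuse([t1, t2])   # both present
mean_t1, var_t1 = hetero_modal_fuse([t1])         # t2 missing
```

With one modality missing the mean collapses to the surviving embedding and the variance to zero, so the downstream computation never sees an undefined input; this is why performance degrades gracefully rather than failing outright.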

    Keypoint Transfer for Fast Whole-Body Segmentation

    We introduce an approach for image segmentation based on sparse correspondences between keypoints in testing and training images. Keypoints represent automatically identified distinctive image locations, where each keypoint correspondence suggests a transformation between images. We use these correspondences to transfer label maps of entire organs from the training images to the test image. The keypoint transfer algorithm includes three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ segmentations. We report segmentation results for abdominal organs in whole-body CT and MRI, as well as in contrast-enhanced CT and MRI. Our method offers a speed-up of about three orders of magnitude in comparison to common multi-atlas segmentation, while achieving an accuracy that compares favorably. Moreover, keypoint transfer does not require registration to an atlas or a training phase. Finally, the method allows for the segmentation of scans with highly variable fields of view.
    Comment: Accepted for publication at IEEE Transactions on Medical Imaging.
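
Steps (i) and (ii) can be sketched in miniature; this is a heavy simplification (2-D descriptors instead of real keypoint features, one vote per match, and step (iii), transferring whole organ maps along the matches, is omitted), with hypothetical names throughout.

```python
import numpy as np

def keypoint_vote(test_desc, train_desc, train_labels):
    """Toy version of keypoint-transfer steps (i)-(ii):
    (i)  match each test keypoint to its nearest training descriptor;
    (ii) label it by the matched training keypoint's organ label
         (here a single vote per keypoint, rather than full voting)."""
    test_desc = np.asarray(test_desc, dtype=float)
    train_desc = np.asarray(train_desc, dtype=float)
    # pairwise squared distances in descriptor space
    d2 = ((test_desc[:, None] - train_desc[None]) ** 2).sum(axis=2)
    nn = d2.argmin(axis=1)                    # nearest-neighbour match
    return [train_labels[i] for i in nn]

train = [[0, 0], [10, 10]]                    # training keypoint descriptors
labels = ["liver", "kidney"]                  # their organ labels
out = keypoint_vote([[1, 0], [9, 9]], train, labels)
```

Because only sparse keypoints are matched (no dense deformable registration to an atlas), the per-image cost is tiny, which is where the quoted three-orders-of-magnitude speed-up over multi-atlas segmentation comes from.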

    Medical Image Segmentation Based on Multi-Modal Convolutional Neural Network: Study on Image Fusion Schemes

    Image analysis using more than one modality (i.e. multi-modal analysis) has been increasingly applied in the field of biomedical imaging. One of the challenges in performing multi-modal analysis is that there exist multiple schemes for fusing information from different modalities; such schemes are application-dependent and lack a unified framework to guide their design. In this work we first propose a conceptual architecture for image fusion schemes in supervised biomedical image analysis: fusing at the feature level, fusing at the classifier level, and fusing at the decision-making level. Further, motivated by the recent success in applying deep learning to natural image analysis, we implement the three image fusion schemes above based on Convolutional Neural Networks (CNNs) with varied structures, combined into a single framework. The proposed image segmentation framework is capable of analyzing multi-modality images using different fusion schemes simultaneously. The framework is applied to detect the presence of soft tissue sarcoma from the combination of Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Positron Emission Tomography (PET) images. The results show that while all the fusion schemes outperform the single-modality schemes, fusing at the feature level generally achieves the best performance in terms of both accuracy and computational cost, but suffers from decreased robustness in the presence of large errors in any image modality.
    Comment: Zhe Guo and Xiang Li contribute equally to this work.
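
The contrast between two of the three fusion levels can be sketched with stand-in classifiers (a linear map plus sigmoid instead of a CNN; all names and weights below are illustrative, not the paper's models):

```python
import numpy as np

def feature_fusion(f_mri, f_ct, w):
    """Feature-level fusion: concatenate modality features first, then
    apply ONE classifier (a linear map + sigmoid stands in for a CNN).
    A large error in either modality corrupts the shared input."""
    x = np.concatenate([f_mri, f_ct])
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def decision_fusion(p_mri, p_ct):
    """Decision-level fusion: run one classifier PER modality, then
    combine only their output probabilities (here by averaging).
    A bad modality can only shift the result, not corrupt the other."""
    return 0.5 * (p_mri + p_ct)

# feature level: one classifier over the joint 4-D feature vector
p_feat = feature_fusion(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                        w=np.ones(4))
# decision level: average of two per-modality probabilities
p_dec = decision_fusion(0.9, 0.7)
```

This also makes the abstract's robustness caveat concrete: feature-level fusion lets the classifier exploit cross-modality interactions (usually the best accuracy), but every output depends on all raw features, so a large error in one modality propagates; decision-level fusion isolates modalities at the cost of losing those interactions.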