313 research outputs found

    Automated atlas-based segmentation of brain structures in MR images


    Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies

    Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on the expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common to many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting the atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications to the graph configuration of the proposed framework enable the use of partially annotated atlas images, and we investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed aimed at (1) recreating existing segmentation techniques with the proposed framework and (2) demonstrating the potential of employing sparsely annotated atlas data for multi-atlas segmentation.
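
    The abstract casts multi-atlas labelling as an MRF energy minimisation on a graph linking atlases and the target image. The sketch below is only an illustrative toy version of that idea, not the paper's formulation: it builds an intensity-weighted unary term from warped atlas labels on a 2D image and minimises a Potts-smoothed energy with iterated conditional modes; the ICM solver and all parameter values are assumptions.

```python
import numpy as np

def mrf_label_fusion(target, warped_atlases, warped_labels, n_labels,
                     beta=0.5, n_iter=5, sigma=10.0):
    """Toy MRF label fusion: intensity-weighted unary term + Potts smoothness,
    minimised with iterated conditional modes (ICM) on a 2D image."""
    h, w = target.shape
    votes = np.zeros((n_labels, h, w))
    for atlas_img, atlas_lab in zip(warped_atlases, warped_labels):
        # Atlases that locally resemble the target contribute stronger votes.
        weight = np.exp(-((target - atlas_img) ** 2) / (2 * sigma ** 2))
        for lab in range(n_labels):
            votes[lab] += weight * (atlas_lab == lab)
    unary = -np.log(votes / (votes.sum(axis=0, keepdims=True) + 1e-8) + 1e-8)

    labels = unary.argmin(axis=0)          # initialise with the best unary label
    for _ in range(n_iter):                # ICM: greedily lower the total energy
        for y in range(h):
            for x in range(w):
                costs = unary[:, y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # Potts penalty for disagreeing with a neighbour's label.
                        costs += beta * (np.arange(n_labels) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return labels
```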

    Automated Extraction of Biomarkers for Alzheimer's Disease from Brain Magnetic Resonance Images

    In this work, different techniques for the automated extraction of biomarkers for Alzheimer's disease (AD) from brain magnetic resonance imaging (MRI) are proposed. The described work forms part of PredictAD (www.predictad.eu), a joint European research project aiming at the identification of a unified biomarker for AD combining different clinical and imaging measurements. Two different approaches are followed in this thesis towards the extraction of MRI-based biomarkers: (I) the extraction of traditional morphological biomarkers based on neuroanatomical structures and (II) the extraction of data-driven biomarkers applying machine-learning techniques. A novel method for a unified and automated estimation of structural volumes and volume changes is proposed. Furthermore, a new technique that allows the low-dimensional representation of a high-dimensional image population for data analysis and visualization is described. All presented methods are evaluated on images from the Alzheimer's Disease Neuroimaging Initiative (ADNI), providing a large and diverse clinical database. A rigorous evaluation of the power of all identified biomarkers to discriminate between clinical subject groups is presented. In addition, the agreement of automatically derived volumes with reference labels, as well as the power of the proposed method to measure changes in a subject's atrophy rate, is assessed. The proposed methods compare favorably to state-of-the-art techniques in neuroimaging in terms of accuracy, robustness and run-time.
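
    The thesis mentions a technique for representing a high-dimensional image population in a low-dimensional space for analysis and visualization, but the abstract does not name it. The snippet below is a hedged stand-in using PCA from scikit-learn, merely to illustrate what such a population embedding looks like; it is not the method described in the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA

def embed_population(images, n_components=2):
    """images: list of equally sized image arrays; returns an
    (n_subjects, n_components) low-dimensional embedding."""
    X = np.stack([img.ravel() for img in images])   # each subject becomes one feature vector
    return PCA(n_components=n_components).fit_transform(X)

# Toy usage: embed a small synthetic population and inspect the 2D coordinates.
rng = np.random.default_rng(0)
population = [rng.normal(size=(8, 8, 8)) for _ in range(10)]
coords = embed_population(population)
print(coords.shape)   # (10, 2)
```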


    Patch-based segmentation with spatial context for medical image analysis

    Accurate segmentations in medical imaging play a crucial role in many applications, from patient diagnosis to population studies. As the amount of data generated from medical images increases, the ability to perform this task without human intervention becomes ever more desirable. One approach, known broadly as atlas-based segmentation, is to propagate labels from images which have already been manually labelled by clinical experts. Methods using this approach have been shown to be effective in many applications, demonstrating great potential for automatic labelling of large datasets. However, these methods usually require image registration and are dependent on its outcome: any registration errors are propagated to the segmentation process and are likely to have an adverse effect on segmentation accuracy. Recently, patch-based methods have been shown to allow a relaxation of the required image alignment whilst achieving similar results. In general, these methods label each voxel of a target image by comparing the image patch centred on the voxel with neighbouring patches from an atlas library and assigning the most likely label according to the closest matches. The main contributions of this thesis centre on this approach, providing accurate segmentation results whilst minimising the dependency on registration quality. In particular, the thesis proposes a novel kNN patch-based segmentation framework which utilises both intensity and spatial information, and explores the use of spatial context in a diverse range of applications. The proposed methods extend the ability of patch-based segmentation to tolerate registration errors by redefining the "locality" used for patch selection and comparison, whilst also allowing similar-looking patches from different anatomical structures to be differentiated. The methods are evaluated on a wide variety of image datasets, ranging from the brain to the knees, with results that are competitive with state-of-the-art techniques.
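
    As a rough illustration of the kNN patch-based labelling idea described above, the sketch below labels each pixel of a 2D target by comparing its patch with atlas patches inside a local search window and taking a majority vote over the k closest matches, with an optional spatial-distance penalty standing in for the spatial information mentioned in the abstract. Patch and window sizes are arbitrary and this is not the thesis implementation.

```python
import numpy as np

def knn_patch_label(target, atlas_imgs, atlas_labs, patch=1, search=2, k=5, spatial_weight=0.1):
    """Label each pixel of a 2D target image by kNN patch matching against an atlas library.
    Labels are assumed to be non-negative integers."""
    h, w = target.shape
    out = np.zeros((h, w), dtype=int)
    pad = patch + search
    t = np.pad(target, pad, mode='edge')
    a_imgs = [np.pad(a, pad, mode='edge') for a in atlas_imgs]
    a_labs = [np.pad(l, pad, mode='edge') for l in atlas_labs]
    for y in range(h):
        for x in range(w):
            ty, tx = y + pad, x + pad
            tp = t[ty - patch:ty + patch + 1, tx - patch:tx + patch + 1]
            dists, labs = [], []
            for a_img, a_lab in zip(a_imgs, a_labs):
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        ay, ax = ty + dy, tx + dx
                        ap = a_img[ay - patch:ay + patch + 1, ax - patch:ax + patch + 1]
                        ssd = ((tp - ap) ** 2).sum()
                        # Intensity distance plus a mild penalty for spatial displacement.
                        dists.append(ssd + spatial_weight * (dy * dy + dx * dx))
                        labs.append(a_lab[ay, ax])
            nearest = np.argsort(dists)[:k]                                # k most similar patches
            out[y, x] = np.bincount(np.take(labs, nearest)).argmax()       # majority vote
    return out
```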

    Concatenated spatially-localized random forests for hippocampus labeling in adult and infant MR brain images

    Automatic labeling of the hippocampus in brain MR images is in high demand, as it plays an important role in imaging-based brain studies. However, accurate labeling of the hippocampus is still challenging, partially due to the ambiguous intensity boundary between the hippocampus and surrounding anatomies. In this paper, we propose a concatenated set of spatially-localized random forests for multi-atlas-based hippocampus labeling of adult/infant brain MR images. The contribution of our work is two-fold. First, each forest classifier is trained to label just a specific sub-region of the hippocampus, thus enhancing the labeling accuracy. Second, a novel forest selection strategy is proposed, such that each voxel in the test image can automatically select a set of optimal forests and then dynamically fuse their respective outputs to determine the final label. Furthermore, we enhance the spatially-localized random forests with the aid of the auto-context strategy. In this way, our proposed learning framework can gradually refine the tentative labeling result for better performance. Experiments on large datasets of both adult and infant brain MR images show that our method offers satisfactory scalability, segmenting the hippocampus accurately and efficiently.
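
    The following sketch illustrates only the core idea of spatially-localized forests, using scikit-learn: one forest is trained per spatial sub-region (here simply quantile bins of one coordinate) and each test voxel is routed to the forest covering its location. The paper's forest selection, output fusion and auto-context refinement are not reproduced; the partitioning scheme and parameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_local_forests(features, labels, coords, n_regions=4, seed=0):
    """Split training voxels into spatial sub-regions (here: quantile bins of the
    z coordinate) and train a dedicated forest for each sub-region."""
    edges = np.quantile(coords[:, 2], np.linspace(0, 1, n_regions + 1))
    forests = []
    for i in range(n_regions):
        m = (coords[:, 2] >= edges[i]) & (coords[:, 2] <= edges[i + 1])
        f = RandomForestClassifier(n_estimators=50, random_state=seed).fit(features[m], labels[m])
        forests.append((edges[i], edges[i + 1], f))
    return forests

def predict_local_forests(features, coords, forests):
    """Each test voxel queries the forest covering its sub-region. The paper instead
    lets a voxel select and fuse several optimal forests and refines the tentative
    label map with auto-context iterations."""
    preds = np.zeros(len(features), dtype=int)
    for lo, hi, f in forests:
        m = (coords[:, 2] >= lo) & (coords[:, 2] <= hi)
        if m.any():
            preds[m] = f.predict(features[m])
    return preds
```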

    Automated Atlas-Based Segmentation of Brain Structures in MR Images: Application to a Population-Based Imaging Study

    The final type of segmentation method is atlas-based segmentation (sometimes also called label propagation). In this approach, additional knowledge is introduced through an atlas image in which an expert has labeled the brain structures of interest. The atlas is first registered to the target image, and the resulting transformation is then used to deform the atlas labels to the coordinate system of the target image. During registration, the similarity between the warped atlas image and the target image is maximized, while at the same time the deformation is constrained to ensure that the spatial information of the atlas is maintained.
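
    A minimal label-propagation sketch of this idea is shown below using SimpleITK, which is an assumed tool choice (the study's own registration software is not named here); the file names are placeholders and, for brevity, only an affine registration stage is shown rather than the constrained deformable registration described above.

```python
import SimpleITK as sitk

# Target image to segment and an expert-labelled atlas (hypothetical file names).
target = sitk.Cast(sitk.ReadImage("target.nii.gz"), sitk.sitkFloat32)
atlas = sitk.Cast(sitk.ReadImage("atlas.nii.gz"), sitk.sitkFloat32)
atlas_labels = sitk.ReadImage("atlas_labels.nii.gz")

# Register the atlas to the target by maximising an image similarity metric.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(target, atlas, sitk.AffineTransform(3),
                                      sitk.CenteredTransformInitializerFilter.GEOMETRY),
    inPlace=False)
transform = reg.Execute(target, atlas)                      # atlas -> target mapping

# Propagate the atlas labels with the resulting transform; nearest-neighbour
# interpolation keeps the label values categorical.
propagated = sitk.Resample(atlas_labels, target, transform,
                           sitk.sitkNearestNeighbor, 0, atlas_labels.GetPixelID())
sitk.WriteImage(propagated, "target_labels.nii.gz")
```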

    Multiatlas-Based Segmentation Editing With Interaction-Guided Patch Selection and Label Fusion

    We propose a novel multi-atlas-based segmentation method to address the segmentation editing scenario, where an incomplete segmentation is given along with a set of existing reference label images (used as atlases). Unlike previous multi-atlas-based methods, which depend solely on appearance features, we incorporate interaction-guided constraints to find appropriate atlas label patches in the reference label set and derive their weights for label fusion. Specifically, user interactions provided on the erroneous parts are first divided into multiple local combinations. For each combination, the atlas label patches well matched with both the interactions and the previous segmentation are identified. Then, the segmentation is updated through the voxel-wise label fusion of the selected atlas label patches, with their weights derived from the distances of each underlying voxel to the interactions. Since the atlas label patches well matched with different local combinations are used in the fusion step, our method can consider various local shape variations during the segmentation update, even with only limited atlas label images and user interactions. Moreover, since our method does not depend on either image appearance or sophisticated learning steps, it can easily be applied to general editing problems. To demonstrate the generality of our method, we apply it to editing segmentations of the CT prostate, CT brainstem, and MR hippocampus, respectively. Experimental results show that our method outperforms existing editing methods on all three datasets.
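
    The sketch below illustrates only the weighted fusion step in 2D: candidate atlas label patches selected for one local region are averaged with weights that decay with distance from the user interactions. The patch selection and the interaction "combinations" of the paper are not reproduced; the Gaussian weighting and parameters are assumptions.

```python
import numpy as np

def fuse_label_patches(label_patches, patch_centres, interaction_points, sigma=5.0):
    """Weighted voxel-wise fusion of candidate atlas label patches.
    label_patches: (n_patches, ph, pw) binary label patches matched to one local region.
    patch_centres: (n_patches, 2) positions at which each patch was matched.
    interaction_points: (n_points, 2) user corrections; patches matched nearer to an
    interaction receive a larger weight (a simplification of the paper's weighting)."""
    # Distance of each patch centre to its nearest interaction point.
    d = np.linalg.norm(patch_centres[:, None, :] - interaction_points[None, :, :], axis=2).min(axis=1)
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))                    # closer -> larger weight
    fused = np.tensordot(w, label_patches, axes=1) / w.sum()    # per-voxel weighted vote
    return (fused >= 0.5).astype(int)
```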