
    Atlas encoding by randomized forests for efficient label propagation

    We propose a method for multi-atlas label propagation based on encoding the individual atlases by randomized classification forests. Most current approaches perform a non-linear registration between all atlases and the target image, followed by a sophisticated fusion scheme. While these approaches can achieve high accuracy, in general they do so at high computational cost. This negatively affects the scalability to large databases and experimentation. To tackle this issue, we propose to use a small and deep classification forest to encode each atlas individually in reference to an aligned probabilistic atlas, resulting in an Atlas Forest (AF). At test time, each AF yields a probabilistic label estimate, and fusion is done by averaging. Our scheme performs only one registration per target image, achieves good results with a simple fusion scheme, and allows for efficient experimentation. In contrast to standard forest schemes, incorporation of new scans is possible without retraining, and target-specific selection of atlases remains possible. The evaluation on three different databases shows accuracy at the level of the state of the art, at a significantly lower runtime.
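
    As a rough illustration of the per-atlas encoding and fusion-by-averaging steps described above, the sketch below trains one small forest per atlas and averages their probabilistic estimates on target voxels. It uses scikit-learn's RandomForestClassifier on random stand-in features and labels; the registration to a probabilistic atlas and the feature extraction of the actual method are omitted, so this is not the authors' implementation.

```python
# A minimal sketch of per-atlas forest encoding with average fusion
# (illustrative only; the registration, probabilistic-atlas reference,
# and feature extraction of the paper are omitted).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_labels = 4

# Hypothetical per-atlas training data: each "atlas" contributes
# voxel features X_i and manual labels y_i.
atlases = []
for _ in range(5):
    X_i = rng.normal(size=(500, 10))
    y_i = rng.integers(0, n_labels, size=500)
    atlases.append((X_i, y_i))

# Encode each atlas with its own small, deep forest (an "Atlas Forest").
forests = [
    RandomForestClassifier(n_estimators=10, max_depth=20, random_state=0).fit(X, y)
    for X, y in atlases
]

# At test time, fuse by averaging the per-atlas probabilistic estimates.
X_target = rng.normal(size=(100, 10))
avg_proba = np.mean([f.predict_proba(X_target) for f in forests], axis=0)
fused_labels = avg_proba.argmax(axis=1)
```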

    Efficient extraction of semantic information from medical images in large datasets using random forests

    Large datasets of unlabelled medical images are increasingly becoming available; however, only a small subset tends to be manually semantically labelled, as doing so for large datasets is a tedious and extremely time-consuming task. This thesis aims to tackle the problem of efficiently extracting semantic information, in the form of image segmentations and organ localisations, from large datasets of unlabelled medical images. To do so, we investigate the suitability of supervoxels and random classification forests for the task. The first contribution of this thesis is a novel method for efficiently estimating coarse correspondences between pairs of images that can handle difficult cases exhibiting large variations in fields of view. The proposed method adapts the random forest framework, a supervised learning algorithm, to work in an unsupervised manner by automatically generating labels for training via the use of supervoxels. The second contribution of this thesis is a method that extends the first so that it can be applied efficiently to a large dataset of images. The proposed method is efficient and can be used to obtain correspondences between a large number of object-like supervoxels that are representative of organ structures in the images. The method is evaluated for the applications of organ-based image retrieval and weakly-supervised image segmentation using minimal user input. While the method does not match the segmentation accuracy of current fully-supervised state-of-the-art methods for all organs in an abdominal CT dataset, it does provide a promising way to efficiently extract and parse a large dataset of medical images for further processing.
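
    As a sketch of the supervoxel-as-labels idea in the first contribution, the example below over-segments a reference volume with SLIC supervoxels, trains a forest to predict supervoxel IDs from simple per-voxel features, and applies it to a second volume to obtain coarse correspondences. The volumes, features, and parameters are stand-ins rather than those used in the thesis.

```python
# A minimal sketch: supervoxel IDs of a reference volume serve as
# auto-generated labels for a forest, which is then applied to a second
# volume to obtain coarse, supervoxel-level correspondences
# (illustrative; not the thesis implementation).
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
img_a = rng.random((32, 32, 32))                    # stand-in for a 3D volume
img_b = img_a + 0.05 * rng.normal(size=img_a.shape)

# Over-segment the reference volume; each supervoxel ID becomes a label.
sv_a = slic(img_a, n_segments=50, compactness=0.1, channel_axis=None)

def voxel_features(img):
    """Simple per-voxel features: intensity plus normalised coordinates."""
    zz, yy, xx = np.meshgrid(*[np.linspace(0, 1, s) for s in img.shape], indexing="ij")
    return np.stack([img, zz, yy, xx], axis=-1).reshape(-1, 4)

clf = RandomForestClassifier(n_estimators=20, random_state=0)
clf.fit(voxel_features(img_a), sv_a.ravel())

# Each voxel in the second volume is assigned to its most likely reference
# supervoxel, giving a coarse correspondence map.
corr = clf.predict(voxel_features(img_b)).reshape(img_b.shape)
```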

    Concatenated spatially-localized random forests for hippocampus labeling in adult and infant MR brain images

    Automatic labeling of the hippocampus in brain MR images is in high demand, as it plays an important role in imaging-based brain studies. However, accurate labeling of the hippocampus is still challenging, partially due to the ambiguous intensity boundary between the hippocampus and surrounding anatomies. In this paper, we propose a concatenated set of spatially-localized random forests for multi-atlas-based hippocampus labeling of adult/infant brain MR images. The contribution of our work is two-fold. First, each forest classifier is trained to label just a specific sub-region of the hippocampus, thus enhancing the labeling accuracy. Second, a novel forest selection strategy is proposed, such that each voxel in the test image can automatically select a set of optimal forests and then dynamically fuse their respective outputs to determine the final label. Furthermore, we enhance the spatially-localized random forests with the aid of the auto-context strategy. In this way, our proposed learning framework can gradually refine the tentative labeling result for better performance. Experiments on large datasets of both adult and infant brain MR images show that our method scales well, segmenting the hippocampus accurately and efficiently.
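
    The sketch below illustrates the general idea of spatially-localized forests with per-voxel fusion: one forest per hippocampal sub-region, with each test voxel averaging the probabilistic outputs of a selected subset of forests. The selection here is random and the data synthetic; the paper's actual forest-selection strategy and auto-context refinement are not reproduced.

```python
# A minimal sketch of spatially-localized forests with per-voxel fusion:
# one forest per hippocampal sub-region; each test voxel averages the
# probabilistic outputs of a selected subset of forests. The selection
# below is random, standing in for the paper's selection strategy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Hypothetical training data for three sub-regions (e.g. head/body/tail):
# voxel features with binary hippocampus-vs-background labels.
sub_region_forests = []
for r in range(3):
    X_r = rng.normal(loc=r, size=(800, 10))
    y_r = (X_r[:, 0] > r).astype(int)
    sub_region_forests.append(
        RandomForestClassifier(n_estimators=20, random_state=r).fit(X_r, y_r)
    )

# Each test voxel selects a subset of forests and fuses their outputs.
X_test = rng.normal(size=(100, 10))
labels = np.empty(100, dtype=int)
for i, x in enumerate(X_test):
    selected = rng.choice(3, size=2, replace=False)   # stand-in selection
    proba = np.mean(
        [sub_region_forests[s].predict_proba(x[None])[0] for s in selected], axis=0
    )
    labels[i] = proba.argmax()
```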

    Brain segmentation based on multi-atlas guided 3D fully convolutional network ensembles

    In this study, we proposed and validated a multi-atlas guided 3D fully convolutional network (FCN) ensemble model (M-FCN) for segmenting brain regions of interest (ROIs) from structural magnetic resonance images (MRIs). One major limitation of existing state-of-the-art 3D FCN segmentation models is that they often apply image patches of fixed size throughout training and testing, which may miss some complex tissue appearance patterns of different brain ROIs. To address this limitation, we trained a 3D FCN model for each ROI using patches of adaptive size and embedded outputs of the convolutional layers in the deconvolutional layers to further capture the local and global context patterns. In addition, by introducing multi-atlas-based guidance in M-FCN, the segmentation is generated by combining information from both images and labels, making it highly robust. To reduce over-fitting of the FCN model on the training data, we adopted an ensemble strategy in the learning procedure. Evaluation was performed on two brain MRI datasets, aiming respectively at segmenting 14 subcortical and ventricular structures and 54 brain ROIs. The segmentation results of the proposed method were compared with those of a state-of-the-art multi-atlas based segmentation method and an existing 3D FCN segmentation model. Our results suggested that the proposed method achieved superior segmentation performance.
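
    To make the skip-connection and ensembling ideas concrete, the sketch below defines a tiny 3D FCN in PyTorch whose encoder features are concatenated into the decoder, and averages the softmax outputs of several such models. It is a minimal stand-in; the adaptive patch sizes and multi-atlas guidance of M-FCN are not modelled.

```python
# A minimal sketch of a 3D FCN whose encoder features are concatenated
# into the decoder (skip connection), ensembled by softmax averaging
# (illustrative; M-FCN's adaptive patch sizes and multi-atlas guidance
# are not modelled).
import torch
import torch.nn as nn

class Tiny3DFCN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
        self.down = nn.Conv3d(8, 16, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose3d(16, 8, 2, stride=2)
        # The decoder sees its own features plus the encoder features.
        self.dec = nn.Sequential(nn.Conv3d(16, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Conv3d(8, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        d = self.up(torch.relu(self.down(e)))
        d = self.dec(torch.cat([d, e], dim=1))   # embed encoder output in decoder
        return self.head(d)

# Ensemble: average class probabilities from independently initialised models.
models = [Tiny3DFCN() for _ in range(3)]
patch = torch.randn(1, 1, 16, 16, 16)            # hypothetical MR patch
with torch.no_grad():
    probs = torch.stack([m(patch).softmax(dim=1) for m in models]).mean(dim=0)
segmentation = probs.argmax(dim=1)
```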

    LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effects, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, when white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2 and FA) images, and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge, where the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy.
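
    A minimal sketch of the iterative multi-source integration idea follows: per-voxel features from T1, T2 and FA stand-ins are repeatedly augmented with the tissue probability maps estimated by the previous forest. The data below is random and the features trivial, so it only illustrates the loop structure, not the LINKS pipeline.

```python
# A minimal sketch of iterative multi-source integration with a random
# forest: modality features are repeatedly augmented with the tissue
# probability maps estimated in the previous round (illustrative; the
# data is random and the rich 3D context features of LINKS are omitted).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n_voxels, n_classes = 3000, 3                    # GM, WM, CSF

# Hypothetical per-voxel features from three modalities (T1, T2, FA).
t1, t2, fa = (rng.normal(size=(n_voxels, 5)) for _ in range(3))
X_img = np.hstack([t1, t2, fa])
y = rng.integers(0, n_classes, size=n_voxels)    # stand-in manual tissue labels

prob_maps = np.full((n_voxels, n_classes), 1.0 / n_classes)  # uninformative start
for round_ in range(3):
    clf = RandomForestClassifier(n_estimators=30, random_state=round_)
    clf.fit(np.hstack([X_img, prob_maps]), y)
    prob_maps = clf.predict_proba(np.hstack([X_img, prob_maps]))  # refined maps

tissue_labels = prob_maps.argmax(axis=1)
```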

    3D object classification in baggage computed tomography imagery using randomised clustering forests

    We investigate the feasibility of a codebook approach for the automated classification of threats in pre-segmented 3D baggage Computed Tomography (CT) security imagery. We compare the performance of five codebook models, using various combinations of sampling strategies, feature encoding techniques and classifiers, to the current state-of-the-art 3D visual cortex approach [1]. We demonstrate an improvement over the state of the art, in terms of both accuracy and processing time, using a codebook constructed via randomised clustering forests [2], a dense feature sampling strategy and an SVM classifier. Correct classification rates in excess of 98% and false positive rates of less than 1%, in conjunction with a reduction of several orders of magnitude in processing time, make the proposed approach an attractive option for the automated classification of threats in security screening settings.
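
    The sketch below illustrates a forest-based codebook in the spirit of randomised clustering forests: local descriptors are mapped to leaf indices, each volume is encoded as a leaf-occupancy histogram, and a linear SVM classifies the encoding. The descriptors and labels are random stand-ins rather than densely sampled 3D CT features.

```python
# A minimal sketch of a forest-based codebook: descriptors are mapped to
# leaf indices, each volume is encoded as a leaf-occupancy histogram, and
# an SVM classifies the encoding (descriptors and labels are stand-ins).
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_volumes, desc_per_volume, desc_dim = 40, 200, 16

descriptors = rng.normal(size=(n_volumes, desc_per_volume, desc_dim))
volume_labels = rng.integers(0, 2, size=n_volumes)        # threat vs. benign stand-in

# Grow the clustering forest on all descriptors, supervised (weakly) by
# the label of the volume each descriptor came from.
forest = ExtraTreesClassifier(n_estimators=4, max_depth=6, random_state=0)
forest.fit(descriptors.reshape(-1, desc_dim), np.repeat(volume_labels, desc_per_volume))

def encode(vol_desc):
    """Concatenated histogram of leaf indices across all trees."""
    leaves = forest.apply(vol_desc)                        # (n_desc, n_trees)
    hists = [
        np.bincount(leaves[:, t], minlength=est.tree_.node_count)
        for t, est in enumerate(forest.estimators_)
    ]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

X_enc = np.stack([encode(d) for d in descriptors])
clf = SVC(kernel="linear").fit(X_enc, volume_labels)
```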