39 research outputs found

    Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels

    BACKGROUND: Accurate segmentation of brain tumours in magnetic resonance images (MRI) is a difficult task due to the variety of tumour types. Using information and features from multimodal MRI, including structural MRI and the isotropic (p) and anisotropic (q) components derived from diffusion tensor imaging (DTI), may result in a more accurate analysis of brain images. METHODS: We propose a novel 3D supervoxel-based learning method for segmentation of tumours in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using the information across the multimodal MRI dataset. For each supervoxel, a variety of features are extracted, including histograms of a texton descriptor, calculated using a set of Gabor filters with different sizes and orientations, and first-order intensity statistical features. These features are fed into a random forests (RF) classifier to classify each supervoxel as tumour core, oedema or healthy brain tissue. RESULTS: The method is evaluated on two datasets: 1) our clinical dataset of 11 multimodal images of patients and 2) the BRATS 2013 clinical dataset of 30 multimodal images. For our clinical dataset, the average detection sensitivity for tumour (including tumour core and oedema) using multimodal MRI is 86% with a balanced error rate (BER) of 7%, while the Dice score for automatic tumour segmentation against ground truth is 0.84. The corresponding results for the BRATS 2013 dataset are 96%, 2% and 0.89, respectively. CONCLUSION: The method demonstrates promising results in the segmentation of brain tumours. Adding features from multimodal MRI images can substantially increase segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
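    The two evaluation metrics reported above (Dice score and balanced error rate) can be sketched as follows; this is a minimal illustration assuming binary NumPy masks of equal shape, not the authors' code:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def balanced_error_rate(pred, truth):
    """BER: mean of the false-negative and false-positive rates."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    fnr = np.logical_and(~pred, truth).sum() / max(truth.sum(), 1)   # missed tumour
    fpr = np.logical_and(pred, ~truth).sum() / max((~truth).sum(), 1)  # false alarms
    return 0.5 * (fnr + fpr)
```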

    Supervised learning-based multimodal MRI brain image analysis

    Medical imaging plays an important role in clinical procedures related to cancer, such as diagnosis, treatment selection, and therapy response evaluation. Magnetic resonance imaging (MRI) is one of the most popular acquisition modalities, widely used in brain tumour analysis, and can be acquired with different acquisition protocols, e.g. conventional and advanced. Automated segmentation of brain tumours in MR images is a difficult task due to their high variation in size, shape and appearance. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of tumour segmentation is an ongoing field. The aim of this thesis is to develop a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from multimodal MRI images.

    Firstly, the whole brain tumour is segmented from fluid-attenuated inversion recovery (FLAIR) MRI, which is commonly acquired in clinics. The segmentation is achieved using region-wise classification, in which regions are derived from superpixels. Several image features, including intensity-based features, Gabor textons, fractal analysis and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomised trees (ERT) classifier labels each superpixel as tumour or non-tumour. Secondly, the method is extended to 3D supervoxel-based learning for segmentation and classification of tumour tissue subtypes in multimodal MRI brain images. Supervoxels are generated using the information across the multimodal MRI dataset. A random forests (RF) classifier then classifies each supervoxel into tumour core, oedema or healthy brain tissue. Information from the advanced protocol of diffusion tensor imaging (DTI), i.e. the isotropic (p) and anisotropic (q) components, is also incorporated into the conventional MRI to improve segmentation accuracy. Thirdly, to further improve the segmentation of tumour tissue subtypes, machine-learned features from a fully convolutional neural network (FCN) are investigated and combined with hand-designed texton features to encode global information and local dependencies in the feature representation. The score map with pixel-wise predictions, learned from the multimodal MRI training dataset using the FCN, is used as a feature map. The machine-learned features, along with the hand-designed texton features, are then fed to random forests to classify each MRI image voxel into normal brain tissues and different parts of the tumour.

    The methods are evaluated on two datasets: 1) a clinical dataset, and 2) the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2013 and 2017 datasets. The experimental results demonstrate the high detection and segmentation performance of the single-modal (FLAIR) method. The average detection sensitivity, balanced error rate (BER) and Dice overlap measure for the segmented tumour against the ground truth for the clinical data are 89.48%, 6% and 0.91, respectively; for the BRATS dataset, the corresponding results are 88.09%, 6% and 0.88. The corresponding results for the tumour (including tumour core and oedema) with the multimodal MRI method are 86%, 7% and 0.84 for the clinical dataset, and 96%, 2% and 0.89 for the BRATS 2013 dataset. The results of the FCN-based method show that applying the RF classifier to multimodal MRI images, using machine-learned features based on the FCN and hand-designed features based on textons, provides promising segmentations. The Dice overlap measure for automatic brain tumour segmentation against ground truth for the BRATS 2013 dataset is 0.88, 0.80 and 0.73 for complete tumour, core and enhancing tumour, respectively, which is competitive with state-of-the-art methods. The corresponding results for the BRATS 2017 dataset are 0.86, 0.78 and 0.66, respectively.

    The methods demonstrate promising results in the segmentation of brain tumours, providing a close match to expert delineation across all grades of glioma and leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. In the experiments, textons have demonstrated their advantage of providing significant information to distinguish various patterns in both 2D and 3D spaces. The segmentation accuracy is also largely increased by fusing information from multimodal MRI images. Moreover, a unified framework is presented which complementarily integrates hand-designed features with machine-learned features to produce more accurate segmentation. The hand-designed features from a shallow network (with designable filters) encode prior knowledge and context, while the machine-learned features from a deep network (with trainable filters) learn intrinsic features. Combining both global and local information using these two types of networks improves the segmentation accuracy.
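    The fusion of machine-learned (FCN score map) and hand-designed (texton) features described above amounts, at its core, to concatenating per-voxel feature channels before classification. A schematic sketch, with illustrative shapes and names rather than the thesis code:

```python
import numpy as np

def build_voxel_features(fcn_scores, texton_maps):
    """Stack FCN class-score channels (machine-learned) with texton
    response channels (hand-designed) into one feature row per voxel,
    ready for a random-forests classifier.

    fcn_scores:  (D, H, W, C) score map from the FCN
    texton_maps: (D, H, W, K) texton response channels
    returns:     (D*H*W, C+K) feature matrix
    """
    feats = np.concatenate([fcn_scores, texton_maps], axis=-1)
    return feats.reshape(-1, feats.shape[-1])
```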

    The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients (manually annotated by up to four raters) and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvement. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
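    The majority-vote fusion of several algorithms' label maps can be sketched as below. Note the BRATS fusion is hierarchical over tumour sub-regions; this minimal illustration shows only the flat per-voxel vote:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse per-voxel label maps from several algorithms by majority vote.

    label_maps: (A, N) integer array, one row per algorithm's labelling
                of the same N voxels.
    returns:    (N,) array with the label chosen by the most algorithms.
    """
    label_maps = np.asarray(label_maps)
    n_labels = label_maps.max() + 1
    # Count, for each candidate label, how many algorithms chose it at
    # each voxel, then take the label with the most votes.
    counts = np.stack([(label_maps == l).sum(axis=0)
                       for l in range(n_labels)])
    return counts.argmax(axis=0)
```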

    Investigating the impact of supervoxel segmentation for unsupervised abnormal brain asymmetry detection

    Several brain disorders are associated with abnormal brain asymmetries (asymmetric anomalies), and several computer-based methods aim to detect such anomalies automatically. Recent advances in this area use automatic, unsupervised techniques that extract pairs of symmetric supervoxels in the hemispheres, model normal brain asymmetries for each pair from healthy subjects, and treat outliers as anomalies. Yet, there is no deep understanding of the impact of supervoxel segmentation quality on abnormal asymmetry detection, especially for small anomalies, nor of the added value of using a specialized model for each supervoxel pair instead of a single global appearance model. We aim to answer these questions through a detailed evaluation of different scenarios for supervoxel segmentation and classification for detecting abnormal brain asymmetries. Experimental results on 3D MR-T1 brain images of stroke patients confirm the importance of high-quality supervoxels that fit the anomalies, and of using a specific classifier for each supervoxel pair. We then present a refinement of the detection method that reduces the number of false-positive supervoxels, making the method easier to use for visual inspection and analysis of the found anomalies.
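    The per-pair outlier scheme described above, one normality model per supervoxel pair fitted on healthy subjects, can be sketched with a simple z-score model; the asymmetry feature and the Gaussian outlier test are illustrative assumptions, not the paper's actual classifier:

```python
import numpy as np

def fit_pair_models(healthy_feats):
    """Fit one normality model per supervoxel pair.

    healthy_feats: (S, P) one asymmetry feature per healthy subject S
                   and supervoxel pair P.
    returns:       per-pair mean and (regularised) standard deviation.
    """
    return healthy_feats.mean(axis=0), healthy_feats.std(axis=0) + 1e-8

def flag_anomalies(test_feats, mean, std, z_thresh=3.0):
    """Flag pairs whose asymmetry is an outlier under its own model."""
    return np.abs((test_feats - mean) / std) > z_thresh
```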

    Pieces-of-parts for supervoxel segmentation with global context: Application to DCE-MRI tumour delineation

    Rectal tumour segmentation in dynamic contrast-enhanced MRI (DCE-MRI) is a challenging task, and an automated, consistent method would be highly desirable to improve the modelling and prediction of patient outcomes from tissue contrast enhancement characteristics, particularly in routine clinical practice. A framework is developed to automate DCE-MRI tumour segmentation by introducing: perfusion supervoxels, which over-segment and classify DCE-MRI volumes using the dynamic contrast enhancement characteristics; and the pieces-of-parts graphical model, which adds global (anatomic) constraints that further refine the supervoxel components that comprise the tumour. The framework was evaluated on 23 DCE-MRI scans of patients with rectal adenocarcinomas, and achieved a voxelwise area under the receiver operating characteristic curve (AUC) of 0.97 compared to expert delineations. Creating a binary tumour segmentation, 21 of the 23 cases were segmented correctly with a median Dice similarity coefficient (DSC) of 0.63, which is close to the inter-rater variability of this challenging task. A second study is also included to demonstrate the method's generalisability, achieving a DSC of 0.71. The framework achieves promising results for the underexplored area of rectal tumour segmentation in DCE-MRI, and the methods have potential to be applied to other DCE-MRI and supervoxel segmentation problems.
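    The voxelwise AUC reported above can be computed from per-voxel scores via the rank (Mann-Whitney) statistic: the probability that a random tumour voxel scores higher than a random background voxel. A minimal sketch (no tie handling, so it assumes distinct scores):

```python
import numpy as np

def voxelwise_auc(scores, labels):
    """AUC from per-voxel scores and binary ground-truth labels."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    # Rank every voxel by score (1 = lowest).
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    # Mann-Whitney U statistic, normalised to [0, 1].
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```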

    Proceedings of the MICCAI Challenge on Multimodal Brain Tumor Image Segmentation (BRATS) 2013

    Because of their unpredictable appearance and shape, segmenting brain tumors from multi-modal imaging data is one of the most challenging tasks in medical image analysis. Although many different segmentation strategies have been proposed in the literature, it is hard to compare existing methods because the validation datasets used differ widely in terms of input data (structural MR contrasts; perfusion or diffusion data; ...), the type of lesion (primary or secondary tumors; solid or infiltratively growing), and the state of the disease (pre- or post-treatment). In order to gauge the current state of the art in automated brain tumor segmentation and compare different methods, we are organizing a Multimodal Brain Tumor Image Segmentation (BRATS) challenge held in conjunction with the 16th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2013) on September 22nd, 2013 in Nagoya, Japan.

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Automated brain tumour identification using magnetic resonance imaging: a systematic review and meta-analysis

    BACKGROUND: Automated brain tumor identification facilitates diagnosis and treatment planning. We evaluate the performance of traditional machine learning (TML) and deep learning (DL) in brain tumor detection and segmentation using MRI. METHODS: A systematic literature search covering January 2000 to May 8, 2021 was conducted. Study quality was assessed using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Detection meta-analysis was performed using a unified hierarchical model. Segmentation studies were evaluated using a random-effects model. Sensitivity analysis was performed for externally validated studies. RESULTS: Of 224 studies included in the systematic review, 46 segmentation and 38 detection studies were eligible for meta-analysis. In detection, DL achieved a lower false positive rate than TML: 0.018 (95% CI, 0.011 to 0.028) versus 0.048 (0.032 to 0.072) (P < .001). In segmentation, DL had a higher Dice similarity coefficient (DSC), particularly for tumor core (TC): 0.80 (0.77 to 0.83) versus 0.63 (0.56 to 0.71) (P < .001), persisting on sensitivity analysis. Both manual and automated whole tumor (WT) segmentation had “good” (DSC ≥ 0.70) performance. Manual TC segmentation was superior to automated: 0.78 (0.69 to 0.86) versus 0.64 (0.53 to 0.74) (P = .014). Only 30% of studies reported external validation. CONCLUSIONS: The comparable performance of automated and manual WT segmentation supports its integration into clinical practice. However, manual outperformance for sub-compartmental segmentation highlights the need for further development of automated methods in this area. Compared to TML, DL provided superior performance for detection and sub-compartmental segmentation. Improvements in the quality and design of studies, including external validation, are required for the interpretability and generalizability of automated models.
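    The random-effects pooling used for the segmentation studies can be illustrated with the DerSimonian-Laird estimator, a standard choice for such models; the review does not state that this exact estimator was used, so treat this as a generic sketch:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes (e.g. Dice scores) under a
    random-effects model using the DerSimonian-Laird estimator.
    Returns the pooled estimate and the between-study variance tau^2."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                       # fixed-effect weights
    mu_fe = (w * effects).sum() / w.sum()     # fixed-effect pooled mean
    q = (w * (effects - mu_fe) ** 2).sum()    # Cochran's Q heterogeneity
    df = len(effects) - 1
    tau2 = max(0.0, (q - df) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1.0 / (variances + tau2)           # random-effects weights
    return (w_re * effects).sum() / w_re.sum(), tau2
```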

    Advanced deep learning for medical image segmentation: Towards global and data-efficient learning

