90 research outputs found
Automated brain tumour detection and segmentation using superpixel-based extremely randomized trees in FLAIR MRI
PURPOSE: We propose a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from Fluid-Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Imaging (MRI). METHODS: The method is based on a superpixel technique and classification of each superpixel. A number of novel image features including intensity-based, Gabor texton, fractal analysis and curvature features are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomized trees (ERT) classifier is compared with a support vector machine (SVM) to classify each superpixel as tumour or non-tumour. RESULTS: The proposed method is evaluated on two datasets: (1) our own clinical dataset of 19 FLAIR MRI images of patients with gliomas of grade II to IV, and (2) the BRATS 2012 dataset of 30 FLAIR images with 10 low-grade and 20 high-grade gliomas. The experimental results demonstrate the high detection and segmentation performance of the proposed method using the ERT classifier. For our own cohort, the average detection sensitivity, balanced error rate and Dice overlap measure for the segmented tumour against the ground truth are 89.48%, 6% and 0.91, respectively; for the BRATS dataset, the corresponding evaluation results are 88.09%, 6% and 0.88. CONCLUSIONS: This provides a close match to expert delineation across all grades of glioma, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
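The superpixel-classification step described above can be sketched with scikit-learn's ExtraTreesClassifier, the standard implementation of extremely randomized trees. The feature matrix below is a synthetic stand-in for the per-superpixel intensity, Gabor texton, fractal and curvature features; this is a minimal illustration under those assumptions, not the paper's implementation.

```python
# Sketch of classifying superpixels as tumour vs non-tumour with
# extremely randomized trees; features and labels are synthetic.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
n_superpixels, n_features = 200, 12
X = rng.normal(size=(n_superpixels, n_features))   # per-superpixel feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # 1 = tumour, 0 = non-tumour (synthetic)

clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
pred = clf.predict(X)
print(clf.score(X, y))
```

In practice each row would be the feature vector of one superpixel, and a held-out test set (not the training data, as here) would be used for evaluation.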
Automated brain tumour identification using magnetic resonance imaging: a systematic review and meta-analysis
BACKGROUND: Automated brain tumor identification facilitates diagnosis and treatment planning. We evaluate the performance of traditional machine learning (TML) and deep learning (DL) in brain tumor detection and segmentation, using MRI. METHODS: A systematic literature search from January 2000 to May 8, 2021 was conducted. Study quality was assessed using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Detection meta-analysis was performed using a unified hierarchical model. Segmentation studies were evaluated using a random effects model. Sensitivity analysis was performed for externally validated studies. RESULTS: Of 224 studies included in the systematic review, 46 segmentation and 38 detection studies were eligible for meta-analysis. In detection, DL achieved a lower false positive rate compared to TML; 0.018 (95% CI, 0.011 to 0.028) and 0.048 (0.032 to 0.072) (P < .001), respectively. In segmentation, DL had a higher dice similarity coefficient (DSC), particularly for tumor core (TC); 0.80 (0.77 to 0.83) and 0.63 (0.56 to 0.71) (P < .001), persisting on sensitivity analysis. Both manual and automated whole tumor (WT) segmentation had "good" (DSC ≥ 0.70) performance. Manual TC segmentation was superior to automated; 0.78 (0.69 to 0.86) and 0.64 (0.53 to 0.74) (P = .014), respectively. Only 30% of studies reported external validation. CONCLUSIONS: The comparable performance of automated to manual WT segmentation supports its integration into clinical practice. However, manual outperformance for sub-compartmental segmentation highlights the need for further development of automated methods in this area. Compared to TML, DL provided superior performance for detection and sub-compartmental segmentation. Improvements in the quality and design of studies, including external validation, are required for the interpretability and generalizability of automated models.
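The Dice similarity coefficient (DSC), the headline metric throughout these results, has a simple closed form, DSC = 2|A ∩ B| / (|A| + |B|), for binary masks A and B. A minimal numpy sketch:

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.array([[1, 1, 0], [0, 1, 0]])   # automated segmentation
ref  = np.array([[1, 0, 0], [0, 1, 1]])   # reference (expert) segmentation
print(dice(auto, ref))   # 2*2 / (3+3) = 0.666...
```

A DSC of 0.70, the "good" threshold cited above, means the overlap is 70% of the average mask size.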
Supervised learning-based multimodal MRI brain image analysis
Medical imaging plays an important role in clinical procedures related to cancer, such as diagnosis, treatment selection, and therapy response evaluation. Magnetic resonance imaging (MRI) is one of the most popular acquisition modalities and is widely used in brain tumour analysis; it can be acquired with different acquisition protocols, e.g. conventional and advanced. Automated segmentation of brain tumours in MR images is a difficult task due to their high variation in size, shape and appearance. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of tumour segmentation is an ongoing field of research. The aim of this thesis is to develop a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from multimodal MRI images.
In this thesis, firstly, the whole brain tumour is segmented from fluid-attenuated inversion recovery (FLAIR) MRI, which is commonly acquired in clinics. The segmentation is achieved using region-wise classification, in which regions are derived from superpixels. Several image features including intensity-based, Gabor texton, fractal analysis and curvature features are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomised trees (ERT) classifier then labels each superpixel as tumour or non-tumour. Secondly, the method is extended to 3D supervoxel-based learning for segmentation and classification of tumour tissue subtypes in multimodal MRI brain images. Supervoxels are generated using the information across the multimodal MRI data set. A random forests (RF) classifier then classifies each supervoxel as tumour core, oedema or healthy brain tissue. The information from the advanced protocol of diffusion tensor imaging (DTI), i.e. its isotropic (p) and anisotropic (q) components, is also incorporated into the conventional MRI to improve segmentation accuracy. Thirdly, to further improve the segmentation of tumour tissue subtypes, machine-learned features from a fully convolutional neural network (FCN) are investigated and combined with hand-designed texton features to encode global information and local dependencies in the feature representation. The score map with pixel-wise predictions, learned from the multimodal MRI training dataset using the FCN, is used as a feature map. The machine-learned features, along with the hand-designed texton features, are then fed to a random forest to classify each MRI image voxel into normal brain tissues and the different parts of the tumour.
The methods are evaluated on two datasets: (1) a clinical dataset, and (2) the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2013 and 2017 datasets. The experimental results demonstrate the high detection and segmentation performance of the
single-modal (FLAIR) method. The average detection sensitivity, balanced error rate (BER) and Dice overlap measure for the segmented tumour against the ground truth for the clinical data are 89.48%, 6% and 0.91, respectively; for the BRATS dataset, the corresponding evaluation results are 88.09%, 6% and 0.88. The corresponding results for the tumour (including tumour core and oedema) with the multimodal MRI method are 86%, 7% and 0.84 for the clinical dataset, and 96%, 2% and 0.89 for the BRATS 2013 dataset. The results of the FCN-based method show that applying the RF classifier to multimodal MRI images using machine-learned features based on the FCN and hand-designed features based on textons provides promising segmentations. The Dice overlap measure for automatic brain tumour segmentation against the ground truth on the BRATS 2013 dataset is 0.88, 0.80 and 0.73 for complete tumour, core and enhancing tumour, respectively, which is competitive with state-of-the-art methods. The corresponding results for the BRATS 2017 dataset are 0.86, 0.78 and 0.66, respectively.
The methods demonstrate promising results in the segmentation of brain tumours. This provides a close match to expert delineation across all grades of glioma, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. In the experiments, texton features demonstrated their advantage in providing significant information to distinguish various patterns in both 2D and 3D spaces. Segmentation accuracy was also largely increased by fusing information from multimodal MRI images. Moreover, a unified framework is presented that integrates hand-designed features with machine-learned features in a complementary fashion to produce more accurate segmentation. The hand-designed features from a shallow network (with designable filters) encode prior knowledge and context, while the machine-learned features from a deep network (with trainable filters) learn intrinsic features. Combining both global and local information through these two types of networks improves the segmentation accuracy.
A hybrid method for traumatic brain injury lesion segmentation
Traumatic brain injury is a significant cause of disability and loss of life. Physicians use computed tomography (CT) images to observe the trauma and measure its severity for diagnosis and treatment. Because hemorrhage and normal brain tissue overlap in intensity, segmentation methods sometimes produce false results. We propose a novel free-form object model for brain injury CT image segmentation based on superpixel image processing, in which the simple linear iterative clustering (SLIC) method over-segments the CT image while avoiding reduced-intensity boundaries. In the segmented image, hemorrhage regions are marked in red and used to fit the free-form object model; the contour labelled by the red mark is the output of the model. We also propose a hybrid image segmentation approach that combines edge-detection and dilation features. The approach reduces computational cost, and the model achieved 96.68% accuracy. The segmented brain hemorrhage regions are clustered to construct the free-form object model. The study also presents directions for future research in this domain.
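The edge-detection-plus-dilation combination described above can be sketched as follows; the image is a synthetic bright blob rather than a CT slice, and the gradient threshold and dilation iteration count are illustrative assumptions, not the paper's values.

```python
# Sketch of hybrid edge detection + dilation on a synthetic image.
import numpy as np
from scipy import ndimage

# Synthetic image: a bright square "hemorrhage" on a dark background.
img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0

# Edge detection via gradient magnitude (Sobel filters along each axis).
gx = ndimage.sobel(img, axis=0)
gy = ndimage.sobel(img, axis=1)
edges = np.hypot(gx, gy) > 0.5

# Dilation thickens the detected boundary so it closes into a usable contour.
mask = ndimage.binary_dilation(edges, iterations=2)
print(mask.sum() > edges.sum())   # dilation can only grow the edge set
```

On real CT data the threshold would be tuned to the intensity range of hemorrhage against brain tissue.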
Development of computer-based algorithms for unsupervised assessment of radiotherapy contouring
INTRODUCTION: Despite the advances in radiotherapy treatment delivery, target volume
delineation remains one of the greatest sources of error in the radiotherapy delivery process,
which can lead to poor tumour control probability and impact clinical outcome. Contouring
assessments are performed to ensure high quality of target volume definition in clinical trials
but this can be subjective and labour-intensive.
This project addresses the hypothesis that computational segmentation techniques, with a given
prior, can be used to develop an image-based tumour delineation process for contour
assessments. This thesis focuses on the exploration of the segmentation techniques to develop
an automated method for generating reference delineations in the setting of advanced lung
cancer. The novelty of this project is in the use of the initial clinician outline as a prior for
image segmentation.
METHODS: Automated segmentation processes were developed for stage II and III non-small
cell lung cancer using the IDEAL-CRT clinical trial dataset. Marker-controlled watershed
segmentation, two active contour approaches (edge- and region-based) and graph-cut applied
on superpixels were explored. k-nearest neighbour (k-NN) classification of tumour from
normal tissues based on texture features was also investigated.
RESULTS: 63 cases were used for development and training. Segmentation and classification
performance were evaluated on an independent test set of 16 cases. Edge-based active contour
segmentation achieved the highest Dice similarity coefficient of 0.80 ± 0.06, followed by graph-cut
at 0.76 ± 0.06, watershed at 0.72 ± 0.08 and region-based active contour at 0.71 ± 0.07,
with mean computational times of 192 ± 102 sec, 834 ± 438 sec, 21 ± 5 sec and 45 ± 18 sec
per case respectively. Errors in accuracy of irregularly shaped lesions and segmentation
leakages at the mediastinum were observed.
In the distinction of tumour and non-tumour regions, misclassification errors of 14.5% and
15.5% were achieved using 16- and 8-pixel regions of interest (ROIs) respectively. Higher
misclassification errors of 24.7% and 26.9% for 16- and 8-pixel ROIs were obtained in the
analysis of the tumour boundary.
CONCLUSIONS: Conventional image-based segmentation techniques with the application of
priors are useful in automatic segmentation of tumours, although further developments are
required to improve their performance. Texture classification can be useful in distinguishing
tumour from non-tumour tissue, but the segmentation task at the tumour boundary is more
difficult. Future work with deep-learning segmentation approaches needs to be explored. Funded by the National Radiotherapy Trials Quality Assurance (RTTQA) group.
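The k-NN texture-classification step investigated in this thesis can be sketched with scikit-learn's KNeighborsClassifier; the two texture statistics per ROI (mean and variance) and the class separation below are synthetic illustrations, not the IDEAL-CRT data.

```python
# Sketch of k-NN classification of tumour vs non-tumour ROIs
# from simple texture statistics; all values are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Two synthetic texture classes, one (mean, variance) pair per ROI.
tumour     = rng.normal([0.8, 0.3], 0.05, size=(50, 2))
non_tumour = rng.normal([0.4, 0.1], 0.05, size=(50, 2))
X = np.vstack([tumour, non_tumour])
y = np.array([1] * 50 + [0] * 50)   # 1 = tumour, 0 = normal tissue

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.score(X, y))
```

The thesis reports misclassification errors of 14.5 to 26.9%, reflecting that real tumour and normal-tissue textures overlap far more than this toy example.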
Brain Tumor Segmentation from Multi-Spectral MR Image Data Using Random Forest Classifier
The development of brain tumor segmentation techniques based on multi-spectral MR image data has relevant impact on the clinical practice via better diagnosis, radiotherapy planning and follow-up studies. This task is also very challenging due to the great variety of tumor appearances, the presence of several noise effects, and the differences in scanner sensitivity. This paper proposes an automatic procedure trained to distinguish gliomas from normal brain tissues in multi-spectral
MRI data. The procedure is based on a random forest (RF) classifier, which uses 80 computed features besides the four observed ones, including morphological features, gradients, and Gabor wavelet features. The intermediary segmentation outcome provided by the RF is fed to a twofold post-processing step, which regularizes the shape of detected tumors and enhances the segmentation accuracy. The performance of the procedure was evaluated using the 274 records of the BraTS 2015 training data set. The
achieved overall Dice scores of 85–86% represent a highly accurate segmentation.
Brain Tumor Segmentation from Multi-Spectral Magnetic Resonance Image Data Using an Ensemble Learning Approach
The automatic segmentation of medical images represents a research domain of high interest. This paper proposes an automatic procedure for the detection and segmentation
of gliomas from multi-spectral MRI data. The procedure
is based on a machine learning approach: it uses ensembles of binary decision trees trained to distinguish pixels belonging to gliomas from those that represent normal tissues. The classification employs 100 computed features besides the four observed ones, including morphological, gradient and Gabor wavelet features.
The output of the decision ensemble is fed to morphological and structural post-processing, which regularizes the shape of the detected tumors and improves the segmentation quality. The proposed procedure was evaluated using the BraTS 2015 training data, covering both the high-grade (HG) and the low-grade (LG) glioma records. The highest overall Dice scores achieved were 86.5% for HG and 84.6% for LG glioma volumes.
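Both procedures above rely on Gabor wavelet features. A minimal numpy sketch of a single 2D Gabor kernel (a Gaussian-windowed sinusoid) follows; the size, sigma, orientation and wavelength are illustrative, not the papers' settings.

```python
# Sketch of a 2D Gabor kernel; image features would be the responses
# of the image convolved with a bank of such kernels.
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0):
    """Real part of a 2D Gabor filter: a Gaussian-windowed sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

# A feature bank uses several orientations theta (and scales).
k = gabor_kernel(theta=np.pi / 4)
print(k.shape)   # (15, 15)
```

Varying theta, sigma and lam yields filters tuned to different edge orientations and texture scales.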
Pieces-of-parts for supervoxel segmentation with global context: Application to DCE-MRI tumour delineation
Rectal tumour segmentation in dynamic contrast-enhanced MRI (DCE-MRI) is a challenging task, and an automated and consistent method would be highly desirable to improve the modelling and prediction of
patient outcomes from tissue contrast enhancement characteristics, particularly in routine clinical practice. A framework is developed to automate DCE-MRI tumour segmentation, by introducing: perfusion-supervoxels to over-segment and classify DCE-MRI volumes using the dynamic contrast enhancement characteristics; and the pieces-of-parts graphical model, which adds global (anatomic) constraints that
further refine the supervoxel components that comprise the tumour. The framework was evaluated on 23 DCE-MRI scans of patients with rectal adenocarcinomas, and achieved a voxelwise area under the receiver operating characteristic curve (AUC) of 0.97 compared to expert delineations. Creating a binary tumour segmentation, 21 of the 23 cases were segmented correctly with a median Dice similarity coefficient (DSC) of 0.63, which is close to the inter-rater variability of this challenging task. A second study is also included to demonstrate the method's generalisability, achieving a DSC of 0.71. The framework achieves promising results for the underexplored area of rectal tumour segmentation in DCE-MRI, and the methods have the potential to be applied to other DCE-MRI and supervoxel
segmentation problems
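The voxelwise AUC reported above scores continuous tumour probabilities against binary expert labels; a minimal sketch with synthetic voxel data, using scikit-learn's roc_auc_score:

```python
# Sketch of voxelwise AUC evaluation with synthetic labels and scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, size=1000)            # expert delineation per voxel
# Synthetic classifier scores: higher on average for tumour voxels.
scores = labels * 0.6 + rng.uniform(size=1000) * 0.8

print(roc_auc_score(labels, scores))
```

An AUC near 1.0 means the framework ranks almost every tumour voxel above every background voxel, independent of any single threshold.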
Segmentation of pelvic structures from preoperative images for surgical planning and guidance
Prostate cancer is one of the most frequently diagnosed malignancies globally and the second leading cause of cancer-related mortality in males in the developed world. In recent decades, many techniques have been proposed for prostate cancer diagnosis and treatment. With the development of imaging technologies such as CT and MRI, image-guided procedures have become increasingly important as a means to improve clinical outcomes. Analysis of the preoperative images and construction of 3D models prior to treatment would help doctors to better localize and visualize the structures of interest, plan the procedure, diagnose disease and guide the surgery or therapy. This requires efficient and robust medical image analysis and segmentation technologies to be developed.
The thesis mainly focuses on the development of segmentation techniques in pelvic MRI for image-guided robotic-assisted laparoscopic radical prostatectomy and external-beam radiation therapy. A fully automated multi-atlas framework is proposed for bony pelvis segmentation in MRI, using the guidance of an MRI AE-SDM. With the guidance of the AE-SDM, a multi-atlas segmentation algorithm is used to delineate the bony pelvis in a new MRI where no CT is available. The proposed technique outperforms state-of-the-art algorithms for MRI bony pelvis segmentation. With the SDM of the pelvis and its segmented surface, an accurate 3D pelvimetry system is designed and implemented to measure a comprehensive set of pelvic geometric parameters for the examination of the relationship between these parameters and the difficulty of robotic-assisted laparoscopic radical prostatectomy. This system can be used in both a manual and an automated manner, with a user-friendly interface.
A fully automated and robust multi-atlas based segmentation has also been developed to delineate the prostate in diagnostic MR scans, which have large variation in both intensity and shape of prostate. Two image analysis techniques are proposed, including patch-based label fusion with local appearance-specific atlases and multi-atlas propagation via a manifold graph on a database of both labeled and unlabeled images when limited labeled atlases are available. The proposed techniques can achieve more robust and accurate segmentation results than other multi-atlas based methods.
The seminal vesicles are also a structure of interest for therapy planning, particularly for external-beam radiation therapy. As existing methods fail for the very onerous task of segmenting the seminal vesicles, a multi-atlas learning framework via random decision forests with graph cuts refinement has further been proposed to solve this difficult problem. Motivated by the performance of this technique, I further extend the multi-atlas learning to segment the prostate fully automatically using multispectral (T1- and T2-weighted) MR images via hybrid random forest (RF) classifiers and a multi-image graph cuts technique. The proposed method compares favorably to the previously proposed multi-atlas based prostate segmentation.
The work in this thesis covers different techniques for pelvic image segmentation in MRI. These techniques have been continually developed and refined, and their application to different specific problems shows ever more promising results.
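The multi-atlas idea running through this thesis (several registered atlases each propose a label per voxel, and a fusion rule combines them) can be sketched with simple majority voting; the thesis itself uses more sophisticated patch-based and learning-based fusion, and the label array below is synthetic.

```python
# Sketch of multi-atlas label fusion by majority voting.
import numpy as np

# Candidate labels from 5 registered atlases for 6 voxels
# (0 = background, 1 = prostate); values are synthetic.
atlas_labels = np.array([
    [1, 1, 0, 0, 1, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 0, 0, 0],
])

# Majority vote across the atlas axis produces the fused segmentation.
fused = (atlas_labels.sum(axis=0) > atlas_labels.shape[0] / 2).astype(int)
print(fused)   # [1 1 0 0 1 0]
```

Patch-based fusion replaces this uniform vote with weights derived from local appearance similarity between the target image and each atlas.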
Image classification-based brain tumour tissue segmentation
Brain tumour tissue segmentation is essential for clinical decision making. While manual segmentation is time-consuming, tedious and subjective, developing automatic segmentation methods is very challenging. Deep learning with convolutional neural network (CNN) architectures has consistently outperformed previous methods on such challenging tasks. However, the local dependencies of pixel classes cannot be fully reflected in CNN models. In contrast, hand-crafted features such as histogram-based texture features provide robust descriptors of local pixel dependencies. In this paper, a classification-based method for automatic brain tumour tissue segmentation is proposed using combined CNN-based and hand-crafted features. The CIFAR network is modified to extract CNN-based features, and histogram-based texture features are fused in to compensate for the limitations of the CIFAR network. These features, together with the pixel intensities of the original MRI images, are fed to a decision tree that classifies the MRI image voxels into different types of tumour tissue. The method is evaluated on the BraTS 2017 dataset. Experiments show that the proposed method produces promising segmentation results.
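The fused-feature idea described above (hand-crafted histogram features combined with learned features and fed to a decision tree) can be sketched as follows; the patches, the stand-in "learned" features and the class structure are synthetic assumptions, not outputs of the modified CIFAR network.

```python
# Sketch of fusing histogram-based texture features with (stand-in)
# learned features for decision-tree classification; data is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

def histogram_features(patch, bins=8):
    """Normalized intensity histogram of a patch, a simple texture descriptor."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

# Synthetic patches: "tumour" patches are brighter on average.
patches = [rng.uniform(0.5, 1.0, (8, 8)) for _ in range(40)] + \
          [rng.uniform(0.0, 0.5, (8, 8)) for _ in range(40)]
y = np.array([1] * 40 + [0] * 40)

hand = np.array([histogram_features(p) for p in patches])
learned = rng.normal(size=(80, 4))        # stand-in for CNN-derived features
X = np.hstack([hand, learned])            # fused feature representation

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.score(X, y))
```

In the paper, the `learned` columns would come from the modified CIFAR network's feature maps rather than random noise, and evaluation would use held-out BraTS volumes.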