
    Brain tumour grading in different MRI protocols using SVM on statistical features

    This paper presents a feasibility study of brain MRI dataset classification using ROIs that have been segmented either manually or through a superpixel-based method, in conjunction with statistical pattern recognition methods. In our study, 471 ROIs extracted from 21 brain MRI datasets are used to establish which features best distinguish between three grading classes. Thirty-eight statistical measurements were collected from each ROI. Using the leave-one-out method, we found that combining features from the 1st- and 2nd-order statistics achieved high classification accuracy in pair-wise grading comparisons.
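
    A minimal sketch of this type of pipeline, using hypothetical stand-in data (the paper's actual 38 measurements and grade labels are not reproduced here):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in: rows are ROIs, columns are 1st/2nd-order
# statistical measurements (e.g. mean, variance, skewness, GLCM contrast).
rng = np.random.default_rng(0)
X = rng.random((471, 38))        # 471 ROIs, 38 statistical features
y = rng.integers(0, 2, 471)      # one pair-wise grading comparison

# Leave-one-out evaluation of an SVM on the standardised features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.3f}")
```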

    Automated brain tumour detection and segmentation using superpixel-based extremely randomized trees in FLAIR MRI

    PURPOSE: We propose a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from Fluid-Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Imaging (MRI). METHODS: The method is based on a superpixel technique and classification of each superpixel. A number of novel image features, including intensity-based, Gabor textons, fractal analysis and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomized trees (ERT) classifier is compared with a support vector machine (SVM) to classify each superpixel as tumour or non-tumour. RESULTS: The proposed method is evaluated on two datasets: (1) our own clinical dataset: 19 FLAIR MRI images of patients with gliomas of grade II to IV, and (2) the BRATS 2012 dataset: 30 FLAIR images with 10 low-grade and 20 high-grade gliomas. The experimental results demonstrate the high detection and segmentation performance of the proposed method using the ERT classifier. For our own cohort, the average detection sensitivity, balanced error rate and Dice overlap measure for the segmented tumour against the ground truth are 89.48%, 6% and 0.91, respectively, while for the BRATS dataset the corresponding results are 88.09%, 6% and 0.88. CONCLUSIONS: This provides a close match to expert delineation across all grades of glioma, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
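
    A rough sketch of the ERT-versus-SVM comparison on per-superpixel features, with stand-in data in place of the intensity, Gabor texton, fractal and curvature features described above:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical per-superpixel feature matrix (intensity statistics,
# Gabor texton histograms, fractal and curvature measures).
rng = np.random.default_rng(0)
X = rng.random((5000, 60))       # 5000 superpixels, 60 features
y = rng.integers(0, 2, 5000)     # 1 = tumour, 0 = non-tumour

# Compare extremely randomised trees against an RBF-kernel SVM.
for name, clf in [("ERT", ExtraTreesClassifier(n_estimators=200, random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```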

    Machine learning assisted DSC-MRI radiomics as a tool for glioma classification by grade and mutation status

    Background: Combining MRI techniques with machine learning methodology is rapidly gaining attention as a promising method for staging of brain gliomas. This study assesses the diagnostic value of such a framework applied to dynamic susceptibility contrast (DSC)-MRI in classifying treatment-naïve gliomas from a multi-centre patient cohort into WHO grades II-IV and across their isocitrate dehydrogenase (IDH) mutation status. Methods: Three hundred thirty-three patients from 6 tertiary centres, diagnosed histologically and molecularly with primary gliomas (IDH-mutant = 151, IDH-wildtype = 182), were retrospectively identified. Raw DSC-MRI data were post-processed to produce normalised, leakage-corrected relative cerebral blood volume (rCBV) maps. Shape, intensity distribution (histogram) and rotation-invariant Haralick texture features over the tumour mask were extracted. Differences in extracted features across glioma grades and mutation status were tested using the Wilcoxon two-sample test. A random-forest algorithm was employed (2-fold cross-validation, 250 repeats) to predict grade or mutation status from the extracted features. Results: Shape, distribution and texture features showed significant differences across mutation status. WHO grade II-III differentiation was mostly driven by shape features, while texture and intensity features were more relevant for the III-IV separation. More features became significant when differentiating grades further apart from one another. Gliomas were correctly stratified by mutation status in 71% and by grade in 53% of the cases (87% of glioma grades predicted within a distance of 1). Conclusions: Despite large heterogeneity in the multi-centre dataset, machine learning assisted DSC-MRI radiomics holds potential to address the inherent variability and presents a promising approach for non-invasive glioma molecular subtyping and grading.
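
    A simplified sketch of the statistical testing and repeated cross-validation described above, using stand-in rCBV radiomic features; the exact feature definitions, selection threshold and model tuning are assumptions:

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Hypothetical rCBV radiomic features (shape, histogram, Haralick texture).
rng = np.random.default_rng(0)
X = rng.random((333, 100))       # 333 patients, 100 features
y = rng.integers(0, 2, 333)      # 1 = IDH-mutant, 0 = IDH-wildtype

# Wilcoxon two-sample (rank-sum) test per feature across mutation status;
# keep the 20 smallest p-values as a simple stand-in selection rule.
pvals = np.array([ranksums(X[y == 0, j], X[y == 1, j]).pvalue
                  for j in range(X.shape[1])])
keep = np.argsort(pvals)[:20]

# Random forest with 2-fold cross-validation repeated 250 times.
cv = RepeatedStratifiedKFold(n_splits=2, n_repeats=250, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
print("mean accuracy:", cross_val_score(rf, X[:, keep], y, cv=cv).mean())
```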

    Prediction of malignant glioma grades using contrast-enhanced T1-weighted and T2-weighted magnetic resonance images based on a radiomic analysis

    We conducted a feasibility study to predict malignant glioma grades via radiomic analysis using contrast-enhanced T1-weighted magnetic resonance images (CE-T1WIs) and T2-weighted magnetic resonance images (T2WIs). We proposed a framework and applied it to CE-T1WIs and T2WIs (with tumor region data) acquired preoperatively from 157 patients with malignant glioma (grade III: 55, grade IV: 102) as the primary dataset and 67 patients with malignant glioma (grade III: 22, grade IV: 45) as the validation dataset. Radiomic features such as size/shape, intensity, histogram, and texture features were extracted from the tumor regions on the CE-T1WIs and T2WIs. The Wilcoxon–Mann–Whitney (WMW) test and least absolute shrinkage and selection operator logistic regression (LASSO-LR) were employed to select the radiomic features. Various machine learning (ML) algorithms were used to construct prediction models for the malignant glioma grades using the selected radiomic features. Leave-one-out cross-validation (LOOCV) was implemented to evaluate the performance of the prediction models in the primary dataset. The radiomic features selected across all folds of the LOOCV in the primary dataset were used to perform an independent validation. As evaluation indices, accuracies, sensitivities, specificities, and values for the area under the receiver operating characteristic curve (AUC) were calculated for all prediction models. The mean AUC value for all prediction models constructed by the ML algorithms in the LOOCV of the primary dataset was 0.902 ± 0.024 (95% CI (confidence interval), 0.873–0.932). In the independent validation, the mean AUC value for all prediction models was 0.747 ± 0.034 (95% CI, 0.705–0.790). The results of this study suggest that malignant glioma grades could be sufficiently and easily predicted by preparing the CE-T1WIs, T2WIs, and tumor delineations for each patient. Our proposed framework may be an effective tool for preoperatively grading malignant gliomas.
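
    A condensed sketch of the LASSO-LR feature selection plus LOOCV evaluation, on stand-in data. For brevity, selection here is done once on all data, which leaks information; the paper performs selection within its validation protocol:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut
from sklearn.preprocessing import StandardScaler

# Hypothetical radiomic features from CE-T1WI and T2WI tumor regions.
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.random((157, 200)))
y = rng.integers(0, 2, 157)      # 1 = grade IV, 0 = grade III

# L1-penalised (LASSO) logistic regression keeps a sparse feature subset.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
selected = np.flatnonzero(lasso.coef_.ravel())
if selected.size == 0:           # fallback if everything shrank to zero
    selected = np.arange(X.shape[1])

# LOOCV: one held-out probability per patient, then a single AUC over all.
probs = np.empty(len(y))
for train, test in LeaveOneOut().split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train][:, selected], y[train])
    probs[test] = model.predict_proba(X[test][:, selected])[:, 1]
print(f"{selected.size} features, LOOCV AUC = {roc_auc_score(y, probs):.3f}")
```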

    Quantitative analysis with machine learning models for multi-parametric brain imaging data

    Gliomas are considered to be the most common primary adult malignant brain tumor. With the dramatic increases in computational power and improvements in image analysis algorithms, computer-aided medical image analysis has been introduced into clinical applications. Precision tumor grading and genotyping play an indispensable role in clinical diagnosis, treatment and prognosis. Glioma diagnostic procedures include histopathological imaging tests, molecular imaging scans and tumor grading. Pathologic review of tumor morphology in histologic sections is the traditional method for cancer classification and grading, yet manual review has limitations that can result in low reproducibility and poor inter-observer agreement. Compared with histopathological images, magnetic resonance (MR) imaging presents different structural and functional features, which might serve as noninvasive surrogates for tumor genotypes. Computer-aided image analysis has therefore been adopted in clinical applications, where it might partially overcome these shortcomings owing to its capacity to quantitatively and reproducibly measure multilevel features from multi-parametric medical information. Imaging features obtained from a single modality do not fully represent the disease, so quantitative imaging features, including morphological, structural, cellular and molecular level features derived from multi-modality medical images, should be integrated into computer-aided medical image analysis. The image quality difference between multi-modality images is a further challenge in this field. In this thesis, we aim to integrate quantitative imaging data obtained from multiple modalities into mathematical models of tumor prediction response to achieve additional insights into practical predictive value. Our major contributions in this thesis are as follows.

    1. To resolve the image quality difference and observer dependence in histological image diagnosis, we propose an automated machine-learning brain tumor-grading platform that investigates the contributions of multiple parameters from multimodal data, including imaging parameters or features from Whole Slide Images (WSI) and the proliferation marker KI-67. For each WSI, we extract both visual parameters, such as morphology parameters, and sub-visual parameters, including first-order and second-order features. A quantitative, interpretable machine learning approach (Local Interpretable Model-Agnostic Explanations) is then used to measure the contribution of features for each case. Most grading systems based on machine learning models are considered "black boxes", whereas with this system the clinically trusted reasoning can be revealed. The quantitative analysis and explanation may assist clinicians to better understand the disease and accordingly choose optimal treatments for improving clinical outcomes.

    2. Building on the automated brain tumor-grading platform, multimodal Magnetic Resonance Images (MRIs) are introduced into our research. A new imaging-tissue correlation based approach called RA-PA-Thomics is proposed to predict the IDH genotype. Inspired by the concept of image fusion, we integrate multimodal MRIs and scans of histopathological images for indirect, fast and cost-saving IDH genotyping. The proposed model has been verified by multiple evaluation criteria on the integrated dataset and compared to results in the prior art. The experimental dataset includes public datasets and image information from two hospitals. Experimental results indicate that the model improves the accuracy of glioma grading and genotyping.
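
    As a rough illustration of the per-case explanation step in contribution 1, the sketch below applies LIME (assuming the lime package) to a classifier fitted on hypothetical WSI-derived features; all names and shapes are stand-ins, not the thesis data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical WSI features: morphology plus first/second-order statistics,
# with the KI-67 proliferation index appended as an extra column.
rng = np.random.default_rng(0)
feature_names = [f"wsi_feat_{i}" for i in range(30)] + ["KI67"]
X = rng.random((200, 31))
y = rng.integers(0, 3, 200)      # grades II, III, IV

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# LIME explains one case at a time: which features pushed its predicted grade.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["II", "III", "IV"],
                                 mode="classification")
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=5)
print(exp.as_list())             # (feature condition, weight) pairs
```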

    Assessment of artificial intelligence (AI) reporting methodology in glioma MRI studies using the Checklist for AI in Medical Imaging (CLAIM)

    Purpose: The Checklist for Artificial Intelligence in Medical Imaging (CLAIM) is a recently released guideline designed for the optimal reporting methodology of artificial intelligence (AI) studies. Gliomas are the most common form of primary malignant brain tumour, and numerous outcomes derived from AI algorithms, such as grading, survival, treatment-related effects and molecular status, have been reported. The aim of this study is to evaluate the AI reporting methodology for outcomes relating to gliomas in magnetic resonance imaging (MRI) using the CLAIM criteria. Methods: A literature search was performed on three databases for studies pertaining to AI augmentation of glioma MRI, published between the start of 2018 and the end of 2021. Results: A total of 4308 articles were identified and 138 articles remained after screening. These articles were categorised into four main AI tasks: grading (n = 44), predicting molecular status (n = 50), predicting survival (n = 25) and distinguishing true tumour progression from treatment-related effects (n = 10). The average CLAIM score was 20/42 (range: 10–31). Studies most consistently reported the scientific background and clinical role of their AI approach. Areas for improvement were identified in the reporting of data collection, data management, ground truth and validation of AI performance. Conclusion: AI may be a means of producing high-accuracy results for certain tasks in glioma MRI; however, there remain issues with reporting quality. AI reporting guidelines may aid in a more reproducible and standardised approach to reporting and will aid clinical integration.
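
    The scoring itself reduces to a simple per-category tally; a toy sketch with invented scores (not the study's data):

```python
import pandas as pd

# Invented example rows: one CLAIM score (out of 42 items) per article,
# grouped by the article's AI task category.
df = pd.DataFrame({
    "task":  ["grading", "molecular", "survival", "progression"] * 3,
    "score": [20, 25, 18, 22, 15, 30, 21, 19, 24, 27, 16, 23],
})
print(df.groupby("task")["score"].agg(["count", "mean", "min", "max"]))
```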

    CT Radiomics in Colorectal Cancer: Detection of KRAS Mutation Using Texture Analysis and Machine Learning

    In this work, descriptive techniques were used to extract texture characteristics from CT (computed tomography) images of patients with colorectal cancer, which were subsequently classified as KRAS+ or KRAS-. This was accomplished using different classifiers, such as Support Vector Machine (SVM), Gradient Boosting Machine (GBM), Neural Networks (NNET) and Random Forest (RF). Texture analysis can provide a quantitative assessment of tumour heterogeneity by analysing both the distribution of and relationship between the pixels in the image. The objective of this research is to demonstrate that CT-based radiomics can predict the presence of KRAS mutation in colorectal cancer. This is a retrospective study of 47 patients from the University Hospital, each with a confirmatory pathological analysis of KRAS mutation. The highest accuracy and kappa achieved were 83% and 64.7%, respectively, with a sensitivity of 88.9% and a specificity of 75.0%, achieved by the NNET classifier using texture feature vectors combining the wavelet transform and Haralick coefficients. Being able to identify the genetic expression of a tumour without having to perform either a biopsy or a genetic test is a great advantage, because it avoids invasive procedures that involve complications and may introduce sampling bias, and it leads towards more personalised and effective treatment. This work has received financial support from the Xunta de Galicia (Centro singular de investigación de Galicia, accreditation 2020–2023) and the European Union (European Regional Development Fund, ERDF), Project MTM2016-76969-PS.
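
    A small sketch of the kind of wavelet-plus-Haralick texture vector described above, assuming scikit-image and PyWavelets are available; the study's exact feature set and preprocessing are not specified here:

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def texture_features(roi):
    """Wavelet + Haralick-style descriptors for one 2-D uint8 CT tumour ROI."""
    # Haralick coefficients from the grey-level co-occurrence matrix.
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    haralick = [graycoprops(glcm, p).mean()
                for p in ("contrast", "homogeneity", "energy", "correlation")]
    # Single-level 2-D wavelet decomposition; keep sub-band energies.
    _, (cH, cV, cD) = pywt.dwt2(roi.astype(float), "haar")
    wavelet = [float(np.mean(np.abs(band))) for band in (cH, cV, cD)]
    return np.array(haralick + wavelet)

roi = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print(texture_features(roi))     # feature vector for one stand-in ROI
```

    Vectors of this kind would then be fed to the SVM, GBM, NNET and RF classifiers compared in the study.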

    Supervised learning-based multimodal MRI brain image analysis

    Medical imaging plays an important role in clinical procedures related to cancer, such as diagnosis, treatment selection and therapy response evaluation. Magnetic resonance imaging (MRI) is one of the most popular acquisition modalities; it is widely used in brain tumour analysis and can be acquired with different acquisition protocols, e.g. conventional and advanced. Automated segmentation of brain tumours in MR images is a difficult task due to their high variation in size, shape and appearance. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of tumour segmentation is an ongoing field. The aim of this thesis is to develop a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from multimodal MRI images.

    Firstly, the whole brain tumour is segmented from fluid-attenuated inversion recovery (FLAIR) MRI, which is commonly acquired in clinics. The segmentation is achieved using region-wise classification, in which regions are derived from superpixels. Several image features, including intensity-based, Gabor textons, fractal analysis and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomised trees (ERT) classifier then classifies each superpixel as tumour or non-tumour.

    Secondly, the method is extended to 3D supervoxel-based learning for segmentation and classification of tumour tissue subtypes in multimodal MRI brain images. Supervoxels are generated using the information across the multimodal MRI dataset. A random forests (RF) classifier then classifies each supervoxel as tumour core, oedema or healthy brain tissue. Information from the advanced protocol of diffusion tensor imaging (DTI), i.e. the isotropic (p) and anisotropic (q) components, is also incorporated into the conventional MRI to improve segmentation accuracy.

    Thirdly, to further improve the segmentation of tumour tissue subtypes, machine-learned features from a fully convolutional neural network (FCN) are investigated and combined with hand-designed texton features to encode global information and local dependencies into the feature representation. The score map with pixel-wise predictions, learned from the multimodal MRI training dataset using the FCN, is used as a feature map. The machine-learned features, along with the hand-designed texton features, are then fed to a random forest to classify each MRI image voxel as normal brain tissue or one of the different parts of the tumour.

    The methods are evaluated on two datasets: (1) a clinical dataset, and (2) the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2013 and 2017 datasets. The experimental results demonstrate the high detection and segmentation performance of the single-modality (FLAIR) method. The average detection sensitivity, balanced error rate (BER) and Dice overlap measure for the segmented tumour against the ground truth for the clinical data are 89.48%, 6% and 0.91, respectively, whilst for the BRATS dataset the corresponding results are 88.09%, 6% and 0.88. The corresponding results for the tumour (including tumour core and oedema) in the case of the multimodal MRI method are 86%, 7% and 0.84 for the clinical dataset, and 96%, 2% and 0.89 for the BRATS 2013 dataset. The results of the FCN-based method show that applying the RF classifier to multimodal MRI images using machine-learned features based on the FCN and hand-designed features based on textons provides promising segmentations. The Dice overlap measure for automatic brain tumour segmentation against ground truth for the BRATS 2013 dataset is 0.88, 0.80 and 0.73 for complete tumour, core and enhancing tumour, respectively, which is competitive with state-of-the-art methods. The corresponding results for the BRATS 2017 dataset are 0.86, 0.78 and 0.66, respectively.

    The methods demonstrate promising results in the segmentation of brain tumours, providing a close match to expert delineation across all grades of glioma and leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. In the experiments, textons have demonstrated their advantage of providing significant information to distinguish various patterns in both 2D and 3D spaces. Segmentation accuracy has also been largely increased by fusing information from multimodal MRI images. Moreover, a unified framework is presented which complementarily integrates hand-designed features with machine-learned features to produce more accurate segmentation. The hand-designed features from a shallow network (with designable filters) encode prior knowledge and context, while the machine-learned features from a deep network (with trainable filters) learn intrinsic features. Both global and local information are combined using these two types of networks, which improves segmentation accuracy.
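
    A minimal sketch of the final fusion step and the Dice measure used throughout, with stand-in arrays in place of real FCN score maps and texton histograms:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dice(pred, truth):
    """Dice overlap between two binary segmentation masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

# Hypothetical per-voxel features: FCN class-score channels concatenated
# with hand-designed texton histogram features.
rng = np.random.default_rng(0)
fcn_scores = rng.random((10000, 4))   # machine-learned (FCN) features
textons = rng.random((10000, 16))     # hand-designed texton features
X = np.hstack([fcn_scores, textons])
y = rng.integers(0, 4, 10000)         # 0 = normal, 1-3 = tumour subtypes

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred = rf.predict(X)                  # in practice, predict on held-out voxels
print(f"whole-tumour Dice: {dice(pred > 0, y > 0):.3f}")
```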