Statistical evaluation of manual segmentation of a diffuse low-grade glioma MRI dataset
Software-based manual segmentation is critical to the follow-up of diffuse low-grade glioma patients and to the choice of optimal treatment. However, because manual segmentation is time-consuming, it is difficult to include in the clinical routine. An alternative to circumvent its time cost could be to share the task among different practitioners, provided it can be reproduced. The goal of our work is to assess the reproducibility of manual segmentation of diffuse low-grade gliomas on MRI scans with regard to the practitioners, their experience, and their field of expertise. A panel of 13 experts manually segmented 12 diffuse low-grade glioma clinical MRI datasets using the OsiriX software. A statistical analysis gave promising results, as the practitioner, the medical specialty, and the years of experience appear to have no significant impact on the mean tumor volume.
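The abstract does not name its statistical test; a one-way ANOVA across practitioners is one plausible reading of "the practitioner factor … seems to have no significant impact on the average values of the tumor volume." A minimal sketch under that assumption, with hypothetical tumor volumes (not the study's data):

```python
from scipy.stats import f_oneway

# Hypothetical tumor volumes (cm^3) for the same cases segmented by three
# practitioners; values are illustrative only.
practitioner_a = [10.0, 11.0, 12.0]
practitioner_b = [10.5, 11.5, 11.0]
practitioner_c = [10.0, 12.0, 11.0]

# One-way ANOVA: does the practitioner factor shift the mean volume?
f_stat, p_value = f_oneway(practitioner_a, practitioner_b, practitioner_c)
# A large p-value means no significant practitioner effect is detected.
```

The same test can be repeated with groups defined by medical specialty or by years of experience, which matches the three factors the abstract examines.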
Quantitative analysis with machine learning models for multi-parametric brain imaging data
Gliomas are considered the most common primary malignant brain tumor in adults. With dramatic increases in computational power and improvements in image analysis algorithms, computer-aided medical image analysis has been introduced into clinical applications. Precise tumor grading and genotyping play an indispensable role in clinical diagnosis, treatment, and prognosis. Glioma diagnostic procedures include histopathological imaging tests, molecular imaging scans, and tumor grading. Pathologic review of tumor morphology in histologic sections is the traditional method for cancer classification and grading, yet human review has limitations that can result in low reproducibility and poor inter-observer agreement. Compared with histopathological images, magnetic resonance (MR) imaging presents different structural and functional features, which might serve as noninvasive surrogates for tumor genotypes. Computer-aided image analysis has therefore been adopted in clinical applications, where it can partially overcome these shortcomings thanks to its capacity to measure multilevel features quantitatively and reproducibly from multi-parametric medical information. Imaging features obtained from a single modality do not fully represent the disease, so quantitative imaging features, including morphological, structural, cellular, and molecular-level features derived from multi-modality medical images, should be integrated into computer-aided medical image analysis. The difference in image quality between modalities is a challenge in this field. In this thesis, we aim to integrate quantitative imaging data obtained from multiple modalities into mathematical models of tumor response prediction to gain additional insight into their practical predictive value. Our major contributions are as follows.
First, to address differences in imaging quality and observer dependence in histological image diagnosis, we propose an automated machine-learning brain-tumor-grading platform that investigates the contributions of multiple parameters from multimodal data, including imaging parameters or features from Whole Slide Images (WSI) and the proliferation marker Ki-67. For each WSI, we extract both visual parameters, such as morphology, and sub-visual parameters, including first-order and second-order features. A quantitative, interpretable machine-learning approach (Local Interpretable Model-Agnostic Explanations, LIME) is then used to measure the contribution of each feature for a single case. Most grading systems based on machine-learning models are considered "black boxes," whereas this system can reveal clinically trusted reasoning. The quantitative analysis and explanation may help clinicians better understand the disease and choose optimal treatments to improve clinical outcomes. Second, building on this automated tumor-grading platform, we introduce multimodal Magnetic Resonance Images (MRIs) into our research. A new imaging-tissue-correlation-based approach called RA-PA-Thomics is proposed to predict the IDH genotype. Inspired by the concept of image fusion, we integrate multimodal MRIs with scans of histopathological images for indirect, fast, and cost-saving IDH genotyping. The proposed model is verified by multiple evaluation criteria on the integrated dataset and compared with prior art. The experimental dataset includes public datasets and image data from two hospitals. Experimental results indicate that the proposed model improves the accuracy of glioma grading and genotyping.
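LIME's core idea can be sketched without the lime package itself: perturb the input around one case, query the black-box grader, and fit a proximity-weighted linear surrogate whose coefficients score each feature's contribution. A minimal sketch on a hypothetical feature matrix (a stand-in for the WSI features, not the thesis's actual data or model):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Hypothetical WSI-style feature matrix: rows = cases, columns = features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
# Toy "grade" driven by features 0 and 2; features 1 and 3 are noise.
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

black_box = RandomForestClassifier(random_state=0).fit(X, y)

def lime_weights(model, x, n_samples=1000, kernel_width=1.0):
    """LIME-style local explanation: sample perturbations around x, weight
    them by proximity, and fit a linear surrogate to the model's
    probabilities. Coefficients approximate per-feature contributions."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    p = model.predict_proba(Z)[:, 1]
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / kernel_width**2)  # closer samples count more
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

# Explain a case near the decision boundary; feature 0 should dominate.
coefs = lime_weights(black_box, np.zeros(4))
```

In the thesis's setting the black box would be the trained grading model and the columns would be the visual and sub-visual WSI features; the mechanics are the same.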
Brain Tumor Characterization Using Radiogenomics in Artificial Intelligence Framework
Brain tumor characterization (BTC) is the process of determining the underlying cause of brain tumors and their characteristics through approaches such as tumor segmentation, classification, detection, and risk analysis. Substantial brain tumor characterization includes identifying the molecular signatures of genes whose alteration causes the tumor. The radiomics approach uses radiological images for disease characterization by extracting quantitative radiomics features in an artificial intelligence (AI) environment. When higher-level disease characteristics such as genetic information and mutation status are considered, the combined study of radiomics and genomics falls under the umbrella of "radiogenomics." Furthermore, AI in a radiogenomics environment offers advantages such as personalized treatment and individualized medicine. This study summarizes brain tumor characterization in the emerging fields of radiomics and radiogenomics in an AI environment, with the help of statistical observation and risk-of-bias (RoB) analysis. The PRISMA search approach was used to find 121 relevant studies for the review using IEEE, Google Scholar, PubMed, MDPI, and Scopus. Our findings indicate that both radiomics and radiogenomics have been applied successfully to several oncology applications with numerous advantages. Under the AI paradigm, both conventional and deep radiomics features have contributed to favorable outcomes of the radiogenomics approach to BTC. Finally, the RoB analysis offers a better understanding of the reviewed architectures by exposing the bias involved in them.
Longitudinal Brain Tumor Tracking, Tumor Grading, and Patient Survival Prediction Using MRI
This work aims to develop novel methods for brain tumor classification, longitudinal brain tumor tracking, and patient survival prediction. Consequently, this dissertation proposes three tasks. First, we develop a framework for brain tumor segmentation prediction in longitudinal multimodal magnetic resonance imaging (mMRI) scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with tumor cell density features, in order to obtain tumor segmentation predictions in follow-up scans from a baseline pre-operative timepoint. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). With the advantages of feature fusion and label fusion, we achieve state-of-the-art brain tumor segmentation prediction.
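Joint label fusion weights each candidate segmentation by its local agreement with the others; the simpler majority-vote baseline conveys the underlying idea of combining labels from multiple methods, such as the RF-based and GLISTRboost segmentations mentioned above. A minimal sketch on hypothetical binary masks (JLF itself replaces the uniform vote with locally estimated, agreement-based weights):

```python
import numpy as np

def fuse_labels(masks):
    """Voxel-wise majority vote across candidate binary segmentations.
    (Joint label fusion would instead weight each candidate by its local
    intensity agreement and pairwise error correlations.)"""
    stack = np.stack([np.asarray(m) for m in masks])
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

# Hypothetical labels from three candidate segmentation methods.
fused = fuse_labels([[1, 1, 0, 0], [1, 0, 0, 1], [1, 1, 1, 0]])
```

Each voxel keeps the label that most candidates agree on, so isolated errors in any single method are suppressed.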
Second, we propose a deep neural network (DNN) learning-based method for brain tumor type and subtype grading using phenotypic and genotypic data, following the World Health Organization (WHO) criteria. In addition, the classification method integrates a cellularity feature which is derived from the morphology of a pathology image to improve classification performance. The proposed method achieves state-of-the-art performance for tumor grading following the new CNS tumor grading criteria.
Finally, we investigate brain tumor volume segmentation, tumor subtype classification, and overall patient survival prediction, and propose a new context-aware deep learning method known as the Context-Aware Convolutional Neural Network (CANet). Using the proposed method, we participated in the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) for the brain tumor volume segmentation and overall survival prediction tasks. We also participated in the Radiology-Pathology Challenge 2019 (CPM-RadPath 2019) for brain tumor subtype classification, organized by the Medical Image Computing & Computer Assisted Intervention (MICCAI) Society. The online evaluation results show that the proposed methods offer competitive performance in tumor volume segmentation, promising performance in overall survival prediction, and state-of-the-art performance in tumor subtype classification. Moreover, our result ranked second in the testing phase of CPM-RadPath 2019.
MRI-based classification of IDH mutation and 1p/19q codeletion status of gliomas using a 2.5D hybrid multi-task convolutional neural network
Isocitrate dehydrogenase (IDH) mutation and 1p/19q codeletion status are
important prognostic markers for glioma. Currently, they are determined using
invasive procedures. Our goal was to develop artificial intelligence-based
methods to non-invasively determine these molecular alterations from MRI. For
this purpose, pre-operative MRI scans of 2648 patients with gliomas (grade
II-IV) were collected from Washington University School of Medicine (WUSM; n =
835) and publicly available datasets viz. Brain Tumor Segmentation (BraTS; n =
378), LGG 1p/19q (n = 159), Ivy Glioblastoma Atlas Project (Ivy GAP; n = 41),
The Cancer Genome Atlas (TCGA; n = 461), and the Erasmus Glioma Database (EGD;
n = 774). A 2.5D hybrid convolutional neural network was proposed to
simultaneously localize the tumor and classify its molecular status by
leveraging imaging features from MR scans and prior knowledge features from
clinical records and tumor location. The models were tested on one internal
(TCGA) and two external (WUSM and EGD) test sets. For IDH, the best-performing
model achieved areas under the receiver operating characteristic (AUROC) of
0.925, 0.874, 0.933 and areas under the precision-recall curves (AUPRC) of
0.899, 0.702, 0.853 on the internal, WUSM, and EGD test sets, respectively. For
1p/19q, the best model achieved AUROCs of 0.782, 0.754, and 0.842, and AUPRCs
of 0.588, 0.713, and 0.782 on those three data splits, respectively. The high
accuracy of the model on unseen data showcases its generalization capabilities
and suggests its potential for performing a 'virtual biopsy' to tailor
treatment planning and the overall clinical management of gliomas.
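The reported AUROC and AUPRC values can be computed from a model's predicted probabilities with scikit-learn; a minimal sketch on hypothetical labels and scores (1 = IDH-mutant), not the paper's data:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Hypothetical held-out test cases: true IDH status and predicted probability.
y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.65, 0.35, 0.8, 0.7, 0.9, 0.2, 0.6])

auroc = roc_auc_score(y_true, y_prob)            # area under the ROC curve
auprc = average_precision_score(y_true, y_prob)  # area under the PR curve
```

AUPRC complements AUROC when classes are imbalanced, which is presumably why the paper reports both for each test set.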
Dynamic low-level context for the detection of mild traumatic brain injury.
Mild traumatic brain injury (mTBI) appears as low-contrast lesions in magnetic resonance (MR) imaging. Standard automated detection approaches cannot detect the subtle changes caused by these lesions. The use of context has become integral to the detection of low-contrast objects in images; context is any information that can be used for object detection but is not directly due to the physical appearance of an object in an image. In this paper, new low-level static and dynamic context features are proposed and integrated into a discriminative voxel-level classifier to improve the detection of mTBI lesions. Visual features, including multiple texture measures, are used to give an initial estimate of a lesion. From this initial estimate, novel proximity and directional-distance contextual features are calculated and used as inputs to a second classifier; these features take advantage of the spatial information given by the initial, visual-feature-only lesion estimate. Dynamic context is captured by the proposed posterior marginal edge distance context feature, which measures the distance from a hard estimate of the lesion at a previous time point. The approach is validated on a temporal mTBI rat-model dataset and shown to have improved Dice score and convergence compared with other state-of-the-art approaches. An analysis of feature importance and of the approach's versatility on other datasets is also provided.
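Distance-to-lesion context of this kind can be computed with a Euclidean distance transform; a minimal sketch, assuming a hypothetical binary lesion estimate (a simplification, not the paper's exact posterior marginal edge distance feature):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical hard lesion estimate from a previous time point (True = lesion).
lesion = np.zeros((5, 5), dtype=bool)
lesion[2, 2] = True

# Distance from every voxel to the nearest lesion voxel: run the Euclidean
# distance transform on the background (non-lesion) mask.
dist_to_lesion = distance_transform_edt(~lesion)
# dist_to_lesion can then be appended as a per-voxel context feature for
# the second-stage classifier.
```

Voxels inside the lesion estimate get distance 0, and the value grows smoothly outward, giving the classifier a spatial prior around the initial estimate.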
Integrative Imaging Informatics for Cancer Research: Workflow Automation for Neuro-oncology (I3CR-WANO)
Efforts to utilize growing volumes of clinical imaging data to generate tumor
evaluations continue to require significant manual data wrangling owing to the
data heterogeneity. Here, we propose an artificial intelligence-based solution
for the aggregation and processing of multisequence neuro-oncology MRI data to
extract quantitative tumor measurements. Our end-to-end framework i) classifies
MRI sequences using an ensemble classifier, ii) preprocesses the data in a
reproducible manner, iii) delineates tumor tissue subtypes using convolutional
neural networks, and iv) extracts diverse radiomic features. Moreover, it is
robust to missing sequences and adopts an expert-in-the-loop approach, where
the segmentation results may be manually refined by radiologists. Following the
implementation of the framework in Docker containers, it was applied to two
retrospective glioma datasets collected from the Washington University School
of Medicine (WUSM; n = 384) and the M.D. Anderson Cancer Center (MDA; n = 30)
comprising preoperative MRI scans from patients with pathologically confirmed
gliomas. The scan-type classifier yielded an accuracy of over 99%, correctly
identifying sequences from 380/384 and 30/30 sessions from the WUSM and MDA
datasets, respectively. Segmentation performance was quantified using the Dice
Similarity Coefficient between the predicted and expert-refined tumor masks.
Mean Dice scores were 0.882 (0.244) and 0.977 (0.04) for whole tumor
segmentation for WUSM and MDA, respectively. This streamlined framework
automatically curated, processed, and segmented raw MRI data of patients with
varying grades of gliomas, enabling the curation of large-scale neuro-oncology
datasets and demonstrating a high potential for integration as an assistive
tool in clinical practice.
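The segmentation quality metric reported above is the Dice Similarity Coefficient; a minimal sketch of its computation on hypothetical binary masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice Similarity Coefficient between two binary masks:
    2 * |A & B| / (|A| + |B|)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Hypothetical predicted vs. expert-refined whole-tumor masks.
score = dice([[1, 1, 0], [0, 1, 0]], [[1, 0, 0], [0, 1, 1]])
```

A score of 1 means perfect overlap and 0 means none, which is why mean values of 0.882 and 0.977 indicate strong agreement with the expert-refined masks.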
An Artificial Intelligence Approach to Tumor Volume Delineation
Postponed access: the file will be accessible after 2023-11-14. Master's thesis in radiography/biomedical laboratory science (RABD395MAMD-HELS).