Longitudinal Brain Tumor Tracking, Tumor Grading, and Patient Survival Prediction Using MRI
This work aims to develop novel methods for brain tumor classification, longitudinal brain tumor tracking, and patient survival prediction. To this end, this dissertation addresses three tasks. First, we develop a framework for brain tumor segmentation prediction in longitudinal multimodal magnetic resonance imaging (mMRI) scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with tumor cell density features to obtain tumor segmentation predictions in follow-up scans from a baseline pre-operative timepoint. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). With the advantages of feature fusion and label fusion, we achieve state-of-the-art brain tumor segmentation prediction.
Second, we propose a deep neural network (DNN)-based method for brain tumor type and subtype grading using phenotypic and genotypic data, following the World Health Organization (WHO) criteria. In addition, the classification method integrates a cellularity feature derived from the morphology of a pathology image to improve classification performance. The proposed method achieves state-of-the-art performance for tumor grading following the new CNS tumor grading criteria.
Finally, we investigate brain tumor volume segmentation, tumor subtype classification, and overall patient survival prediction, and propose a new context-aware deep learning method, known as the Context Aware Convolutional Neural Network (CANet). Using the proposed method, we participated in the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) for the brain tumor volume segmentation and overall survival prediction tasks. We also participated in the Radiology-Pathology Challenge 2019 (CPM-RadPath 2019) for brain tumor subtype classification, organized by the Medical Image Computing & Computer Assisted Intervention (MICCAI) Society. The online evaluation results show that the proposed methods offer performance competitive with state-of-the-art methods in tumor volume segmentation, promising performance on overall survival prediction, and state-of-the-art performance on tumor subtype classification. Moreover, our result ranked second in the testing phase of the CPM-RadPath 2019 challenge.
An automatic deep learning-based workflow for glioblastoma survival prediction using pre-operative multimodal MR images
We proposed a fully automatic workflow for glioblastoma (GBM) survival
prediction using deep learning (DL) methods. A total of 285 glioma patients
(210 GBM, 75 low-grade glioma) were included; 163 of the GBM patients had
overall survival (OS) data. Every patient had four pre-operative MR scans and manually drawn
tumor contours. For automatic tumor segmentation, a 3D convolutional neural
network (CNN) was trained and validated using 122 glioma patients. The trained
model was applied to the remaining 163 GBM patients to generate tumor contours.
The handcrafted and DL-based radiomic features were extracted from
auto-contours using explicitly designed algorithms and a pre-trained CNN,
respectively. The 163 GBM patients were randomly split into training (n=122) and
testing (n=41) sets for survival analysis. Cox regression models with
regularization techniques were trained to construct the handcrafted and
DL-based signatures. The prognostic power of the two signatures was evaluated
and compared. The 3D CNN achieved an average Dice coefficient of 0.85 across
163 GBM patients for tumor segmentation. The handcrafted signature achieved a
C-index of 0.64 (95% CI: 0.55-0.73), while the DL-based signature achieved a
C-index of 0.67 (95% CI: 0.57-0.77). Unlike the handcrafted signature, the
DL-based signature successfully stratified testing patients into two
prognostically distinct groups (p-value<0.01, HR=2.80, 95% CI: 1.26-6.24). The
proposed 3D CNN generated accurate GBM tumor contours from four MR images. The
DL-based signature resulted in better GBM survival prediction, in terms of
higher C-index and significant patient stratification, than the handcrafted
signature. The proposed automatic radiomic workflow demonstrated the potential
of improving patient stratification and survival prediction in GBM patients.
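The C-index reported above measures how often a signature's predicted risk ordering agrees with the observed survival ordering, accounting for right-censored patients. A minimal sketch of Harrell's concordance index in pure Python (the variable names are illustrative, not from the paper's code):

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: fraction of comparable patient pairs whose
    predicted risk ordering agrees with their observed survival ordering.
    times: observed survival or censoring times
    events: 1 if death was observed, 0 if the patient was censored
    risk_scores: higher score means predicted higher risk (shorter survival)
    """
    concordant, tied, comparable = 0, 0, 0
    for i, j in combinations(range(len(times)), 2):
        # order the pair so that patient i has the shorter time
        if times[j] < times[i]:
            i, j = j, i
        # a pair is comparable only if the shorter time is an observed event
        if times[i] == times[j] or events[i] == 0:
            continue
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1
        elif risk_scores[i] == risk_scores[j]:
            tied += 1
    return (concordant + 0.5 * tied) / comparable

# Toy example: predicted risk ordering perfectly matches survival ordering
times = [5, 10, 15, 20]
events = [1, 1, 0, 1]
risks = [4.0, 3.0, 2.0, 1.0]
print(concordance_index(times, events, risks))  # 1.0
```

A C-index of 0.5 corresponds to random ordering, so the reported 0.64 vs. 0.67 gap is modest but consistent with the DL-based signature's stronger patient stratification.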
Improving Patch-Based Convolutional Neural Networks for MRI Brain Tumor Segmentation by Leveraging Location Information.
The manual brain tumor annotation process is time-consuming and resource-intensive; an automated, accurate brain tumor segmentation tool is therefore in great demand. In this paper, we introduce a novel method that integrates location information with state-of-the-art patch-based neural networks for brain tumor segmentation. This is motivated by the observation that lesions are not uniformly distributed across brain parcellation regions, so a locality-sensitive segmentation is likely to achieve better accuracy. To this end, we take an existing brain parcellation atlas in the Montreal Neurological Institute (MNI) space and map it to each individual subject's data. This mapped atlas in the subject data space is integrated with structural Magnetic Resonance (MR) imaging data, and patch-based neural networks, including 3D U-Net and DeepMedic, are trained to classify the different brain lesions. Multiple state-of-the-art neural networks are trained and combined with XGBoost fusion in the proposed two-level ensemble method: the first level reduces the uncertainty of the same type of model trained with different seed initializations, and the second level leverages the complementary strengths of different types of neural network models. The proposed location information fusion method improves the segmentation performance of state-of-the-art networks including 3D U-Net and DeepMedic. Our proposed ensemble also achieves better segmentation performance than the state-of-the-art networks in BraTS 2017 and rivals state-of-the-art networks in BraTS 2018. Detailed results are provided on the public multimodal brain tumor segmentation (BraTS) benchmarks.
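The two-level ensemble can be sketched as follows: level one averages the per-voxel softmax probabilities across seed initializations of the same architecture, and level two stacks the family-level probabilities into a per-voxel feature vector for a fusion classifier (XGBoost in the paper; here a trivial callable stands in). All names and shapes below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def two_level_fusion(prob_maps, fuse):
    """prob_maps: dict mapping model-family name ->
       array of shape (n_seeds, n_voxels, n_classes) of softmax outputs.
    Level 1: average over seed initializations within a family.
    Level 2: concatenate family-level probabilities into a per-voxel
             feature vector and hand it to a fusion classifier."""
    level1 = {name: p.mean(axis=0) for name, p in prob_maps.items()}
    # sort keys so the feature layout is deterministic
    features = np.concatenate([level1[k] for k in sorted(level1)], axis=1)
    return fuse(features)

# Toy data: 5 voxels, 4 tissue classes, two model families
rng = np.random.default_rng(0)
maps = {
    "unet3d":    rng.dirichlet(np.ones(4), size=(3, 5)),  # 3 seeds
    "deepmedic": rng.dirichlet(np.ones(4), size=(2, 5)),  # 2 seeds
}
# Stand-in fusion: pick the class with the highest summed probability
labels = two_level_fusion(
    maps, lambda f: f.reshape(5, 2, 4).sum(axis=1).argmax(axis=1)
)
print(labels.shape)  # (5,)
```

In the actual method, the stand-in `fuse` callable would be a trained XGBoost classifier operating on the stacked probability features.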
Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction
Deep learning for regression tasks on medical imaging data has shown
promising results. However, compared to other approaches, its performance is
strongly tied to dataset size. In this study, we evaluate
3D-convolutional neural networks (CNNs) and classical regression methods with
hand-crafted features for survival time regression of patients with high grade
brain tumors. The tested CNNs for regression showed promising but unstable
results. The best performing deep learning approach reached an accuracy of
51.5% on held-out samples of the training set. All tested deep learning
experiments were outperformed by a Support Vector Classifier (SVC) using 30
radiomic features. The investigated features included intensity, shape,
location and deep features. The submitted method to the BraTS 2018 survival
prediction challenge is an ensemble of SVCs, which reached a cross-validated
accuracy of 72.2% on the BraTS 2018 training set, 57.1% on the validation set,
and 42.9% on the testing set. The results suggest that more training data is
necessary for a stable performance of a CNN model for direct regression from
magnetic resonance images, and that non-imaging clinical patient information is
crucial along with imaging information.

Comment: Contribution to The International Multimodal Brain Tumor Segmentation
(BraTS) Challenge 2018, survival prediction task.
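The accuracies reported above refer to the BraTS survival task's three-class formulation, in which predicted survival times are binned into short-, mid-, and long-survivor classes (commonly cited as <10 months, 10-15 months, and >15 months; the exact thresholds are an assumption here, not stated in the abstract). A minimal sketch of this binning and the resulting accuracy metric:

```python
def survival_class(months):
    """Map a survival time in months to a three-class label.
    Thresholds (10 and 15 months) follow the commonly used BraTS
    survival-task binning and are assumptions of this sketch."""
    if months < 10:
        return "short"
    elif months <= 15:
        return "mid"
    return "long"

def accuracy(pred_months, true_months):
    """Classification accuracy after binning both predictions and
    ground truth into survival classes."""
    hits = sum(
        survival_class(p) == survival_class(t)
        for p, t in zip(pred_months, true_months)
    )
    return hits / len(pred_months)

print(accuracy([8, 12, 20, 30], [9, 14, 11, 40]))  # 0.75
```

This is why a regression model and a classifier such as the SVC ensemble can be compared on the same accuracy scale: both are ultimately scored on the binned classes.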
Glioma Diagnosis Aid through CNNs and Fuzzy-C Means for MRI
Glioma is a type of brain tumor that is fatal in many cases, so early diagnosis is an important factor.
Typically, it is detected through MRI and then either a treatment is applied, or it is removed through surgery.
Deep-learning techniques are becoming popular in medical applications and image-based diagnosis.
Convolutional Neural Networks are the preferred architecture for object detection and classification in images.
In this paper, we present a study to evaluate the effectiveness of CNNs as a diagnosis aid for glioma
detection, and the improvement obtained when a clustering method (Fuzzy C-means) is used to preprocess
the input MRI dataset. Results showed an accuracy improvement from 0.77 to 0.81 when using
Fuzzy C-means.

Ministerio de Economía y Competitividad TEC2016-77785-
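Fuzzy C-means assigns each voxel a soft membership degree to every cluster rather than a hard label, alternating between centroid updates and membership updates until convergence. A minimal NumPy sketch of the standard algorithm (parameter names and the toy data are illustrative, not from the paper):

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-means on data x of shape (n_samples, n_features).
    m > 1 is the fuzzifier; returns (centroids, membership matrix).
    u[i, k] is the degree to which sample i belongs to cluster k."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(n_clusters), size=len(x))  # rows sum to 1
    for _ in range(n_iter):
        w = u ** m
        # weighted centroid update
        centroids = (w.T @ x) / w.sum(axis=0)[:, None]
        # distance of every sample to every centroid
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)  # avoid division by zero
        # standard FCM membership update
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centroids, u

# Toy 1-D "intensity" data with two well-separated groups
x = np.array([[0.1], [0.2], [0.15], [0.9], [0.95], [1.0]])
c, u = fuzzy_c_means(x, n_clusters=2)
print(np.round(np.sort(c.ravel()), 2), u.argmax(axis=1))
```

As a preprocessing step, the hard labels (`u.argmax`) or the membership maps themselves can be used to simplify the MRI intensities before they are fed to the CNN.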