
    Context Aware Deep Learning for Brain Tumor Segmentation, Subtype Classification, and Survival Prediction Using Radiology Images

    A brain tumor is an uncontrolled growth of cancerous cells in the brain. Accurate segmentation and classification of tumors are critical for subsequent prognosis and treatment planning. This work proposes context-aware deep learning for brain tumor segmentation, subtype classification, and overall survival prediction using structural multimodal magnetic resonance images (mMRI). We first propose a 3D context-aware deep learning method, which considers the uncertainty of tumor location across the radiology mMRI image sub-regions, to obtain the tumor segmentation. We then apply a regular 3D convolutional neural network (CNN) to the tumor segments to achieve tumor subtype classification. Finally, we perform survival prediction using a hybrid of deep learning and machine learning. To evaluate the performance, we apply the proposed methods to the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) dataset for tumor segmentation and overall survival prediction, and to the dataset of the Computational Precision Medicine Radiology-Pathology (CPM-RadPath) Challenge on Brain Tumor Classification 2019 for tumor classification. We also perform an extensive performance evaluation based on popular evaluation metrics, such as the Dice score coefficient, the Hausdorff distance at the 95th percentile (HD95), classification accuracy, and mean squared error. The results suggest that the proposed method offers robust tumor segmentation and survival prediction. Furthermore, the tumor classification results of this work ranked second in the testing phase of the 2019 CPM-RadPath global challenge.
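    For readers unfamiliar with the evaluation metrics named above, the sketch below (not the authors' code; it assumes binary 3D NumPy masks and, for brevity, measures distances over full masks rather than extracted surfaces) illustrates how the Dice coefficient and an HD95-style distance can be computed between a predicted and a ground-truth segmentation.

```python
# Minimal sketch: Dice coefficient and a simplified HD95-style metric for
# binary 3D segmentation masks. Not the authors' implementation.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_coefficient(pred, gt):
    """Dice = 2|P ∩ G| / (|P| + |G|) for boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hd95_like(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th percentile of symmetric voxel-to-mask distances (HD95-style)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    d_to_gt = distance_transform_edt(~gt, sampling=spacing)    # distance to nearest gt voxel
    d_to_pred = distance_transform_edt(~pred, sampling=spacing)
    dists = np.concatenate([d_to_gt[pred], d_to_pred[gt]])
    return float(np.percentile(dists, 95)) if dists.size else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.random((32, 32, 32)) > 0.7
    pred = rng.random((32, 32, 32)) > 0.7
    print("Dice:", dice_coefficient(pred, gt), "HD95:", hd95_like(pred, gt))
```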

    Longitudinal Brain Tumor Tracking, Tumor Grading, and Patient Survival Prediction Using MRI

    This work aims to develop novel methods for brain tumor classification, longitudinal brain tumor tracking, and patient survival prediction. To that end, this dissertation addresses three tasks. First, we develop a framework for brain tumor segmentation prediction in longitudinal multimodal magnetic resonance imaging (mMRI) scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with tumor cell density features in order to obtain tumor segmentation predictions in follow-up scans from a baseline pre-operative timepoint. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). With the advantages of feature fusion and label fusion, we achieve state-of-the-art brain tumor segmentation prediction. Second, we propose a deep neural network (DNN) learning-based method for brain tumor type and subtype grading using phenotypic and genotypic data, following the World Health Organization (WHO) criteria. In addition, the classification method integrates a cellularity feature, derived from the morphology of a pathology image, to improve classification performance. The proposed method achieves state-of-the-art performance for tumor grading following the new CNS tumor grading criteria. Finally, we investigate brain tumor volume segmentation, tumor subtype classification, and overall patient survival prediction, and propose a new context-aware deep learning method, known as the Context Aware Convolutional Neural Network (CANet). Using the proposed method, we participated in the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) for the brain tumor volume segmentation and overall survival prediction tasks. In addition, we participated in the Radiology-Pathology Challenge 2019 (CPM-RadPath 2019) for brain tumor subtype classification, organized by the Medical Image Computing & Computer Assisted Intervention (MICCAI) Society. The online evaluation results show that the proposed methods offer performance competitive with state-of-the-art methods in tumor volume segmentation, promising performance on overall survival prediction, and state-of-the-art performance on tumor subtype classification. Moreover, our result ranked second in the testing phase of CPM-RadPath 2019.
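    As a rough illustration of the texture-plus-Random-Forest segmentation idea mentioned above (a minimal sketch with toy data, where Gaussian smoothing stands in for the stochastic multi-resolution texture features and the labels are hypothetical; it is not the dissertation's pipeline):

```python
# Minimal sketch: voxel-wise tumor labelling with a Random Forest over simple
# multi-scale features. Toy data and labels are placeholders; Gaussian
# smoothing stands in for the stochastic texture features described above.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def multiscale_features(volume, sigmas=(1.0, 2.0, 4.0)):
    """Stack the raw intensity with smoothed copies at several scales, one row per voxel."""
    feats = [volume] + [gaussian_filter(volume, s) for s in sigmas]
    return np.stack(feats, axis=-1).reshape(-1, len(sigmas) + 1)

rng = np.random.default_rng(0)
train_vol = rng.random((16, 16, 16))                      # stand-in for one mMRI channel
train_lab = (train_vol > 0.8).astype(int).reshape(-1)     # hypothetical voxel labels

rf = RandomForestClassifier(n_estimators=50, random_state=0)
rf.fit(multiscale_features(train_vol), train_lab)

test_vol = rng.random((16, 16, 16))
pred_mask = rf.predict(multiscale_features(test_vol)).reshape(test_vol.shape)
print("Predicted tumor voxels:", int(pred_mask.sum()))
```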

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical picture of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. Doctors use them regularly for cancer diagnosis, although they are not considered efficient enough to achieve the best performance. Moreover, for the benefit of all readers, the basic evaluation criteria are also discussed. These criteria include the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. The previously used methods are considered inefficient, calling for better and smarter methods of cancer diagnosis. Artificial intelligence for cancer diagnosis is therefore gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be used successfully for intelligent image analysis. This study provides the basic framework of how such machine learning works on medical imaging, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the deep learning models successfully applied to different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of state-of-the-art achievements.
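    To give a flavor of the kind of Python snippet referred to above (this one is a minimal sketch rather than an excerpt from the review, assuming TensorFlow/Keras and placeholder input size and class count), a small CNN classifier for 2D medical image patches could look like this:

```python
# Minimal sketch of a small CNN classifier; input shape and number of classes
# are placeholders, not values taken from the review.
from tensorflow.keras import layers, models

def build_cnn(input_shape=(128, 128, 1), n_classes=2):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()
```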

    Current State-of-the-Art of AI Methods Applied to MRI

    Di Noia, C., Grist, J. T., Riemer, F., Lyasheva, M., Fabozzi, M., Castelli, M., Lodi, R., Tonon, C., Rundo, L., & Zaccagna, F. (2022). Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI. Diagnostics, 12(9), 1-16. [2125]. https://doi.org/10.3390/diagnostics12092125
    Given growing clinical needs, in recent years Artificial Intelligence (AI) techniques have increasingly been used to define the best approaches for survival assessment and prediction in patients with brain tumors. Advances in computational resources, and the collection of (mainly) public databases, have promoted this rapid development. This narrative review of the current state-of-the-art aimed to survey current applications of AI in predicting survival in patients with brain tumors, with a focus on Magnetic Resonance Imaging (MRI). An extensive search was performed on PubMed and Google Scholar using a Boolean query based on MeSH terms and restricted to the period between 2012 and 2022. Fifty studies were selected, mainly based on Machine Learning (ML), Deep Learning (DL), radiomics-based methods, and methods that exploit traditional imaging techniques for survival assessment. In addition, we focused on two distinct tasks related to survival assessment: the first is the classification of subjects into survival classes (short- and long-term, or possibly short-, mid-, and long-term) to stratify patients into distinct groups; the second is the quantification, in days or months, of the individual survival interval. Our survey showed excellent state-of-the-art methods for the former, with accuracy up to ∼98%. The latter task appears to be the more challenging, but state-of-the-art techniques showed promising results, albeit with limitations, with a C-index up to ∼0.91. In conclusion, the available computational methods perform differently depending on the specific task, and the choice of the best one is not unequivocal and depends on many aspects. Unequivocally, the use of features derived from quantitative imaging has been shown to be advantageous for AI applications, including survival prediction. This evidence from the literature motivates further research in the field of AI-powered methods for survival prediction in patients with brain tumors, in particular using the wealth of information provided by quantitative MRI techniques.
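    The C-index quoted above for the survival-quantification task can be computed pairwise; the sketch below (not taken from the review, with toy placeholder values) shows a plain implementation of Harrell's concordance index for right-censored data.

```python
# Minimal sketch: Harrell's concordance index (C-index) for right-censored
# survival data. All values below are toy placeholders.
import numpy as np

def concordance_index(times, risk, events):
    """Fraction of comparable pairs whose predicted-risk ordering matches the
    observed survival ordering (higher risk should mean shorter survival)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i had the event and a shorter time than j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5        # ties count as half
    return concordant / comparable if comparable else float("nan")

times = np.array([10, 24, 5, 36, 18])         # survival in months
events = np.array([1, 1, 1, 0, 1])            # 1 = event observed, 0 = censored
risk = np.array([0.9, 0.4, 0.95, 0.1, 0.5])   # model-predicted risk scores
print("C-index:", concordance_index(times, risk, events))
```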

    3D Multimodal Brain Tumor Segmentation and Grading Scheme based on Machine, Deep, and Transfer Learning Approaches

    Glioma is one of the most common tumors of the brain. The detection and grading of glioma at an early stage are critical for increasing the survival rate of patients. Computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems are essential and important tools that provide more accurate and systematic results to speed up clinicians' decision-making. In this paper, we introduce a method that combines variations of machine, deep, and transfer learning approaches for effective brain tumor (i.e., glioma) segmentation and grading on the Multimodal Brain Tumor Segmentation (BraTS) 2020 dataset. We apply the popular and efficient 3D U-Net architecture for the brain tumor segmentation phase. For the tumor grading phase, we utilize 23 different combinations of deep feature sets and machine learning/fine-tuned deep learning CNN models based on Xception, IncResNetv2, and EfficientNet, using 4 different feature sets and 6 learning models. The experimental results demonstrate that the proposed method achieves a 99.5% accuracy rate for slice-based tumor grading on the BraTS 2020 dataset. Moreover, our method shows competitive performance with similar recent works.
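    The transfer-learning side of the grading phase can be pictured with the sketch below (assumptions: TensorFlow/Keras, ImageNet weights, a frozen Xception backbone, and placeholder input size and grade count; it is not the paper's exact configuration):

```python
# Minimal sketch: a pretrained Xception backbone used as a feature extractor
# with a small grading head. Hyperparameters are placeholders, not the paper's.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

def build_grading_model(input_shape=(299, 299, 3), n_grades=2):
    backbone = Xception(include_top=False, weights="imagenet",
                        input_shape=input_shape, pooling="avg")
    backbone.trainable = False   # pure feature extraction; fine-tuning would unfreeze layers
    model = models.Sequential([
        backbone,
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.4),
        layers.Dense(n_grades, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_grading_model()
model.summary()
```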

    Diagnosis and Prognosis of Head and Neck Cancer Patients using Artificial Intelligence

    Cancer is one of the most life-threatening diseases worldwide, and head and neck (H&N) cancer is a prevalent type with hundreds of thousands of new cases recorded each year. Clinicians use medical imaging modalities such as computed tomography and positron emission tomography to detect the presence of a tumor, and they combine that information with clinical data for patient prognosis. This process is challenging and time-consuming. Machine learning and deep learning can automate these tasks to help clinicians, with highly promising results. This work studies two approaches for H&N tumor segmentation: (i) the exploration and comparison of vision transformer (ViT)-based and convolutional neural network-based models; and (ii) a novel 2D perspective on working with 3D data. Furthermore, this work proposes two new architectures for the prognosis task: an ensemble of several models that predicts patient outcomes (and won the prognosis task of the HECKTOR 2021 challenge), and a ViT-based framework that concurrently performs patient outcome prediction and tumor segmentation, outperforming the ensemble model.
    Comment: This is Master's thesis work submitted to MBZUA
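    The ensembling idea for the prognosis task reduces, in its simplest form, to averaging risk scores from independently trained models; the sketch below (hypothetical model names and scores, not the thesis implementation) shows only that aggregation step.

```python
# Minimal sketch: weighted averaging of per-model risk predictions for an
# outcome-prediction ensemble. Model names and scores are hypothetical.
import numpy as np

def ensemble_risk(per_model_scores, weights=None):
    """Average risk scores of shape (n_models, n_patients), optionally weighted."""
    scores = np.asarray(per_model_scores, dtype=float)
    if weights is None:
        weights = np.full(scores.shape[0], 1.0 / scores.shape[0])
    return np.average(scores, axis=0, weights=weights)

cnn_scores = [0.62, 0.15, 0.80, 0.44]   # hypothetical risk scores from a CNN model
vit_scores = [0.58, 0.22, 0.75, 0.40]   # ... from a ViT-based model
cox_scores = [0.65, 0.18, 0.70, 0.51]   # ... from a clinical Cox-style model
print("Ensembled risk:", ensemble_risk([cnn_scores, vit_scores, cox_scores]))
```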