
    Effective Brain Tumor Classification Using Deep Residual Network-Based Transfer Learning

    Brain tumor classification is an essential task in medical image processing that assists doctors in making accurate diagnoses and treatment plans. A Deep Residual Network-based transfer learning approach applied to a fully convolutional Convolutional Neural Network (CNN) is proposed to classify brain tumors in Magnetic Resonance Images (MRI) from the BRATS 2020 dataset. The dataset consists of a variety of pre-operative MRI scans for segmenting brain tumors, namely gliomas, that vary widely in appearance, shape, and histology. The 50-layered residual network (ResNet-50) deeply convolves the multiple categories of tumor images in the classification task using convolution blocks and identity blocks. Limitations such as the limited accuracy and algorithmic complexity of CNN-based ME-Net, and classification issues in YOLOv2 inceptions, are resolved by the proposed model. The trained CNN learns boundary and region tasks and extracts useful contextual information from MRI scans with minimal computational cost. Tumor segmentation and classification are performed in one step using a U-Net architecture, which helps retain the spatial features of the image. Multimodality fusion is implemented to perform classification and regression tasks by integrating dataset information. The Dice scores of the proposed model for Enhanced Tumor (ET), Whole Tumor (WT), and Tumor Core (TC) are 0.88, 0.97, and 0.90 on the BRATS 2020 dataset, and the model also achieved 99.94% accuracy, 98.92% sensitivity, 98.63% specificity, and 99.94% precision.
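    The reported per-region Dice scores measure overlap between predicted and ground-truth segmentation masks. A minimal numpy sketch of the metric (the toy 2D masks are illustrative; BraTS evaluation uses 3D volumes per region):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / denom

# Toy 2D "tumor" masks; BraTS labels would be 3D volumes per region (ET/WT/TC).
pred = np.zeros((8, 8), dtype=int)
truth = np.zeros((8, 8), dtype=int)
pred[2:6, 2:6] = 1     # predicted region: 16 voxels
truth[3:7, 3:7] = 1    # ground truth: 16 voxels, overlapping in a 3x3 patch
print(round(dice_score(pred, truth), 4))  # 2*9 / (16+16) = 0.5625
```

    The same formula, applied separately to the ET, WT, and TC label maps, yields the three scores quoted above.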

    Brain Tumor Classification in MRI Images Using En-CNN

    Brain tumors are among the most common diseases of the central nervous system and are harmful. Early diagnosis is essential for proper patient treatment. Radiologists need an automated system to identify brain tumor images successfully, as the identification process is often a tedious and error-prone task. Furthermore, binary classification of brain tumors as malignant or benign involves multi-sequence MRI (T1, T2, T1CE, and FLAIR), making the radiologist's work quite challenging. Recently, several classification methods based on deep learning have been used to classify brain tumors. Each model's performance is highly dependent on the CNN architecture used, and due to the complexity of existing CNN architectures, hyperparameter tuning becomes a problem in their application. We propose a CNN method called en-CNN to overcome this problem. The method is based on VGG-16 and consists of seven convolutional layers, four ReLU activations, and four max-pooling layers, and is designed to facilitate hyperparameter tuning. We also propose a new approach in which brain tumors are classified directly, without a prior segmentation step. The approach consists of the following stages: preprocessing, image augmentation, and application of the en-CNN method. Classification uses the four MRI sequences T1, T1CE, T2, and FLAIR. On the multi-sequence MRI BraTS 2018 dataset, the proposed method achieves an accuracy of 95.5% for T1, 95.5% for T1CE, 94% for T2, and 97% for FLAIR with mini-batch size 128 and 200 epochs using the ADAM optimizer. This accuracy is 4% higher than previous research on the same dataset.
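    The en-CNN layer counts above (seven convolutional layers, four ReLU activations, four max-pooling stages) can be sanity-checked with a short sketch. The 224x224 input resolution and the exact conv/pool ordering are assumptions on our part, not details from the paper:

```python
# Hedged sketch of a VGG-16-style layer plan matching the counts in the
# abstract: 7 convs, 4 ReLUs, 4 max-pools. Ordering and channel widths
# are illustrative assumptions.
layers = [
    ("conv", 64), ("relu", None), ("pool", 2),
    ("conv", 128), ("relu", None), ("pool", 2),
    ("conv", 256), ("conv", 256), ("relu", None), ("pool", 2),
    ("conv", 512), ("conv", 512), ("conv", 512), ("relu", None), ("pool", 2),
]

def output_size(side: int, plan) -> int:
    """Spatial side length after the plan ('same'-padded convs, 2x2 pools)."""
    for kind, arg in plan:
        if kind == "pool":
            side //= arg  # each pooling stage halves the spatial resolution
    return side

print(output_size(224, layers))  # 224 -> 112 -> 56 -> 28 -> 14
```

    Halving the resolution four times keeps the downstream dense layers small, which is part of what makes the hyperparameter search tractable.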

    Case Studies on X-Ray Imaging, MRI and Nuclear Imaging

    The field of medical imaging is an essential aspect of the medical sciences, involving various forms of radiation to capture images of the internal tissues and organs of the body. These images provide vital information for clinical diagnosis, and in this chapter, we explore the use of X-ray, MRI, and nuclear imaging in detecting severe illnesses. However, manual evaluation and storage of these images can be a challenging and time-consuming process. To address this issue, artificial intelligence (AI)-based techniques, particularly deep learning (DL), have become increasingly popular for systematic feature extraction and classification from imaging modalities, thereby aiding doctors in making rapid and accurate diagnoses. In this review study, we focus on how AI-based approaches, particularly Convolutional Neural Networks (CNNs), can assist in disease detection through medical imaging technology. CNNs are a commonly used approach for image analysis due to their ability to extract features from raw input images, and as such are the primary area of discussion in this study.
    Comment: 14 pages, 3 figures, 4 tables; chapter accepted for the Springer book "Data-driven approaches to medical imaging".
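    The feature extraction that makes CNNs effective on raw images boils down to repeated convolution. A self-contained numpy sketch of a single valid-mode 2D convolution (the edge kernel and toy image are illustrative):

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel applied to a toy "scan" with a bright right half:
image = np.zeros((5, 5))
image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]])   # responds to left-to-right jumps
response = conv2d(image, edge_kernel)
print(response.shape)        # (5, 4)
print(response[:, 2])        # strongest response at the edge column
```

    In a trained CNN the kernels are learned rather than hand-set, and dozens of them run in parallel per layer, but the sliding-window arithmetic is exactly this.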

    A review on a deep learning perspective in brain cancer classification

    A World Health Organization (WHO) Feb 2018 report recently showed that the mortality rate due to brain or central nervous system (CNS) cancer is the highest in the Asian continent. It is of critical importance that cancer be detected early so that many of these lives can be saved. Cancer grading is an important aspect of targeted therapy. As cancer diagnosis is highly invasive, time-consuming and expensive, there is an immediate need to develop non-invasive, cost-effective and efficient tools for brain cancer characterization and grade estimation. Brain scans using magnetic resonance imaging (MRI), computed tomography (CT), and other imaging modalities are fast and safer methods for tumor detection. In this paper, we summarize the pathophysiology of brain cancer, imaging modalities for brain cancer, and automatic computer-assisted methods for brain cancer characterization in the machine and deep learning paradigm. Another objective of this paper is to identify current issues in existing engineering methods and to project a future paradigm. Further, we highlight the relationship between brain cancer and other brain disorders such as stroke, Alzheimer's, Parkinson's, and Wilson's disease, leukoaraiosis, and other neurological disorders in the context of the machine learning and deep learning paradigms.

    Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks

    Brain tumors are a pernicious cancer with one of the lowest five-year survival rates. Neurologists often use magnetic resonance imaging (MRI) to diagnose the type of brain tumor. Automated computer-assisted tools can help them speed up the diagnosis process and reduce the burden on health care systems. Recent advances in deep learning for medical imaging have shown remarkable results, especially in the automatic and instant diagnosis of various cancers. However, a large amount of data (images) is needed to train deep learning models to obtain good results, and large public datasets are rare in medicine. This paper proposes a framework based on unsupervised deep generative neural networks to address this limitation. We combine two generative models in the proposed framework: variational autoencoders (VAEs) and generative adversarial networks (GANs). We swap the encoder–decoder network after initially training it on the training set of available MR images. The output of this swapped network is a noise vector that carries information about the image manifold, and the cascaded generative adversarial network samples its input from this informative noise vector instead of random Gaussian noise. The proposed method helps the GAN avoid mode collapse and generate realistic-looking brain tumor magnetic resonance images. These artificially generated images could alleviate the limitation of small medical datasets to a reasonable extent and help deep learning models perform acceptably. We used ResNet50 as the classifier, and the artificially generated brain tumor images were used to augment the real, available images during classifier training. We compared the classification results with several existing studies and state-of-the-art machine learning models; our proposed methodology achieved noticeably better results. By using brain tumor images generated artificially by our proposed method, the average classification accuracy improved from 72.63% to 96.25%. For the most severe class of brain tumor, glioma, we achieved recall, specificity, precision, and F1-score values of 0.769, 0.837, 0.833, and 0.80, respectively. The proposed generative model framework could be used to generate medical images in any domain, including PET (positron emission tomography) and MRI scans of various parts of the body, and the results show that it could be a useful clinical tool for medical experts.
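    The informative noise vector described above comes from a VAE-style encoder. A minimal numpy sketch of the reparameterization step that produces such a latent vector (the dimensions and names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu: np.ndarray, log_var: np.ndarray) -> np.ndarray:
    """Sample z = mu + sigma * eps; keeps the draw differentiable in mu/sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Toy encoder outputs for a batch of 4 images, 16-dimensional latent space.
mu = np.zeros((4, 16))
log_var = np.zeros((4, 16))           # log_var = 0 means sigma = 1
z = reparameterize(mu, log_var)       # informative noise vector fed to the GAN
print(z.shape)                        # (4, 16)
```

    In the paper's framework, `mu` and `log_var` would come from the trained (and then swapped) encoder–decoder, so `z` reflects the MR image manifold rather than plain Gaussian noise.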

    Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading

    Brain tumors represent one of the most fatal cancers around the world and are very common in children and the elderly. Accurate identification of the type and grade of a tumor in the early stages plays an important role in choosing a precise treatment plan. The Magnetic Resonance Imaging (MRI) protocols of different sequences provide clinicians with important complementary information for identifying tumor regions. However, manual assessment is time-consuming and error-prone due to the large amount of data and the diversity of brain tumor types. Hence, there is an unmet need for automated MRI-based brain tumor diagnosis. We observe that the predictive capability of uni-modality models is limited, that their performance varies widely across modalities, and that commonly used modality fusion methods can introduce noise, resulting in significant performance degradation. To overcome these challenges, we propose a novel cross-modality guidance-aided multi-modal learning method with dual attention for the task of MRI brain tumor grading. To balance the tradeoff between model efficiency and efficacy, we employ ResNet Mix Convolution as the backbone network for feature extraction. In addition, dual attention is applied to capture the semantic interdependencies in the spatial and slice dimensions, respectively. To facilitate information interaction among modalities, we design a cross-modality guidance-aided module in which the primary modality guides the other, secondary modalities during training, which can effectively leverage the complementary information of the different MRI modalities while alleviating the impact of possible noise.
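    Dual attention of the kind described above is typically built from scaled dot-product self-attention applied once over spatial positions and once over slices. A hedged numpy sketch of that core operation (shapes and names are illustrative, not from the paper):

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention over a set of positions."""
    d = q.shape[-1]
    weights = softmax(q @ k.T / np.sqrt(d))   # (n_q, n_k), each row sums to 1
    return weights @ v

rng = np.random.default_rng(1)
n_positions, d = 6, 8                 # e.g. flattened spatial (or slice) positions
feats = rng.standard_normal((n_positions, d))
out = attention(feats, feats, feats)  # self-attention over positions
print(out.shape)                      # (6, 8)
```

    Running this once with positions flattened over height and width, and once with positions indexed by slice, gives the two interdependency maps the dual-attention design targets.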

    Brain Tumor Diagnosis Support System: A decision Fusion Framework

    An important factor in providing effective and efficient therapy for brain tumors is early and accurate detection, which can increase survival rates. Current image-based tumor detection and diagnosis techniques are heavily dependent on interpretation by neuro-specialists and/or radiologists, making the evaluation process time-consuming and prone to human error and subjectivity. Besides, widespread use of MR spectroscopy requires specialized processing and assessment of the data, along with a clear and fast display of the results as images or maps for routine medical interpretation of an exam. Automatic brain tumor detection and classification have the potential to offer greater efficiency and more accurate predictions. However, the performance accuracy of automatic detection and classification techniques tends to depend on the specific image modality and is well known to vary from technique to technique. For this reason, it is prudent to examine the variations in the execution of these methods in order to obtain consistently high levels of accuracy. Designing, implementing, and evaluating categorization software for discerning various brain tumor types on magnetic resonance imaging (MRI) using textural features is the goal of the suggested framework. This thesis introduces a brain tumor detection support system that employs a variety of tumor classifiers. The system is designed as a decision fusion framework that enables these multiple classifiers to analyze medical images, such as those obtained from magnetic resonance imaging (MRI). The fusion procedure is grounded in Dempster-Shafer evidence theory. Numerous experimental scenarios have been implemented to validate the efficiency of the proposed framework. Compared with alternative approaches, the outcomes show that the methodology developed in this thesis demonstrates higher accuracy and higher computational efficiency.
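    The decision fusion step rests on Dempster's rule of combination, which can be sketched directly from its definition. The tumor classes and classifier masses below are illustrative, not taken from the thesis:

```python
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: combine two mass functions over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y          # mass assigned to incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two classifiers expressing belief over tumor classes {G: glioma, M: meningioma}.
G, M = frozenset("G"), frozenset("M")
GM = G | M                                  # ignorance: "either class"
c1 = {G: 0.6, M: 0.1, GM: 0.3}
c2 = {G: 0.5, M: 0.2, GM: 0.3}
fused = combine(c1, c2)
print(round(fused[G], 3))                   # 0.759: fused belief in glioma
```

    Fusing the two classifiers concentrates mass on the class they agree on, which is the behavior the decision fusion framework exploits.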

    Radiomic Features to Predict Overall Survival Time for Patients with Glioblastoma Brain Tumors Based on Machine Learning and Deep Learning Methods

    Machine Learning (ML) methods, including Deep Learning (DL) methods, have been employed in the medical field to improve the diagnosis process and patients' prognosis outcomes. Glioblastoma multiforme is an extremely aggressive glioma brain tumor with a poor survival rate. The behavior of the glioblastoma brain tumor is still not fully understood and some factors remain unrecognized; yet tumor behavior is important for deciding on a proper treatment plan and improving a patient's health. The aim of this dissertation is to develop a Computer-Aided Diagnosis system (CADiag) based on ML/DL methods to automatically estimate the Overall Survival Time (OST) for patients with glioblastoma brain tumors from medical imaging and non-imaging data. This system is developed to enhance and speed up the diagnosis process, as well as to increase understanding of the behavior of glioblastoma brain tumors. The proposed OST prediction system is developed as a classification process that categorizes a GBM patient into one of three survival time groups: short-term (<10 months), mid-term (10-15 months), and long-term (>15 months). The Brain Tumor Segmentation challenge (BraTS) dataset is used to develop the automatic OST prediction system. This dataset consists of multimodal preoperative Magnetic Resonance Imaging (mpMRI) data and clinical data. The training data is relatively small in size for training an accurate OST prediction model based on a DL method. Therefore, traditional ML methods such as Support Vector Machine (SVM), Neural Network, K-Nearest Neighbor (KNN), and Decision Tree (DT) were used to develop the OST prediction model for GBM patients. The main contributions from the perspective of the ML field include developing and evaluating novel radiomic feature extraction methods to produce an automatic and reliable classification-based OST prediction system; these methods use volumetric, shape, location, texture, histogram-based, and DL features.
    Some of these radiomic features can be extracted directly from MRI images, such as statistical texture features and histogram-based features. However, preprocessing methods are required to automatically extract other radiomic features from MRI images, such as the volume, shape, and location information of GBM brain tumors. Therefore, a three-dimensional (3D) segmentation DL model based on a modified U-Net architecture is developed to identify and localize, in multimodal MRI scans, the three glioma brain tumor subregions: peritumoral edematous/invaded tissue (ED), GD-enhancing tumor (ET), and the necrotic tumor core (NCR). The segmentation results are used to calculate the volume, location, and shape information of a GBM tumor. Two novel approaches based on volumetric, shape, and location information are proposed and evaluated in this dissertation. To improve the performance of the OST prediction system, information fusion strategies based on data fusion, feature fusion, and decision fusion are employed. The best prediction model was developed based on feature fusion and ensemble models using NN classifiers. The proposed OST prediction system achieved competitive results in BraTS 2020, with accuracies of 55.2% and 55.1% on the BraTS 2020 validation and test datasets, respectively. In sum, developing automatic CADiag systems based on robust features and ML methods, such as our developed OST prediction system, enhances the diagnosis process in terms of cost, accuracy, and time. Our OST prediction system was evaluated from the perspective of the ML field. In addition, preprocessing steps are essential to improve not only the quality of the features but also the performance of the prediction system. To test the effectiveness of our developed OST system in medical decision-making, we suggest further evaluation from the perspective of biology and medical decisions, so that the system can then be involved in the diagnosis process as a fast, inexpensive and automatic diagnosis method.
    To improve the performance of our developed OST prediction system, we believe it is necessary to increase the size of the training data, involve multi-modal data, and/or supply any uncertain or missing information in the data (such as patients' resection statuses, gender, etc.). A DL structure is able to extract numerous meaningful low-level and high-level radiomic features during the training process without any feature type nominations by researchers. We thus believe that DL methods could achieve better predictions than traditional ML methods if sufficiently large and proper data were available.
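    The three survival-time groups amount to simple threshold binning of OST. A sketch, assuming survival is reported in days (as in BraTS clinical files) and using thresholds of 10 and 15 months; the day-to-month conversion factor is our assumption:

```python
def ost_group(survival_days: float) -> str:
    """Map overall survival time to one of the three survival groups.
    Thresholds assume 10 and 15 months, with one month ~ 30.44 days."""
    months = survival_days / 30.44
    if months < 10:
        return "short-term"
    if months <= 15:
        return "mid-term"
    return "long-term"

print(ost_group(150))    # ~4.9 months  -> short-term
print(ost_group(380))    # ~12.5 months -> mid-term
print(ost_group(600))    # ~19.7 months -> long-term
```

    These discrete labels are what the SVM/KNN/DT/NN classifiers above are trained to predict from the radiomic features.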

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, and giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, considering all types of audience, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods for cancer diagnosis. Artificial intelligence and cancer diagnosis are gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DAEs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
    The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers opting to implement deep learning and artificial neural networks for cancer diagnosis with from-scratch knowledge of the state-of-the-art achievements.
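    Several of the evaluation criteria listed above follow directly from the confusion matrix. A small pure-Python sketch computing F1 score and Jaccard index on toy labels (not code from the paper):

```python
def confusion(y_true, y_pred):
    """Counts of true positives, false positives, false negatives, true negatives."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def f1_and_jaccard(y_true, y_pred):
    tp, fp, fn, _ = confusion(y_true, y_pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    jaccard = tp / (tp + fp + fn)
    return f1, jaccard

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
f1, jac = f1_and_jaccard(y_true, y_pred)
print(round(f1, 3), round(jac, 3))   # 0.75 0.6
```

    The Dice coefficient equals the F1 score on binary masks, which is why both appear in the criteria list for segmentation and classification work alike.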

    Brain tumor MRI medical images classification with data augmentation by transfer learning of VGG16

    The ability to estimate conclusions without direct human input in healthcare systems via computer algorithms is known as artificial intelligence (AI) in healthcare. Deep learning (DL) approaches are already being employed for healthcare purposes, and in the case of medical image analysis, DL paradigms have opened a world of opportunities. This paper describes the creation of a DL model, based on transfer learning of VGG16, that can correctly classify MRI images as either tumorous or non-tumorous. In addition, the model employed data augmentation in order to balance the dataset and increase the number of images. The dataset comes from the brain tumour classification project, which contains publicly available tumorous and non-tumorous images. The results showed that the model performed better with the augmented dataset, with its validation accuracy reaching ~100%.
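    Data augmentation of the kind mentioned above can be as simple as label-preserving flips and rotations. A minimal numpy sketch (these specific transforms are a common choice, not necessarily the ones used in the paper):

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate simple label-preserving variants of one MRI slice:
    horizontal/vertical flips and 90-degree rotations."""
    variants = [image]
    variants.append(np.fliplr(image))   # mirror left-right
    variants.append(np.flipud(image))   # mirror top-bottom
    for k in (1, 2, 3):
        variants.append(np.rot90(image, k))
    return variants

slice_ = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for an MRI slice
augmented = augment(slice_)
print(len(augmented))   # 6 images from 1
```

    Multiplying each training image this way balances the tumorous/non-tumorous classes without collecting new scans, which is what lets the fine-tuned VGG16 reach its reported validation accuracy.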