17 research outputs found

    Efficient segmentation and classification of the tumor using improved encoder-decoder architecture in brain MRI images

    Get PDF
    Early diagnosis of brain tumors is crucial for improving treatment outcomes and patient survival. T1-weighted contrast-enhanced Magnetic Resonance Imaging (MRI) provides the most anatomically relevant images. Yet despite continual advances in the medical field, assessing tumor shape and size and performing segmentation and classification remain difficult, because manual segmentation of MRI images with high precision and accuracy is a time-consuming and very challenging task. Newer digital methods such as deep learning algorithms are therefore used for tumor diagnosis and can yield far better results. Deep learning algorithms have significantly advanced research in artificial intelligence and help in better understanding and analyzing medical images. The work presented in this paper is a fully automatic brain tumor segmentation and classification model with an encoder-decoder architecture that improves on the traditional UNet by embedding three ResNet variants, namely ResNet 50, ResNet 101, and ResNeXt 50, with proper hyperparameter tuning. Various data augmentation techniques were used to improve model performance. The overall performance of the model was tested on a publicly available MRI image dataset containing three common types of tumors. The proposed model outperformed several other deep learning architectures on quality metrics including the Dice Similarity Coefficient (DSC) and Mean Intersection over Union (Mean IoU), thereby enhancing tumor analysis.
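    As a rough illustration of this kind of encoder-decoder (a minimal sketch, not the authors' code), the segmentation_models_pytorch library can assemble a UNet whose encoder backbone is swapped among the ResNet variants named above; the channel and class counts here are assumptions:

    ```python
    # Minimal sketch (assumptions: segmentation_models_pytorch installed,
    # single-channel T1-CE input, 2 output classes: background / tumor).
    import segmentation_models_pytorch as smp

    def build_model(backbone: str = "resnet50"):
        # backbone can be swapped among "resnet50", "resnet101",
        # and "resnext50_32x4d", mirroring the three embedded variants.
        return smp.Unet(
            encoder_name=backbone,        # ResNet/ResNeXt encoder
            encoder_weights="imagenet",   # pretrained initialization
            in_channels=1,                # one MRI channel
            classes=2,                    # background vs. tumor
        )

    model = build_model("resnext50_32x4d")
    ```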

    Automatic multiclass intramedullary spinal cord tumor segmentation on MRI with deep learning

    Get PDF
    Spinal cord tumors lead to neurological morbidity and mortality. Being able to obtain morphometric quantification (size, location, growth rate) of the tumor, edema, and cavity can result in improved monitoring and treatment planning. Such quantification requires the segmentation of these structures into three separate classes. However, manual segmentation of three-dimensional structures is time-consuming, tedious, and prone to intra- and inter-rater variability, motivating the development of automated methods. Here, we tailor a model adapted to the spinal cord tumor segmentation task. Data were obtained from 343 patients using gadolinium-enhanced T1-weighted and T2-weighted MRI scans with cervical, thoracic, and/or lumbar coverage. The dataset includes the three most common intramedullary spinal cord tumor types: astrocytomas, ependymomas, and hemangioblastomas. The proposed approach is a cascaded architecture with U-Net-based models that segments tumors in a two-stage process: locate and label. The model first finds the spinal cord and generates bounding box coordinates. The images are cropped according to this output, leading to a reduced field of view, which mitigates class imbalance. The tumor is then segmented. The segmentation of the tumor, cavity, and edema (as a single class) reached a Dice score of 76.7 ± 1.5%, and the segmentation of tumors alone reached a Dice score of 61.8 ± 4.0%. The true positive detection rate was above 87% for tumor, edema, and cavity. To the best of our knowledge, this is the first fully automatic deep learning model for spinal cord tumor segmentation. The multiclass segmentation pipeline is available in the Spinal Cord Toolbox (https://spinalcordtoolbox.com/). It can be run with custom data on a regular computer within seconds.
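    A compact sketch of the locate-then-segment cascade (generic Python with assumed model callables, not the released Spinal Cord Toolbox code):

    ```python
    # Generic sketch of a two-stage cascade; `locate_model` and `tumor_model`
    # are assumed callables returning voxel-wise probability maps.
    import numpy as np

    def cascaded_segmentation(volume, locate_model, tumor_model, margin=8):
        # Stage 1: find the spinal cord and derive a bounding box.
        cord = locate_model(volume) > 0.5
        coords = np.argwhere(cord)
        lo = np.maximum(coords.min(axis=0) - margin, 0)
        hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
        # Crop to the cord: the reduced field of view mitigates class imbalance.
        crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        # Stage 2: multiclass segmentation (tumor, edema, cavity) on the crop.
        return tumor_model(crop), (lo, hi)
    ```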

    Numerical Simulation and Design of COVID-19 Disease Detection System Based on Improved Computing Techniques

    Get PDF
    As the coronavirus epidemic continues, the high demand for testing has led to a shortage of resources at hospitals. Computer vision-based systems can be used to increase the efficiency of COVID-19 detection. However, a large amount of training data is needed to create an accurate and reliable model, which is currently impractical given the novel nature of the illness. One such model differentiates pneumonia cases using radiographs, and it has achieved sufficiently high accuracy to be used on patients. Various models are currently being used within the healthcare sector to classify different illnesses. This work evaluates the benefit of using transfer learning to improve the performance of a COVID-19 detection model, starting from the premise that limited data are available for COVID-19 identification. Infections that affect the human lungs include viral pneumonia caused by the coronavirus and other viruses. The World Health Organization (WHO) declared COVID-19 a pandemic in 2020; the disease originated in China and quickly spread to other countries. Early diagnosis of infected patients helps save the patient's life and prevents the infection's further spread. As one of the fastest and least expensive methods for diagnosing the condition, a convolutional neural network (CNN) model is proposed in this research study to assist in the early detection of the infection using chest X-ray images. Two CNN models were created using two different datasets. The first model was created for binary classification using a dataset that contained only pneumonia cases and normal chest X-ray images. The second model used the knowledge learned by the first model via transfer learning and was created for three-class classification of chest X-ray images into COVID-19, pneumonia, and normal cases.
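    The two-stage transfer-learning idea might be sketched as follows (a hedged sketch: the saved-model file name and layer layout are assumptions, not the authors' code). The binary pneumonia model is reused as a frozen feature extractor and given a new three-way head:

    ```python
    # Minimal sketch (assumption: a trained binary pneumonia/normal model
    # was saved as "pneumonia_binary.h5"; its layer layout is hypothetical).
    import tensorflow as tf

    base = tf.keras.models.load_model("pneumonia_binary.h5")
    base.trainable = False                      # freeze learned features

    # Replace the 2-way head with a 3-way softmax (COVID / pneumonia / normal).
    features = base.layers[-2].output
    outputs = tf.keras.layers.Dense(3, activation="softmax")(features)
    model = tf.keras.Model(inputs=base.input, outputs=outputs)

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(covid_xrays, labels, epochs=10)  # placeholder training call
    ```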

    DeepSeg: Deep Neural Network Framework for Automatic Brain Tumor Segmentation using Magnetic Resonance FLAIR Images

    Full text link
    Purpose: Gliomas are the most common and aggressive type of brain tumors due to their infiltrative nature and rapid progression. The process of distinguishing tumor boundaries from healthy cells is still a challenging task in the clinical routine. The Fluid-Attenuated Inversion Recovery (FLAIR) MRI modality can provide the physician with information about tumor infiltration. Therefore, this paper proposes a new generic deep learning architecture, namely DeepSeg, for fully automated detection and segmentation of brain lesions using FLAIR MRI data. Methods: The developed DeepSeg is a modular decoupling framework. It consists of two connected core parts based on an encoding and decoding relationship. The encoder part is a convolutional neural network (CNN) responsible for spatial information extraction. The resulting semantic map is passed to the decoder part to obtain the full-resolution probability map. Based on a modified U-Net architecture, different CNN models such as Residual Neural Network (ResNet), Dense Convolutional Network (DenseNet), and NASNet have been utilized in this study. Results: The proposed deep learning architectures have been successfully tested and evaluated online on the MRI dataset of the Brain Tumor Segmentation (BraTS 2019) challenge, comprising 336 training cases and 125 validation cases. The Dice and Hausdorff distance scores of the obtained segmentation results are about 0.81 to 0.84 and 9.8 to 19.7, respectively. Conclusion: This study showed the feasibility and comparative performance of applying different deep learning models in the new DeepSeg framework for automated brain tumor segmentation in FLAIR MR images. The proposed DeepSeg is open-source and freely available at https://github.com/razeineldin/DeepSeg/. Comment: Accepted to the International Journal of Computer Assisted Radiology and Surgery.
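    For reference, the two reported evaluation metrics can be computed for binary masks as follows (a generic metric sketch, not code from the paper); the directed Hausdorff distance here operates on foreground voxel coordinates:

    ```python
    # Generic metric sketch for binary segmentation masks.
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(pred: np.ndarray, truth: np.ndarray) -> float:
        # Dice = 2|A ∩ B| / (|A| + |B|)
        inter = np.logical_and(pred, truth).sum()
        return 2.0 * inter / (pred.sum() + truth.sum())

    def hausdorff(pred: np.ndarray, truth: np.ndarray) -> float:
        # Symmetric Hausdorff distance between foreground voxel point sets.
        a, b = np.argwhere(pred), np.argwhere(truth)
        return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
    ```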

    Fully automated identification of brain abnormality from whole-body FDG-PET imaging using deep learning-based brain extraction and statistical parametric mapping

    Get PDF
    Background The whole brain is often covered in [18F]Fluorodeoxyglucose positron emission tomography ([18F]FDG-PET) in oncology patients, but brain abnormalities in the covered volume are typically screened by visual interpretation without quantitative analysis in clinical practice. In this study, we aimed to develop a fully automated pipeline for quantitative interpretation of the brain volume in oncologic PET images. Method We retrospectively collected 500 oncologic [18F]FDG-PET scans for training and validation of the automated brain extractor. We trained the model for extracting brain volume with two manually drawn bounding boxes on maximal intensity projection images. ResNet-50, a 2-D convolutional neural network (CNN), was used for the model training. The brain volume was automatically extracted using the CNN model and spatially normalized. For validation of the trained model and an application of this automated analytic method, we enrolled 24 subjects with small cell lung cancer (SCLC) and performed a voxel-wise two-sample t-test for automatic detection of metastatic lesions. Result The deep learning-based brain extractor successfully identified the existence of whole-brain volume, with an accuracy of 98% for the validation set. The performance of extracting the brain, measured by the intersection-over-union of 3-D bounding boxes, was 72.9 ± 12.5% for the validation set. As an example of the application to automatically identify brain abnormality, this approach successfully identified the metastatic lesions in three of the four cases of SCLC patients with brain metastasis. Conclusion Based on the deep learning-based model, extraction of the brain volume from whole-body PET was successfully performed. We suggest this fully automated approach could be used for the quantitative analysis of brain metabolic patterns to identify abnormalities during clinical interpretation of oncologic PET studies. This research was supported by the National Research Foundation of Korea (NRF-2019R1F1A1061412 and NRF2019K1A3A1A14065446). This work was supported by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Number: 202011A06) and the Seoul R&BD Program (BT200151).
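    The statistical-mapping step can be sketched in simplified form (assumed array layout and an illustrative threshold; not the study's SPM pipeline): a voxel-wise two-sample t-test over spatially normalized volumes.

    ```python
    # Simplified sketch (assumption: volumes are already spatially normalized
    # and intensity-standardized; the p-value threshold is illustrative).
    import numpy as np
    from scipy import stats

    def abnormal_voxels(patients: np.ndarray, controls: np.ndarray,
                        p_thresh: float = 0.001) -> np.ndarray:
        # patients, controls: (n_subjects, x, y, z) stacks of brain volumes.
        t, p = stats.ttest_ind(patients, controls, axis=0)
        return p < p_thresh   # boolean map of voxels differing between groups
    ```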

    3D Multimodal Brain Tumor Segmentation and Grading Scheme based on Machine, Deep, and Transfer Learning Approaches

    Get PDF
    Glioma is one of the most common tumors of the brain. The detection and grading of glioma at an early stage is very critical for increasing the survival rate of patients. Computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems are essential and important tools that provide more accurate and systematic results to speed up the decision-making process of clinicians. In this paper, we introduce a method combining machine, deep, and transfer learning approaches for effective brain tumor (i.e., glioma) segmentation and grading on the multimodal brain tumor segmentation (BraTS) 2020 dataset. We apply the popular and efficient 3D U-Net architecture for the brain tumor segmentation phase. For the tumor grading phase, we evaluate 23 different combinations of deep feature sets and machine learning/fine-tuned deep learning CNN models based on Xception, IncResNetv2, and EfficientNet, built from 4 different feature sets and 6 learning models. The experimental results demonstrate that the proposed method achieves a 99.5% accuracy rate for slice-based tumor grading on the BraTS 2020 dataset. Moreover, our method shows competitive performance with similar recent works.
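    One such feature-set/learner combination might look like the following (a generic illustration under assumed preprocessing, not the authors' pipeline): global-average-pooled features from a pretrained Xception backbone feeding a classical SVM classifier.

    ```python
    # Generic sketch: pretrained-CNN features + SVM grading (illustrative only).
    import numpy as np
    import tensorflow as tf
    from sklearn.svm import SVC

    # Global-average-pooled Xception features (ImageNet weights, no head).
    extractor = tf.keras.applications.Xception(
        include_top=False, weights="imagenet", pooling="avg")

    def grade_classifier(slices: np.ndarray, labels: np.ndarray) -> SVC:
        # slices: (n, 299, 299, 3) tumor slices already resized and
        # replicated to three channels for the ImageNet backbone.
        x = tf.keras.applications.xception.preprocess_input(
            slices.astype("float32"))
        feats = extractor.predict(x, verbose=0)   # (n, 2048) deep feature set
        return SVC(kernel="rbf").fit(feats, labels)
    ```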

    Compression of MRI brain images based on automatic extraction of tumor region

    Get PDF
    In the compression of medical images, region of interest (ROI) based techniques seem promising, as they can achieve high compression ratios while maintaining the quality of the region of diagnostic importance, the ROI, when the image is reconstructed. In this article, we propose a set-up for compression of brain magnetic resonance imaging (MRI) images based on automatic extraction of the tumor. Our approach is to first separate the tumor, the ROI in our case, from the brain image using support vector machine (SVM) classification and a region extraction step. The tumor region (ROI) is then compressed using arithmetic coding, a lossless compression technique. The non-tumorous region, the non-region of interest (NROI), is compressed using a lossy compression technique formed by a combination of the discrete wavelet transform (DWT), set partitioning in hierarchical trees (SPIHT), and arithmetic coding (AC). The classification performance parameters, such as Dice coefficient, sensitivity, positive predictive value, and accuracy, are tabulated. For compression, we report performance parameters such as mean square error and peak signal-to-noise ratio for a given set of bits-per-pixel (bpp) values. We found that the compression scheme considered in our set-up gives promising results compared to other schemes.
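    A rough sketch of the ROI/NROI split follows (an approximation under assumed parameters; SPIHT and arithmetic coding are not modeled, and the lossy path is mimicked only by discarding small wavelet coefficients):

    ```python
    # Rough sketch (assumptions: 2-D grayscale MRI slice, db4 wavelet, keep
    # the top 5% of NROI coefficients; SPIHT/AC stages are not modeled).
    import numpy as np
    import pywt

    def roi_compress(image: np.ndarray, roi_mask: np.ndarray,
                     wavelet: str = "db4", keep: float = 0.05) -> np.ndarray:
        # ROI (tumor) pixels are preserved losslessly; the NROI goes through
        # a lossy wavelet path that drops insignificant coefficients.
        nroi = np.where(roi_mask, 0.0, image.astype(float))
        arr, slices = pywt.coeffs_to_array(pywt.wavedec2(nroi, wavelet, level=3))
        thresh = np.quantile(np.abs(arr), 1.0 - keep)
        arr[np.abs(arr) < thresh] = 0.0        # discard small NROI coefficients
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        nroi_rec = pywt.waverec2(coeffs, wavelet)[:image.shape[0],
                                                  :image.shape[1]]
        return np.where(roi_mask, image, nroi_rec)
    ```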

    Overview of convolutional neural networks architectures for brain tumor segmentation

    Get PDF
    Due to the paramount importance of the medical field in people's lives, researchers and experts have exploited advances in computing techniques to solve many diagnostic and analytical medical problems. Brain tumor diagnosis is one of the most important computational problems that has been studied. The brain tumor is delineated by segmentation of brain images using many techniques based on magnetic resonance imaging (MRI). Brain tumor segmentation methods have been developed for a long time and are still evolving, but the current trend is to use deep convolutional neural networks (CNNs), due to the many breakthroughs and unprecedented results they have achieved in various applications and their capacity to learn a hierarchy of progressively more complex features from the input without requiring manual feature extraction. Considering these results, we present this paper as a brief review of the main CNN architecture types used in brain tumor segmentation. Specifically, we focus on works that used the well-known brain tumor segmentation (BraTS) dataset.

    Skull stripping using generative adversarial networks with position correction by posture estimation

    Get PDF
    Skull stripping (SS) from brain magnetic resonance imaging (MRI) data is an essential first step in almost every neuroimaging application, such as automatic diagnosis of Alzheimer's disease, structure analysis, and content-based image retrieval systems. In this paper, we propose a generative adversarial skull stripping method (GASS) for fast, accurate, and robust SS. The GASS method learns from a limited number of brain SS training samples and performs fast and accurate SS. In addition, changes in the MRI image due to the posture of the patient during imaging may reduce the accuracy of SS. To mitigate this problem, the GASS method performs SS after applying position correction based on posture estimation. GASS achieved a Dice index of 96.86% in an evaluation experiment using the ADNI2 dataset of 617 patients. There was not a single case in which the Dice index was below 90%, indicating a high degree of robustness.
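    The correct-then-strip inference flow might be outlined as follows (a hypothetical sketch with an assumed tilt estimator and generator callable; not the paper's implementation):

    ```python
    # Hypothetical sketch of the correct-then-strip flow; `generator` and
    # `estimate_tilt` stand in for the trained GAN and posture estimator.
    import numpy as np
    from scipy import ndimage

    def skull_strip(volume: np.ndarray, generator, estimate_tilt) -> np.ndarray:
        angle = estimate_tilt(volume)               # head tilt in degrees
        aligned = ndimage.rotate(volume, -angle, axes=(0, 1), reshape=False)
        mask = generator(aligned) > 0.5             # generator -> brain mask
        # Rotate the mask back so it overlays the original, uncorrected scan.
        back = ndimage.rotate(mask.astype(float), angle, axes=(0, 1),
                              reshape=False)
        return volume * (back > 0.5)                # skull-stripped volume
    ```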