Inception Modules Enhance Brain Tumor Segmentation.
Magnetic resonance images of brain tumors are routinely used in neuro-oncology clinics for diagnosis, treatment planning, and post-treatment tumor surveillance. Currently, physicians spend considerable time manually delineating different structures of the brain. Spatial and structural variations, as well as intensity inhomogeneity across images, make computer-assisted segmentation very challenging. We propose a new image segmentation framework for tumor delineation that benefits from two state-of-the-art machine learning architectures in computer vision: Inception modules and the U-Net image segmentation architecture. Furthermore, our framework includes two learning regimes: learning to segment intra-tumoral structures (necrotic and non-enhancing tumor core, peritumoral edema, and enhancing tumor) or learning to segment glioma sub-regions (whole tumor, tumor core, and enhancing tumor). These learning regimes are incorporated into a newly proposed loss function based on the Dice similarity coefficient (DSC). In our experiments, we quantified the impact of introducing Inception modules into the U-Net architecture, as well as changing the objective function of the learning algorithm from segmenting the intra-tumoral structures to segmenting the glioma sub-regions. We found that incorporating Inception modules significantly improved segmentation performance (p < 0.001) for all glioma sub-regions. Moreover, in architectures with Inception modules, the models trained with the objective of segmenting the intra-tumoral structures outperformed the models trained with the objective of segmenting the glioma sub-regions for the whole tumor (p < 0.001). The improved performance is linked to the multiscale features extracted by the newly introduced Inception modules and the modified loss function based on the DSC.
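A DSC-based objective of the kind described above can be sketched as a simple Dice loss. The following is a minimal NumPy illustration on binary masks (function names are ours, for illustration); the paper's actual loss operates per-region on soft network outputs.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with eps guarding empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def dice_loss(pred, target):
    """Loss to minimize during training: 1 - DSC."""
    return 1.0 - dice_coefficient(pred, target)
```

A perfectly matching prediction gives a DSC near 1 and a loss near 0; masks with half their positive pixels overlapping give a DSC of 0.5.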
Gliomas are among the most prevalent and destructive brain tumors, and their segmentation from MRI using computerized methods plays a crucial role in diagnosis and treatment. Recently, the U-Net architecture has achieved impressive brain tumor segmentation results, but the task remains challenging due to the varying severity and appearance of gliomas. Therefore, in this work we propose a novel encoder-decoder architecture called Multi Inception Residual Attention U-Net (MIRAU-Net). It integrates residual and inception modules with attention gates into U-Net to further enhance brain tumor segmentation performance. In this architecture, the encoder and decoder are connected through Inception Residual pathways to decrease the distance between their feature maps. We use weighted cross-entropy and generalized Dice loss (GDL) together with the focal Tversky loss function to resolve the class imbalance problem. MIRAU-Net was evaluated on BraTS 2019 and obtained mean Dice similarities of 0.885 for the whole tumor, 0.879 for the tumor core, and 0.818 for the enhancing tumor. Experimental results reveal that the proposed MIRAU-Net beats its baselines and provides better performance than recent techniques for brain tumor segmentation. This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDP/UIDB/04234/2020).
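As a rough illustration of one ingredient of the combined objective above, the focal Tversky loss can be sketched as follows. This is the generic textbook formulation, not the authors' implementation; the alpha, beta, and gamma values are common defaults assumed for illustration.

```python
import numpy as np

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss on soft predictions in [0, 1].

    alpha weights false negatives and beta false positives, which lets
    training favor recall on small structures; raising (1 - Tversky)
    to the power gamma down-weights easy, well-segmented examples.
    """
    tp = np.sum(pred * target)                 # soft true positives
    fn = np.sum((1.0 - pred) * target)         # soft false negatives
    fp = np.sum(pred * (1.0 - target))         # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma
```

A perfect prediction yields a loss of 0, while a prediction that misses every positive voxel yields a loss close to 1.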
Dilated Inception U-Net (DIU-Net) for Brain Tumor Segmentation
Magnetic resonance imaging (MRI) is routinely used for brain tumor diagnosis, treatment planning, and post-treatment surveillance. Recently, various models based on deep neural networks have been proposed for the pixel-level segmentation of tumors in brain MRIs. However, the structural variations, spatial dissimilarities, and intensity inhomogeneity in MRIs make segmentation a challenging task. We propose a new end-to-end brain tumor segmentation architecture based on U-Net that integrates Inception modules and dilated convolutions into its contracting and expanding paths. This allows us to extract local structural as well as global contextual information. We performed segmentation of glioma sub-regions, including tumor core, enhancing tumor, and whole tumor, using the Brain Tumor Segmentation (BraTS) 2018 dataset. Our proposed model performed significantly better than the state-of-the-art U-Net-based model (p < 0.05) for tumor core and whole tumor segmentation.
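The appeal of dilated convolutions here is that they enlarge the receptive field without adding parameters: the taps of a k x k kernel are spread d pixels apart, so the kernel spans k + (k - 1)(d - 1) pixels per axis. A small helper (ours, for illustration) makes the growth explicit:

```python
def effective_kernel_size(k: int, d: int) -> int:
    """Span, in pixels, of a k-tap convolution with dilation rate d.

    With d = 1 this is an ordinary convolution; larger d inserts
    d - 1 gaps between taps, widening context at constant cost.
    """
    return k + (k - 1) * (d - 1)
```

For example, a 3x3 kernel covers a 3-pixel span at dilation 1, a 5-pixel span at dilation 2, and a 9-pixel span at dilation 4, which is why stacking a few dilation rates captures both local structure and global context.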
Medical Image Segmentation Review: The success of U-Net
Automatic medical image segmentation is a crucial topic in the medical domain and a critical component of the computer-aided diagnosis paradigm. U-Net is the most widespread image segmentation architecture due to its flexibility, optimized modular design, and success across all medical imaging modalities. Over the years, the U-Net model has attracted tremendous attention from academic and industrial researchers. Several extensions of this network have been proposed to address the scale and complexity of medical tasks. Addressing the deficiencies of the naive U-Net model is the foremost step for vendors in choosing the proper U-Net variant for their application. Having a compendium of the different variants in one place makes it easier for builders to identify the relevant research; it also helps ML researchers understand the challenges posed by the biological tasks the model must handle. To address this, we discuss the practical aspects of the U-Net model and suggest a taxonomy to categorize each network variant. Moreover, to measure the performance of these strategies in a clinical application, we propose fair evaluations of some unique and famous designs on well-known datasets. We provide a comprehensive implementation library with trained models for future research. In addition, for ease of future studies, we created an online list of U-Net papers with their official implementations where available. All information is gathered in the https://github.com/NITR098/Awesome-U-Net repository.
Comment: Submitted to the IEEE Transactions on Pattern Analysis and Machine Intelligence Journal
A review on detecting brain tumors using deep learning and magnetic resonance images
Early detection and treatment in the medical field offer a critical opportunity to improve patient survival. The brain plays a significant role in human life, as it controls most bodily activities, and an accurate diagnosis of brain tumors dramatically helps speed up the patient's recovery and reduce the cost of treatment. Magnetic resonance imaging (MRI) is a commonly used imaging technique, and owing to the massive progress of artificial intelligence in medicine, machine learning and, more recently, deep learning have shown significant results in detecting brain tumors. This review paper is a comprehensive article suitable as a starting point for researchers, demonstrating the essential aspects of using deep learning to diagnose brain tumors. More specifically, it is restricted to detecting brain tumors (binary classification as normal or tumor) using MRI datasets from 2020 and 2021. In addition, the paper presents the frequently used datasets, convolutional neural network architectures (standard and custom-designed), and transfer learning techniques. The crucial limitations of applying the deep learning approach, including a lack of datasets, overfitting, and vanishing gradient problems, are also discussed. Finally, alternative solutions to these limitations are presented.
Bidirectional ConvLSTMXNet for Brain Tumor Segmentation of MR Images
In recent years, deep-learning-based networks have achieved good performance in brain tumor segmentation of MR images. Among the existing networks, U-Net has been applied successfully. In this paper, we propose a deep-learning-based Bidirectional Convolutional LSTM XNet (BConvLSTMXNet) for segmentation of brain tumors, and use GoogLeNet to classify tumor vs. non-tumor. Evaluated on the BraTS 2019 dataset, classification of tumor and non-tumor obtained Accuracy: 0.91, Precision: 0.95, Recall: 1.00, and F1-Score: 0.92. Similarly, brain tumor segmentation obtained Accuracy: 0.99, Specificity: 0.98, Sensitivity: 0.91, Precision: 0.91, and F1-Score: 0.88.
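The metrics reported above follow the standard confusion-matrix definitions. A minimal NumPy sketch (our helper, not the paper's code) shows how they relate on binary label arrays:

```python
import numpy as np

def classification_metrics(pred, target):
    """Standard binary metrics from two boolean label arrays.

    Recall is also called sensitivity; specificity is the recall
    of the negative class; F1 is the harmonic mean of precision
    and recall.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    tp = np.sum(pred & target)
    tn = np.sum(~pred & ~target)
    fp = np.sum(pred & ~target)
    fn = np.sum(~pred & target)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / pred.size,
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
    }
```

Note that a recall of 1.00 alongside a precision of 0.95, as reported for the classification task, means the model missed no tumors but produced a few false positives.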