99 research outputs found

    Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks

    A cascade of fully convolutional neural networks is proposed to segment multi-modal magnetic resonance (MR) images of brain tumors into background and three hierarchical regions: whole tumor, tumor core, and enhancing tumor core. The cascade decomposes the multi-class segmentation problem into a sequence of three binary segmentation problems according to the subregion hierarchy. The whole tumor is segmented in the first step, and the bounding box of the result is used for tumor core segmentation in the second step. The enhancing tumor core is then segmented based on the bounding box of the tumor core segmentation result. Our networks consist of multiple layers of anisotropic and dilated convolution filters, and they are combined with multi-view fusion to reduce false positives. Residual connections and multi-scale predictions are employed in these networks to boost segmentation performance. Experiments with the BraTS 2017 validation set show that the proposed method achieved average Dice scores of 0.7859, 0.9050, and 0.8378 for enhancing tumor core, whole tumor, and tumor core, respectively. The corresponding values for the BraTS 2017 testing set were 0.7831, 0.8739, and 0.7748, respectively. Comment: 12 pages, 5 figures. MICCAI BraTS Challenge 201
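The three-stage cascade described above can be sketched with stand-in segmenters (plain intensity thresholds here, not the paper's anisotropic CNNs); only the bounding-box hand-off between stages is the point of this sketch, and all names are illustrative:

```python
import numpy as np

def bounding_box(mask):
    """Return slices for the tight bounding box of a binary mask."""
    coords = np.argwhere(mask)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    return tuple(slice(a, b) for a, b in zip(lo, hi))

def cascaded_segmentation(image, seg_whole, seg_core, seg_enh):
    """Run three binary segmentation stages, each restricted to the
    bounding box of the previous stage's output."""
    whole = seg_whole(image)                 # stage 1: whole tumor
    box1 = bounding_box(whole)
    core = np.zeros_like(whole)
    core[box1] = seg_core(image[box1])       # stage 2: tumor core, inside box1
    box2 = bounding_box(core)
    enh = np.zeros_like(whole)
    enh[box2] = seg_enh(image[box2])         # stage 3: enhancing core, inside box2
    return whole, core, enh
```

With threshold "segmenters" on a toy image, each stage's output is nested inside the previous one, mirroring the subregion hierarchy.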

    MRI image segmentation using machine learning networks and level set approaches

    Segmenting brain tissues from magnetic resonance images (MRI) poses substantial challenges to the clinical research community, especially when precise estimation of such tissues is required. In recent years, advances in deep learning techniques, more specifically in fully convolutional neural networks (FCNs), have yielded path-breaking results in segmenting brain tumour tissue with high accuracy and precision, much to the relief of clinical physicians and researchers alike. A new hybrid deep learning architecture combining SegNet and U-Net techniques to segment brain tissue is proposed here. A skip connection of the U-Net network was suitably explored. The results indicated optimal multi-scale information generated from the SegNet, which was further exploited to obtain precise tissue boundaries from the brain images. Further, to ensure that the segmentation method performed well in conjunction with precisely delineated contours, the output is incorporated as a level-set layer in the deep learning network. The method was evaluated on the brain tumour segmentation (BraTS) 2017 and BraTS 2018 datasets, which are dedicated MRI brain tumour datasets. The results clearly indicate better performance in segmenting brain tumours than existing methods.
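The U-Net-style skip connection mentioned above can be sketched in a few lines of numpy; the shapes and nearest-neighbour upsampling are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour upsampling: (H, W, C) -> (2H, 2W, C)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def skip_merge(decoder_feat, encoder_feat):
    """U-Net-style skip connection: upsample the decoder feature map
    and concatenate the matching encoder feature map along channels,
    so fine-grained boundary detail re-enters the decoder."""
    up = upsample2x(decoder_feat)
    return np.concatenate([up, encoder_feat], axis=-1)
```

A decoder map of shape (4, 4, 8) merged with an encoder map of shape (8, 8, 4) yields an (8, 8, 12) tensor, combining coarse semantic and fine spatial information.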

    Longitudinal Brain Tumor Tracking, Tumor Grading, and Patient Survival Prediction Using MRI

    This work aims to develop novel methods for brain tumor classification, longitudinal brain tumor tracking, and patient survival prediction. Consequently, this dissertation proposes three tasks. First, we develop a framework for brain tumor segmentation prediction in longitudinal multimodal magnetic resonance imaging (mMRI) scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with tumor cell density features in order to obtain tumor segmentation predictions in follow-up scans from a baseline pre-operative timepoint. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). With the advantages of feature fusion and label fusion, we achieve state-of-the-art brain tumor segmentation prediction. Second, we propose a deep neural network (DNN) learning-based method for brain tumor type and subtype grading using phenotypic and genotypic data, following the World Health Organization (WHO) criteria. In addition, the classification method integrates a cellularity feature derived from the morphology of a pathology image to improve classification performance. The proposed method achieves state-of-the-art performance for tumor grading under the new CNS tumor grading criteria. Finally, we investigate brain tumor volume segmentation, tumor subtype classification, and overall patient survival prediction, and then propose a new context-aware deep learning method, known as the Context Aware Convolutional Neural Network (CANet).
Using the proposed method, we participated in the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) for the brain tumor volume segmentation and overall survival prediction tasks. We also participated in the Radiology-Pathology Challenge 2019 (CPM-RadPath 2019) for brain tumor subtype classification, organized by the Medical Image Computing & Computer Assisted Intervention (MICCAI) Society. The online evaluation results show that the proposed methods offer competitive, state-of-the-art performance in tumor volume segmentation, promising performance in overall survival prediction, and state-of-the-art performance in tumor subtype classification. Moreover, our result ranked second in the testing phase of CPM-RadPath 2019.

    Brain Tumor Segmentation and Classification Using Neural Networks

    Magnetic resonance imaging (MRI) is widely used in the diagnosis and treatment evaluation of brain tumors. Segmentation is a critical step of tumor assessment, and it is usually a time-consuming task with conventional image analysis methods. In this thesis, I utilized deep learning methods to automate the tumor segmentation and classification tasks. Two models were used: a segmentation model and a classification model. I used U-Net for the segmentation task and a convolutional neural network followed by fully connected layers for the classification task. I evaluated the networks on the Multimodal Brain Tumor Segmentation Challenge 2020 (BraTS 2020) dataset. Image slices were sampled along the axial axis using three modalities: T1 contrast-enhanced, T2-weighted, and fluid-attenuated inversion recovery. 2-dimensional image slices were used for training in the segmentation task, and annotated images were used for training in the classification task.
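Sampling per-slice multi-channel 2D inputs from co-registered modality volumes, as the abstract describes, might look like the sketch below (shapes and the stacking convention are illustrative assumptions):

```python
import numpy as np

def axial_slices(volumes):
    """Stack co-registered modality volumes (each of shape D x H x W)
    into per-slice multi-channel 2D inputs of shape (H, W, M), one
    per axial position; M is the number of modalities."""
    vol = np.stack(volumes, axis=-1)        # (D, H, W, M)
    return [vol[i] for i in range(vol.shape[0])]
```

Three modality volumes of shape (155, 240, 240), for example, would yield 155 axial slices of shape (240, 240, 3), ready for a 2D network.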

    Deep learning networks for automatic brain tumour segmentation from MRI data

    Early diagnosis and appropriate treatment planning are key to brain tumour patients’ survival rate. Radiotherapy (RT) is a common treatment for brain tumours, and RT planning requires segmentation of a gross tumour volume (GTV). Manual segmentation of the brain tumour by expert oncologists or clinicians is time-consuming and subject to intra- and inter-observer variability. This research presents novel image processing and deep learning methods for automatic segmentation of brain tumour regions from MRI data. The MRI data of brain tumour patients from the Brain Tumour Segmentation (BraTS) datasets from 2018-2021 are used in this study. A 2D deep neural network for semantic segmentation of brain tumour regions from 2D axial multimodal (T1, T1Gd, T2, and FLAIR) MRI slices is presented. This proposed network is trained and tested on manual consensus labels by experts from the BraTS 2018 dataset. The network has a similar architecture to U-Net, consisting of a stream of down-sampling blocks for feature extraction and reduction of the image resolution, followed by a stream of up-sampling blocks to recover the image resolution, integrate features, and classify pixels. The proposed network improves feature extraction by introducing two-pathway feature extraction in the first down-sampling block to extract local and global features directly from the input images. Transposed convolution is employed in the up-sampling path. The proposed network was evaluated for the segmentation of five tumour regions: whole tumour (WT), tumour core (TC), necrotic and non-enhancing tumour (NCR/NET), edema (ED), and enhancing tumour (ET). The modified U-Net achieved mean Dice Similarity Coefficients (DSC) of 0.83, 0.62, 0.45, 0.69, and 0.70 for WT, TC, NCR/NET, ED, and ET, respectively, a 9% improvement over the original U-Net’s performance. The 2D predicted segmentations obtained from the proposed network are stacked to visualise the tumour volume.
A novel deep neural network called 2D TwoPath U-Net for multi-class segmentation of brain tumour regions is described. The proposed network improves two-pathway feature extraction to provide cascaded local and global features from 2D multimodal MRI input. The network was trained on MRI data from the BraTS 2019 dataset and tested on MRI data from the BraTS 2020 dataset. Data augmentation and different training strategies, including the use of full-size images and patches, were employed to improve the predicted segmentation. The segmentations obtained from the proposed network feature all intra-tumoural structures (NCR/NET, ED, ET), which together form the WT and TC regions, and achieved mean DSC of 0.72 and 0.66 for WT and TC, respectively. A novel 3D deep neural network for brain tumour region segmentation from MRI data, called 3D TwoPath U-Net, is also described. The network has a similar structure to the 2D TwoPath U-Net and uses two-pathway feature extraction to capture local and global features from volumetric MRI data from the BraTS 2021 dataset. The volumetric data were created using the T1Gd and FLAIR modalities. Because a 3D deep neural network has a significantly higher number of parameters, cropped voxels from the volumetric MRI were used to reduce the input resolution, and high-performance GPUs were employed to implement the network. The proposed network achieved mean DSC of 0.87, 0.70, and 0.58 for WT, TC, and ET segmentation, respectively, a 25% improvement over the previous segmentation results obtained using the 2D approach. Moreover, the smooth 3D tumour volume generated from the network output provides a more visually representative depiction of the tumour.
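The two-pathway idea of pairing a small and a large receptive field can be imitated with simple mean filters; this is a toy stand-in for the convolutional pathways, not the TwoPath U-Net itself, and the 3x3/7x7 window sizes are illustrative:

```python
import numpy as np

def box_filter(img, k):
    """Mean filter with a k x k window (edge-padded, same-size output),
    a stand-in for a convolutional pathway with receptive field k."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def two_path_features(img):
    """Concatenate a local (3x3) and a global (7x7) pathway as channels,
    so each pixel carries both fine and contextual information."""
    return np.stack([box_filter(img, 3), box_filter(img, 7)], axis=-1)
```

The output has one channel per pathway; a real network would learn the filters rather than averaging, but the local/global split is the same.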

    Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation

    Automatic brain tumor segmentation plays an important role in the diagnosis, surgical planning, and treatment assessment of brain tumors. Deep convolutional neural networks (CNNs) have been widely used for this task. Because the available training data are relatively limited, data augmentation at training time has commonly been used to improve CNN performance. Recent works also demonstrated the usefulness of augmentation at test time, in addition to training time, for achieving more robust predictions. We investigate how test-time augmentation can improve CNNs' performance for brain tumor segmentation. We used different underpinning network structures and augmented the images by 3D rotation, flipping, scaling, and adding random noise at both training and test time. Experiments with the BraTS 2018 training and validation sets show that test-time augmentation helps to improve brain tumor segmentation accuracy and to obtain uncertainty estimates of the segmentation results. Comment: 12 pages, 3 figures, MICCAI BrainLes 201
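A minimal sketch of test-time augmentation using only flip transforms, assuming a flip-equivariant `predict` function (the paper also uses 3D rotation, scaling, and noise); the per-voxel variance across augmented predictions serves as a simple uncertainty estimate:

```python
import numpy as np

def tta_predict(predict, image):
    """Run inference on flip-augmented copies of the image, invert each
    transform on the prediction, then fuse: the mean is the final
    prediction and the variance a rough uncertainty map."""
    preds = []
    for axis in (None, 0, 1):
        aug = image if axis is None else np.flip(image, axis)
        p = predict(aug)
        # undo the augmentation so predictions align spatially
        preds.append(p if axis is None else np.flip(p, axis))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)
```

For a perfectly flip-equivariant predictor the variance is zero everywhere; in practice, regions where the network disagrees with itself under augmentation show high variance.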

    Deep Learning Methods for Classification of Gliomas and Their Molecular Subtypes, From Central Learning to Federated Learning

    Gliomas are the most common type of brain cancer in adults. Under the updated 2016 World Health Organization (WHO) classification of central nervous system (CNS) tumors, identification of the molecular subtypes of gliomas is important. For low-grade gliomas (LGGs), predicting molecular subtypes from magnetic resonance imaging (MRI) scans may be difficult without taking a biopsy. With the development of machine learning (ML) methods such as deep learning (DL), molecular-based classification methods have shown promising results from MRI scans that may assist clinicians in prognosis and in deciding on a treatment strategy. However, DL requires large training datasets with tumor class labels and tumor boundary annotations, and manual annotation of tumor boundaries is a time-consuming and expensive process. The thesis is based on the work developed in five papers on gliomas and their molecular subtypes. We propose novel methods that provide improved performance. The proposed methods consist of a multi-stream convolutional autoencoder (CAE)-based classifier, a deep convolutional generative adversarial network (DCGAN) to enlarge the training dataset, a CycleGAN to handle domain shift, a novel federated learning (FL) scheme that allows local client-based training with dataset protection, and the use of bounding boxes on MRIs when tumor boundary annotations are not available. Experimental results showed that DCGAN-generated MRIs enlarged the original training dataset and improved classification performance on the test sets. CycleGAN showed good domain adaptation on multiple source datasets and improved classification performance. The proposed FL scheme showed only slightly degraded performance compared to the central learning (CL) approach while protecting dataset privacy.
Tumor bounding boxes proved to be a viable alternative to tumor boundary annotation for tumor classification and segmentation, trading a slight decrease in performance for the time saved in manual marking by clinicians. The proposed methods may benefit future research in bringing DL tools into clinical practice, assisting tumor diagnosis and supporting the decision-making process.
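The server-side step of a federated scheme like the one described can be sketched as a FedAvg-style weighted average of client parameters (an assumption for illustration; the thesis's exact aggregation may differ). Only model weights, never patient images, leave the clients:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter tensor across
    clients, weighted by local dataset size. client_weights is a list
    (one entry per client) of lists of parameter arrays."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(n_params)
    ]
```

With clients holding 1 and 3 samples, the aggregate weights the second client three times as heavily, matching its share of the data.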