Glioma Diagnosis Aid through CNNs and Fuzzy-C Means for MRI
Glioma is a type of brain tumor that is fatal in many cases, so early diagnosis is an important factor.
Typically, it is detected through MRI and then either a treatment is applied, or it is removed through surgery.
Deep-learning techniques are becoming popular in medical applications and image-based diagnosis.
Convolutional Neural Networks are the preferred architecture for object detection and classification in images.
In this paper, we present a study to evaluate the efficiency of using CNNs for diagnosis aids in glioma
detection and the improvement of the method when using a clustering method (Fuzzy C-means) for preprocessing
the input MRI dataset. Results showed an accuracy improvement from 0.77 to 0.81 when using
Fuzzy C-Means.
Ministerio de Economía y Competitividad TEC2016-77785-
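Fuzzy C-Means assigns each voxel a graded membership in every cluster rather than a hard label, which is what makes it attractive for preprocessing MRI intensities. A minimal pure-Python sketch on 1-D intensities conveys the mechanism (the paper's exact preprocessing pipeline is not given here; the toy data and parameters are illustrative):

```python
def fuzzy_c_means(xs, c=3, m=2.0, iters=50, eps=1e-9):
    """Cluster scalar intensities xs into c fuzzy clusters.

    Returns (centers, memberships), where memberships[i][j] is the degree
    to which sample i belongs to cluster j (rows sum to 1).
    """
    lo, hi = min(xs), max(xs)
    # Spread initial centers over the data range.
    centers = [lo + (hi - lo) * (j + 0.5) / c for j in range(c)]
    u = [[0.0] * c for _ in xs]
    for _ in range(iters):
        # Membership update: inverse-distance ratios raised to 2/(m-1).
        for i, x in enumerate(xs):
            d = [abs(x - ck) + eps for ck in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        # Center update: membership-weighted mean of the samples.
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(xs))]
            centers[j] = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return centers, u

# Toy "MRI slice" intensities with three tissue-like intensity bands.
pixels = [0.05, 0.1, 0.12, 0.5, 0.55, 0.52, 0.9, 0.95, 0.93]
centers, memberships = fuzzy_c_means(pixels, c=3)
# Hard labels (for feeding a CNN a cluster map) via maximum membership.
labels = [max(range(3), key=lambda j: row[j]) for row in memberships]
```

In a pipeline like the one described, the resulting cluster map (or the membership maps themselves) would be supplied to the CNN alongside or instead of the raw intensities.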
3D Convolutional Neural Networks for Tumor Segmentation using Long-range 2D Context
We present an efficient deep learning approach for the challenging task of
tumor segmentation in multisequence MR images. In recent years, Convolutional
Neural Networks (CNN) have achieved state-of-the-art performances in a large
variety of recognition tasks in medical imaging. Because of the considerable
computational cost of CNNs, large volumes such as MRI are typically processed
by subvolumes, for instance slices (axial, coronal, sagittal) or small 3D
patches. In this paper we introduce a CNN-based model which efficiently
combines the advantages of the short-range 3D context and the long-range 2D
context. To overcome the limitations of specific choices of neural network
architectures, we also propose to merge outputs of several cascaded 2D-3D
models by a voxelwise voting strategy. Furthermore, we propose a network
architecture in which the different MR sequences are processed by separate
subnetworks in order to be more robust to the problem of missing MR sequences.
Finally, a simple and efficient algorithm for training large CNN models is
introduced. We evaluate our method on the public benchmark of the BRATS 2017
challenge on the task of multiclass segmentation of malignant brain tumors. Our
method achieves good performances and produces accurate segmentations with
median Dice scores of 0.918 (whole tumor), 0.883 (tumor core) and 0.854
(enhancing core). Our approach can be naturally applied to various tasks
involving segmentation of lesions or organs.
Comment: Submitted to the journal Computerized Medical Imaging and Graphics
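The voxelwise voting strategy described above amounts to a per-voxel majority vote over the label maps produced by the cascaded 2D-3D models. A simplified illustration (the model outputs and label codes below are hypothetical, not the paper's):

```python
from collections import Counter

def voxelwise_vote(label_maps):
    """Fuse several segmentation label maps (flat lists of equal length)
    by taking the most frequent label at each voxel."""
    fused = []
    for votes in zip(*label_maps):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three hypothetical model outputs over 6 voxels
# (0 = background, 1 = tumor core, 2 = edema).
m1 = [0, 1, 1, 2, 0, 2]
m2 = [0, 1, 2, 2, 0, 0]
m3 = [0, 1, 1, 2, 1, 2]
fused = voxelwise_vote([m1, m2, m3])  # [0, 1, 1, 2, 0, 2]
```

In practice the vote runs over full 3-D volumes; the flat-list form here keeps the sketch self-contained.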
Multiscale, multimodal analysis of tumor heterogeneity in IDH1 mutant vs wild-type diffuse gliomas.
Glioma is recognized to be a highly heterogeneous CNS malignancy whose diverse cellular composition and cellular interactions have not been well characterized. To gain new clinical and biological insights into the genetically bifurcated IDH1 mutant (mt) vs wildtype (wt) forms of glioma, we integrated protein, genomic, and MR imaging data from 20 treatment-naïve glioma cases and 16 recurrent GBM cases. Multiplexed immunofluorescence (MxIF) was used to generate single-cell data for 43 protein markers representing all cancer hallmarks. Genomic sequencing (exome and RNA, normal and tumor) and quantitative magnetic resonance imaging (MRI) features (T1 post-contrast, FLAIR, and ADC protocols) from whole tumor, peritumoral edema, and enhancing core vs equivalent normal regions were also collected from patients. Based on the MxIF analysis, 85,767 cells (glioma cases) and 56,304 cells (GBM cases) were used to generate cell-level data for 24 biomarkers. K-means clustering was used to generate 7 distinct groups of cells with divergent biomarker profiles, and deconvolution was used to assign RNA data into three classes. Spatial and molecular heterogeneity metrics were generated for the cell data. All features were compared between IDH1mt and IDH1wt patients and were finally combined to provide a holistic, integrated comparison. Protein expression by hallmark was generally lower in IDH1mt vs wt patients. Molecular and spatial heterogeneity scores for angiogenesis and cell invasion also differed between IDH1mt and wt gliomas irrespective of prior treatment and tumor grade; these differences also persisted in the MR imaging features of peritumoral edema and contrast-enhancement volumes. A coherent picture of enhanced angiogenesis in IDH1wt tumors was derived from multiple platforms (genomic, proteomic, and imaging) and scales, from individual proteins to cell clusters and heterogeneity metrics, as well as bulk tumor RNA and imaging features.
Longer overall survival for IDH1mt glioma patients may reflect mutation-driven alterations in cellular, molecular, and spatial heterogeneity that manifest in discernible radiological features.
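The K-means step above partitions per-cell biomarker vectors into groups with similar profiles. A toy pure-Python version with two clusters over two biomarker dimensions conveys the idea (the study used 7 clusters over 24 biomarkers; the data and deterministic initialization below are illustrative):

```python
import math

def kmeans(points, k, iters=100):
    """Plain k-means on biomarker vectors (tuples); returns (centroids, labels).
    Deterministic initialization from the first k points, for illustration."""
    centroids = list(points[:k])
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each cell's biomarker vector to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda j: math.dist(p, centroids[j]))
        # Recompute each centroid as the mean of its assigned cells.
        for j in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids, labels

# Six hypothetical cells, two biomarker intensities each, forming two groups.
cells = [(0.1, 0.2), (0.15, 0.1), (0.9, 0.8), (0.95, 0.85), (0.05, 0.15), (0.85, 0.9)]
centroids, labels = kmeans(cells, k=2)
```

Heterogeneity metrics of the kind described can then be computed over the resulting cluster labels, e.g. the mix of cluster memberships within a spatial neighborhood.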
Longitudinal Brain Tumor Tracking, Tumor Grading, and Patient Survival Prediction Using MRI
This work aims to develop novel methods for brain tumor classification, longitudinal brain tumor tracking, and patient survival prediction. Consequently, this dissertation proposes three tasks. First, we develop a framework for brain tumor segmentation prediction in longitudinal multimodal magnetic resonance imaging (mMRI) scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with tumor cell density features, in order to obtain tumor segmentation predictions in follow-up scans from a baseline pre-operative timepoint. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). With the advantages of feature fusion and label fusion, we achieve state-of-the-art brain tumor segmentation prediction.
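Joint label fusion weights each candidate segmentation by local intensity similarity and inter-method error correlation; a simplified weighted per-voxel vote conveys the basic mechanism (the two label maps and the fixed weights below are hypothetical stand-ins, not the paper's learned JLF weights):

```python
def weighted_label_fusion(label_maps, weights):
    """Fuse candidate segmentations by a weighted per-voxel vote.

    A simplification of joint label fusion (JLF), which instead estimates
    spatially varying weights from intensity similarity and the correlation
    of errors between the candidate methods."""
    fused = []
    for votes in zip(*label_maps):
        score = {}
        for lab, w in zip(votes, weights):
            score[lab] = score.get(lab, 0.0) + w
        fused.append(max(score, key=score.get))
    return fused

# Hypothetical outputs of two methods over 4 voxels (0 = background, 1 = tumor),
# standing in for the texture/RF-based method and GLISTRboost.
rf_labels = [0, 1, 1, 0]
gb_labels = [0, 1, 0, 1]
fused = weighted_label_fusion([rf_labels, gb_labels], weights=[0.6, 0.4])
# The higher-weighted method wins where the two disagree: [0, 1, 1, 0]
```

With equal weights this reduces to the plain majority vote; the value of JLF comes from letting the weights vary per voxel.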
Second, we propose a deep neural network (DNN) learning-based method for brain tumor type and subtype grading using phenotypic and genotypic data, following the World Health Organization (WHO) criteria. In addition, the classification method integrates a cellularity feature which is derived from the morphology of a pathology image to improve classification performance. The proposed method achieves state-of-the-art performance for tumor grading following the new CNS tumor grading criteria.
Finally, we investigate brain tumor volume segmentation, tumor subtype classification, and overall patient survival prediction, and then we propose a new context-aware deep learning method, known as the Context Aware Convolutional Neural Network (CANet). Using the proposed method, we participated in the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) for the brain tumor volume segmentation and overall survival prediction tasks. In addition, we participated in the Radiology-Pathology Challenge 2019 (CPM-RadPath 2019) for brain tumor subtype classification, organized by the Medical Image Computing & Computer Assisted Intervention (MICCAI) Society. The online evaluation results show that the proposed methods offer competitive performance relative to state-of-the-art methods in tumor volume segmentation, promising performance on overall survival prediction, and state-of-the-art performance on tumor subtype classification. Moreover, our result was ranked second place in the testing phase of CPM-RadPath 2019.
Dynamic low-level context for the detection of mild traumatic brain injury.
Mild traumatic brain injury (mTBI) appears as low-contrast lesions in magnetic resonance (MR) imaging. Standard automated detection approaches cannot detect the subtle changes caused by the lesions. The use of context has become integral to the detection of low-contrast objects in images. Context is any information that can be used for object detection but is not directly due to the physical appearance of an object in an image. In this paper, new low-level static and dynamic context features are proposed and integrated into a discriminative voxel-level classifier to improve the detection of mTBI lesions. Visual features, including multiple texture measures, are used to give an initial estimate of a lesion. From this initial estimate, novel proximity and directional-distance contextual features are calculated and used as input to a second classifier. These features take advantage of the spatial information given by the initial lesion estimate computed from the visual features alone. Dynamic context is captured by the proposed posterior marginal edge distance context feature, which measures the distance from a hard estimate of the lesion at a previous time point. The approach is validated on a temporal mTBI rat model dataset and shown to have improved Dice score and convergence compared to other state-of-the-art approaches. An analysis of feature importance and the versatility of the approach on other datasets are also provided.
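The dynamic context feature above measures, for each voxel, the distance to the lesion estimate from the previous time point. A 1-D two-pass distance transform conveys the idea (the paper works on 3-D volumes and uses the posterior marginal edge; this pure-Python sketch with a hypothetical mask is a simplification):

```python
def distance_to_mask(mask):
    """Per-voxel distance (in voxels) to the nearest foreground voxel of a
    binary mask, via a forward/backward two-pass scan (1-D chamfer transform)."""
    n = len(mask)
    INF = float("inf")
    d = [0.0 if m else INF for m in mask]
    for i in range(1, n):              # forward pass: carry distance rightward
        d[i] = min(d[i], d[i - 1] + 1)
    for i in range(n - 2, -1, -1):     # backward pass: carry distance leftward
        d[i] = min(d[i], d[i + 1] + 1)
    return d

# Hard lesion estimate from the previous time point (1 = lesion voxel).
prev_lesion = [0, 0, 1, 1, 0, 0, 0]
context = distance_to_mask(prev_lesion)  # [2.0, 1.0, 0.0, 0.0, 1.0, 2.0, 3.0]
```

Each voxel's distance value would then be appended to its feature vector before the second-stage classification, so voxels near the earlier lesion are treated differently from distant ones.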