3D Convolutional Neural Networks for Tumor Segmentation using Long-range 2D Context
We present an efficient deep learning approach for the challenging task of
tumor segmentation in multisequence MR images. In recent years, Convolutional
Neural Networks (CNN) have achieved state-of-the-art performances in a large
variety of recognition tasks in medical imaging. Because of the considerable
computational cost of CNNs, large volumes such as MRI are typically processed
by subvolumes, for instance slices (axial, coronal, sagittal) or small 3D
patches. In this paper we introduce a CNN-based model which efficiently
combines the advantages of the short-range 3D context and the long-range 2D
context. To overcome the limitations of specific choices of neural network
architectures, we also propose to merge outputs of several cascaded 2D-3D
models by a voxelwise voting strategy. Furthermore, we propose a network
architecture in which the different MR sequences are processed by separate
subnetworks in order to be more robust to the problem of missing MR sequences.
Finally, a simple and efficient algorithm for training large CNN models is
introduced. We evaluate our method on the public benchmark of the BRATS 2017
challenge on the task of multiclass segmentation of malignant brain tumors. Our
method achieves good performances and produces accurate segmentations with
median Dice scores of 0.918 (whole tumor), 0.883 (tumor core) and 0.854
(enhancing core). Our approach can be naturally applied to various tasks
involving segmentation of lesions or organs.
Comment: Submitted to the journal Computerized Medical Imaging and Graphics
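The voxelwise voting strategy for merging the cascaded 2D-3D models can be sketched as a per-voxel majority vote over the models' label maps. This is a minimal NumPy sketch under our own naming, not the authors' code; ties are broken toward the lower label by `argmax`:

```python
import numpy as np

def voxelwise_vote(segmentations):
    """Merge label maps from several models by per-voxel majority vote."""
    stacked = np.stack(segmentations)            # (n_models, D, H, W)
    n_labels = int(stacked.max()) + 1
    # Count votes for each label at every voxel, then pick the winner.
    votes = np.stack([(stacked == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)                  # ties go to the lower label

# Three toy 2x2x2 label maps with labels 0..2.
a = np.array([[[0, 1], [2, 2]], [[1, 1], [0, 2]]])
b = np.array([[[0, 1], [2, 0]], [[1, 2], [0, 2]]])
c = np.array([[[1, 1], [2, 2]], [[1, 1], [2, 2]]])
merged = voxelwise_vote([a, b, c])
```

The same vote applies unchanged however many cascaded models contribute, since only the stacked label maps matter.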
Glioma Diagnosis Aid through CNNs and Fuzzy-C Means for MRI
Glioma is a type of brain tumor that is fatal in many cases, so early diagnosis is an important factor.
Typically, it is detected through MRI, and then either a treatment is applied or the tumor is removed through surgery.
Deep-learning techniques are becoming popular in medical applications and image-based diagnosis.
Convolutional Neural Networks are the preferred architecture for object detection and classification in images.
In this paper, we present a study evaluating the efficiency of CNNs as a diagnostic aid in glioma
detection and the improvement obtained when a clustering method (fuzzy C-means) is used to preprocess
the input MRI dataset. Results showed an accuracy improvement from 0.77 to 0.81 when using
fuzzy C-means.
Ministerio de Economía y Competitividad TEC2016-77785-
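Fuzzy C-means assigns each voxel a soft membership in every cluster, alternating between membership and centroid updates. The following is an illustrative NumPy sketch on 1-D intensities (function name, parameters, and toy data are ours, not the paper's implementation):

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy C-means on a 1-D array of voxel intensities."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)      # membership-weighted centroids
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))       # closer centroid -> higher membership
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Intensities drawn from two well-separated tissue classes.
x = np.concatenate([np.full(50, 10.0), np.full(50, 100.0)])
centers, u = fuzzy_c_means(x)
labels = u.argmax(axis=1)   # hard labels, usable as a preprocessing mask
```

In a preprocessing role, the hard labels (or the soft memberships themselves) can be fed alongside the raw MRI intensities to the CNN.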
Prospects for Theranostics in Neurosurgical Imaging: Empowering Confocal Laser Endomicroscopy Diagnostics via Deep Learning
Confocal laser endomicroscopy (CLE) is an advanced optical fluorescence
imaging technology that has the potential to increase intraoperative precision,
extend resection, and tailor surgery for malignant invasive brain tumors
because of its subcellular dimension resolution. Despite its promising
diagnostic potential, interpreting the gray tone fluorescence images can be
difficult for untrained users. In this review, we provide a detailed
description of the bioinformatical analysis methodology for CLE images that
begins to help the neurosurgeon and pathologist rapidly connect on-the-fly
intraoperative imaging, pathology, and surgical observation into a conclusive
system within the concept of theranostics. We present an overview of deep
learning models for the automatic detection of diagnostic CLE images and
discuss various training regimes and the effect of ensemble modeling on the
power of deep learning predictive models. Two major approaches reviewed in this
paper include the models that can automatically classify CLE images into
diagnostic/nondiagnostic, glioma/nonglioma, tumor/injury/normal categories and
models that can localize histological features on the CLE images using weakly
supervised methods. We also briefly review advances in the deep learning
approaches used for CLE image analysis in other organs. Significant advances in
the speed and precision of automated diagnostic frame selection would augment the
diagnostic potential of CLE and improve the operative workflow and its integration into
brain tumor surgery. Such technology and bioinformatics analytics lend
themselves to improved precision, personalization, and theranostics in brain
tumor treatment.
Comment: See the final version published in Frontiers in Oncology here:
https://www.frontiersin.org/articles/10.3389/fonc.2018.00240/ful
Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation
Automatic brain tumor segmentation plays an important role for diagnosis,
surgical planning and treatment assessment of brain tumors. Deep convolutional
neural networks (CNNs) have been widely used for this task. Due to the
relatively small data set for training, data augmentation at training time has
been commonly used for better performance of CNNs. Recent works also
demonstrated the usefulness of using augmentation at test time, in addition to
training time, for achieving more robust predictions. We investigate how
test-time augmentation can improve CNNs' performance for brain tumor
segmentation. We used different underpinning network structures and augmented
the image by 3D rotation, flipping, scaling and adding random noise at both
training and test time. Experiments with BraTS 2018 training and validation set
show that test-time augmentation helps to improve the brain tumor segmentation
accuracy and obtain uncertainty estimation of the segmentation results.
Comment: 12 pages, 3 figures, MICCAI BrainLes 201
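The test-time augmentation scheme can be sketched as: transform the input, predict, invert the transform on the prediction, and average; the variance across the augmented predictions gives a simple per-voxel uncertainty estimate. A minimal sketch using flips only (the paper also uses 3D rotation, scaling, and random noise; `predict` stands in for a trained CNN that returns class probabilities):

```python
import numpy as np

def tta_predict(predict, volume):
    """Average soft predictions over all axis-flip test-time augmentations."""
    probs = []
    for axes in [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]:
        aug = np.flip(volume, axis=axes) if axes else volume
        p = predict(aug)                          # (n_classes, D, H, W)
        # Undo the flip on the prediction (channel axis is offset by 1).
        p = np.flip(p, axis=tuple(a + 1 for a in axes)) if axes else p
        probs.append(p)
    mean = np.mean(probs, axis=0)                 # averaged soft prediction
    var = np.var(probs, axis=0)                   # per-voxel uncertainty
    return mean.argmax(axis=0), var

# Toy "model": class-1 probability is the (flip-equivariant) intensity itself.
def toy_predict(vol):
    return np.stack([1.0 - vol, vol])

vol = np.zeros((2, 2, 2))
vol[0, 0, 0] = 1.0
seg, var = tta_predict(toy_predict, vol)
```

Because the toy model is exactly flip-equivariant, every augmented prediction agrees and the variance is zero; a real CNN is not, which is precisely what makes the averaged prediction more robust and the variance informative.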
Improving Patch-Based Convolutional Neural Networks for MRI Brain Tumor Segmentation by Leveraging Location Information.
The manual brain tumor annotation process is time-consuming and resource-intensive; therefore, an automated and accurate brain tumor segmentation tool is in great demand. In this paper, we introduce a novel method to integrate location information with state-of-the-art patch-based neural networks for brain tumor segmentation. This is motivated by the observation that lesions are not uniformly distributed across different brain parcellation regions and that a locality-sensitive segmentation is likely to obtain better accuracy. Toward this end, we use an existing brain parcellation atlas in the Montreal Neurological Institute (MNI) space and map this atlas to the individual subject data. This mapped atlas in the subject data space is integrated with structural Magnetic Resonance (MR) imaging data, and patch-based neural networks, including 3D U-Net and DeepMedic, are trained to classify the different brain lesions. Multiple state-of-the-art neural networks are trained and integrated with XGBoost fusion in the proposed two-level ensemble method. The first level reduces the uncertainty of the same type of models with different seed initializations, and the second level leverages the advantages of different types of neural network models. The proposed location information fusion method improves the segmentation performance of state-of-the-art networks, including 3D U-Net and DeepMedic. Our proposed ensemble also achieves better segmentation performance than the state-of-the-art networks in BraTS 2017 and rivals state-of-the-art networks in BraTS 2018. Detailed results are provided on the public multimodal brain tumor segmentation (BraTS) benchmarks.
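The two ingredients of this approach, the atlas-derived location channel and the two-level ensemble, can be sketched as follows. This is an illustrative sketch only: all function names are ours, and a plain mean stands in at level 2 for the paper's XGBoost fusion:

```python
import numpy as np

def add_location_channel(mr_channels, atlas_labels, n_regions):
    """Append the subject-space atlas parcellation as an extra input channel."""
    loc = atlas_labels.astype(np.float32) / max(n_regions - 1, 1)
    return np.concatenate([mr_channels, loc[None]], axis=0)

def two_level_ensemble(families):
    """families maps architecture name -> list of softmax maps (one per seed).
    Level 1 averages over seed runs within an architecture; level 2 fuses
    the architectures (a plain mean here, standing in for XGBoost fusion)."""
    level1 = [np.mean(runs, axis=0) for runs in families.values()]
    return np.mean(level1, axis=0).argmax(axis=0)

# Toy example: one MR channel plus a 2-region atlas, and two "architectures".
mr = np.zeros((1, 2, 2, 2), dtype=np.float32)
atlas = np.array([[[0, 1], [1, 0]], [[1, 0], [0, 1]]])
inp = add_location_channel(mr, atlas, n_regions=2)

fg = np.stack([np.zeros((2, 2, 2)), np.ones((2, 2, 2))])  # votes class 1
bg = np.stack([np.ones((2, 2, 2)), np.zeros((2, 2, 2))])  # votes class 0
seg = two_level_ensemble({"unet3d": [fg, fg], "deepmedic": [fg, bg]})
```

Averaging within an architecture first keeps one family's seed variance from swamping the cross-architecture fusion, which is the stated purpose of the two levels.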