3D Convolutional Neural Networks for Tumor Segmentation using Long-range 2D Context
We present an efficient deep learning approach for the challenging task of
tumor segmentation in multisequence MR images. In recent years, Convolutional
Neural Networks (CNN) have achieved state-of-the-art performance in a large
variety of recognition tasks in medical imaging. Because of the considerable
computational cost of CNNs, large volumes such as MRI are typically processed
by subvolumes, for instance slices (axial, coronal, sagittal) or small 3D
patches. In this paper we introduce a CNN-based model which efficiently
combines the advantages of the short-range 3D context and the long-range 2D
context. To overcome the limitations of specific choices of neural network
architectures, we also propose to merge outputs of several cascaded 2D-3D
models by a voxelwise voting strategy. Furthermore, we propose a network
architecture in which the different MR sequences are processed by separate
subnetworks in order to be more robust to the problem of missing MR sequences.
Finally, a simple and efficient algorithm for training large CNN models is
introduced. We evaluate our method on the public benchmark of the BRATS 2017
challenge on the task of multiclass segmentation of malignant brain tumors. Our
method achieves good performance and produces accurate segmentations with
median Dice scores of 0.918 (whole tumor), 0.883 (tumor core) and 0.854
(enhancing core). Our approach can be naturally applied to various tasks
involving segmentation of lesions or organs.
Comment: Submitted to the journal Computerized Medical Imaging and Graphics
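The voxelwise voting strategy used to merge the cascaded 2D-3D models can be sketched as a plain majority vote over per-model label maps. A minimal illustration (the toy arrays stand in for model outputs; function and variable names are ours, not the paper's):

```python
import numpy as np

def voxelwise_majority_vote(label_maps, n_classes):
    """Merge several models' voxelwise label maps by majority vote.

    label_maps: list of integer arrays of identical shape, one per model,
                each voxel holding a class index in [0, n_classes).
    Returns an array of the same shape with the most-voted class per voxel.
    """
    stacked = np.stack(label_maps)  # (n_models, *volume_shape)
    # Count votes for each class at every voxel, then take the argmax.
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0).astype(stacked.dtype)

# Toy example: three 2x2 "volumes" from three models, 3 tumor classes.
m1 = np.array([[0, 1], [2, 2]])
m2 = np.array([[0, 1], [1, 2]])
m3 = np.array([[1, 1], [2, 0]])
fused = voxelwise_majority_vote([m1, m2, m3], n_classes=3)
# fused == [[0, 1], [2, 2]]: each voxel keeps the class with the most votes
```

Ties here fall to the lowest class index via `argmax`; a real ensemble might instead break ties by averaged class probabilities.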
HeMIS: Hetero-Modal Image Segmentation
We introduce a deep learning image segmentation framework that is extremely
robust to missing imaging modalities. Instead of attempting to impute or
synthesize missing data, the proposed approach learns, for each modality, an
embedding of the input image into a single latent vector space for which
arithmetic operations (such as taking the mean) are well defined. Points in
that space, which are averaged over modalities available at inference time, can
then be further processed to yield the desired segmentation. As such, any
combinatorial subset of available modalities can be provided as input, without
having to learn a combinatorial number of imputation models. Evaluated on two
neurological MRI datasets (brain tumors and MS lesions), the approach yields
state-of-the-art segmentation results when provided with all modalities;
moreover, its performance degrades remarkably gracefully when modalities are
removed, significantly more so than alternative mean-filling or other synthesis
approaches.
Comment: Accepted as an oral presentation at MICCAI 2016
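The key idea above is that per-modality embeddings live in one latent space where the mean is well defined, so any subset of modalities fuses to a fixed-size representation. A minimal sketch, with random linear maps standing in for the learned convolutional encoders (all names here are hypothetical, not the HeMIS code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in per-modality encoders: fixed random linear maps into a shared
# 16-dimensional latent space (HeMIS uses learned convolutional back-ends).
encoders = {m: rng.standard_normal((16, 64)) for m in ("T1", "T1c", "T2", "FLAIR")}

def fuse(available_images):
    """Embed each available modality, then fuse by averaging in latent space.

    Because the mean is defined for any non-empty set of vectors, any subset
    of modalities yields a representation of the same size -- no imputation
    model per missing-modality combination is needed.  (HeMIS also feeds the
    variance across modalities to the downstream segmentation layers.)
    """
    latents = [encoders[m] @ img for m, img in available_images.items()]
    return np.mean(latents, axis=0)  # shape (16,), regardless of subset size

full = {m: rng.standard_normal(64) for m in encoders}
subset = {m: full[m] for m in ("T1", "FLAIR")}  # two modalities missing
assert fuse(full).shape == fuse(subset).shape == (16,)
```

The downstream segmentation network only ever sees the fused statistics, which is why inference degrades gracefully rather than failing when a modality is absent.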
Brain Tumor Segmentation with Deep Neural Networks
In this paper, we present a fully automatic brain tumor segmentation method
based on Deep Neural Networks (DNNs). The proposed networks are tailored to
glioblastomas (both low and high grade) pictured in MR images. By their very
nature, these tumors can appear anywhere in the brain and have almost any kind
of shape, size, and contrast. These reasons motivate our exploration of a
machine learning solution that exploits a flexible, high capacity DNN while
being extremely efficient. Here, we give a description of different model
choices that we've found to be necessary for obtaining competitive performance.
We explore in particular different architectures based on Convolutional Neural
Networks (CNN), i.e. DNNs specifically adapted to image data.
We present a novel CNN architecture which differs from those traditionally
used in computer vision. Our CNN exploits both local features as well as more
global contextual features simultaneously. Also, different from most
traditional uses of CNNs, our networks use a final layer that is a
convolutional implementation of a fully connected layer, which allows a 40-fold
speed-up. We also describe a 2-phase training procedure that allows us to
tackle difficulties related to the imbalance of tumor labels. Finally, we
explore a cascade architecture in which the output of a basic CNN is treated as
an additional source of information for a subsequent CNN. Results reported on
the 2013 BRATS test dataset reveal that our architecture improves over the
currently published state-of-the-art while being over 30 times faster.
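The speed-up above rests on a standard equivalence: a fully connected layer applied independently to every k x k patch computes the same values as a convolution of the whole image with the layer's weights reshaped into kernels, so the dense per-patch pass can be replaced by one convolutional pass. A toy numpy sketch of that equivalence (random weights, not the paper's trained network):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

k, n_classes = 5, 2
rng = np.random.default_rng(1)
W = rng.standard_normal((n_classes, k * k))  # "fully connected" weights
img = rng.standard_normal((32, 32))

# Slow path: run the dense layer on every k x k patch separately.
patches = sliding_window_view(img, (k, k)).reshape(-1, k * k)
slow = (patches @ W.T).reshape(32 - k + 1, 32 - k + 1, n_classes)

# Fast path: view the same weights as n_classes convolution kernels and
# compute all patch responses in a single pass over the image.
kernels = W.reshape(n_classes, k, k)
windows = sliding_window_view(img, (k, k))           # (28, 28, k, k)
fast = np.einsum('ijkl,ckl->ijc', windows, kernels)  # (28, 28, n_classes)

assert np.allclose(slow, fast)  # identical outputs, voxel for voxel
```

In a real framework the convolutional form additionally reuses overlapping-patch computation in optimized conv kernels, which is where the large constant-factor gain comes from.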
TBI Contusion Segmentation from MRI using Convolutional Neural Networks
Traumatic brain injury (TBI) is caused by a sudden trauma to the head that
may result in hematomas and contusions and can lead to stroke or chronic
disability. An accurate quantification of the lesion volumes and their
locations is essential to understand the pathophysiology of TBI and its
progression. In this paper, we propose a fully convolutional neural network
(CNN) model to segment contusions and lesions from brain magnetic resonance
(MR) images of patients with TBI. The CNN architecture proposed here was based
on a state-of-the-art CNN architecture from Google called Inception. Using a
3-layer Inception network, lesions are segmented from multi-contrast MR images.
When compared with two recent TBI lesion segmentation methods, one based on CNN
(called DeepMedic) and another based on random forests, the proposed algorithm
showed improved segmentation accuracy on images of 18 patients with mild to
severe TBI. Using leave-one-out cross-validation, the proposed model achieved
a median Dice of 0.75, which was significantly better (p<0.01) than the two
competing methods.
Comment: https://ieeexplore.ieee.org/abstract/document/8363545/, IEEE 15th
International Symposium on Biomedical Imaging (ISBI 2018)
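The median Dice of 0.75 reported above uses the Dice similarity coefficient, 2|A ∩ B| / (|A| + |B|) for predicted and reference masks A and B. A minimal sketch of the metric on binary masks (toy arrays, not the paper's data):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|), in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:          # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
# overlap = 2 voxels, |pred| = 3, |truth| = 3  ->  Dice = 4/6 = 2/3
score = dice_score(pred, truth)
```

For the multiclass setting in these benchmarks, the score is computed per tumor region (e.g. whole tumor, tumor core) by binarizing each region's label set first.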