3D Convolutional Neural Networks for Tumor Segmentation using Long-range 2D Context
We present an efficient deep learning approach for the challenging task of
tumor segmentation in multisequence MR images. In recent years, Convolutional
Neural Networks (CNN) have achieved state-of-the-art performance in a large
variety of recognition tasks in medical imaging. Because of the considerable
computational cost of CNNs, large volumes such as MRI are typically processed
by subvolumes, for instance slices (axial, coronal, sagittal) or small 3D
patches. In this paper we introduce a CNN-based model which efficiently
combines the advantages of the short-range 3D context and the long-range 2D
context. To overcome the limitations of specific choices of neural network
architectures, we also propose to merge outputs of several cascaded 2D-3D
models by a voxelwise voting strategy. Furthermore, we propose a network
architecture in which the different MR sequences are processed by separate
subnetworks in order to be more robust to the problem of missing MR sequences.
Finally, a simple and efficient algorithm for training large CNN models is
introduced. We evaluate our method on the public benchmark of the BRATS 2017
challenge on the task of multiclass segmentation of malignant brain tumors. Our
method achieves good performance and produces accurate segmentations with
median Dice scores of 0.918 (whole tumor), 0.883 (tumor core) and 0.854
(enhancing core). Our approach can be naturally applied to various tasks
involving segmentation of lesions or organs.
Comment: Submitted to the journal Computerized Medical Imaging and Graphics
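The voxelwise voting strategy described above can be sketched as a per-voxel majority vote over the label maps produced by the cascaded 2D-3D models. This is a minimal illustration; the tie-breaking rule (lowest label index wins, via `argmax`) is an assumption, not necessarily the paper's exact choice:

```python
import numpy as np

def voxelwise_vote(label_maps):
    """Merge segmentations from several models by per-voxel majority vote.

    label_maps: list of integer label volumes, all with identical shape.
    Ties are broken toward the smallest label index (np.argmax behavior) --
    an illustrative assumption, not necessarily the authors' rule.
    """
    stacked = np.stack(label_maps)                 # (n_models, *volume_shape)
    n_classes = int(stacked.max()) + 1
    # Count votes per class at each voxel, then keep the most-voted class.
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return np.argmax(votes, axis=0)

# Toy 2x2 "volumes" from three hypothetical models:
a = np.array([[0, 1], [2, 2]])
b = np.array([[0, 1], [1, 2]])
c = np.array([[1, 1], [1, 2]])
print(voxelwise_vote([a, b, c]))  # [[0 1] [1 2]]
```

Because each model sees a different mix of 2D and 3D context, the vote tends to suppress errors that only one architecture makes.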
3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes
While deep convolutional neural networks (CNN) have been successfully applied
for 2D image analysis, it is still challenging to apply them to 3D anisotropic
volumes, especially when the within-slice resolution is much higher than the
between-slice resolution and when the amount of 3D volumes is relatively small.
On one hand, direct learning of CNN with 3D convolution kernels suffers from
the lack of data and likely ends up with poor generalization; insufficient GPU
memory limits the model size or representational power. On the other hand,
applying 2D CNN with generalizable features to 2D slices ignores between-slice
information. Coupling 2D network with LSTM to further handle the between-slice
information is not optimal due to the difficulty in LSTM learning. To overcome
the above challenges, we propose a 3D Anisotropic Hybrid Network (AH-Net) that
transfers convolutional features learned from 2D images to 3D anisotropic
volumes. Such a transfer inherits the desired strong generalization capability
for within-slice information while naturally exploiting between-slice
information for more effective modelling. The focal loss is further utilized
for more effective end-to-end learning. We experiment with the proposed 3D
AH-Net on two different medical image analysis tasks, namely lesion detection
from a Digital Breast Tomosynthesis volume, and liver and liver tumor
segmentation from a Computed Tomography volume, and obtain state-of-the-art
results.
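The focal loss mentioned in the abstract (Lin et al.) down-weights well-classified examples so training focuses on hard voxels. A minimal binary-case sketch follows; the `gamma=2.0, alpha=0.25` defaults are the commonly cited values, not necessarily the settings used in AH-Net:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted foreground probabilities; y: binary labels (0/1).
    The modulating factor (1 - p_t)^gamma shrinks the loss of easy,
    confidently classified examples.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y == 1, p, 1 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)  # class-balance weight
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```

With `gamma=0` and `alpha=0.5` this reduces to half the ordinary cross-entropy; increasing `gamma` progressively mutes easy examples, which is useful when lesion voxels are vastly outnumbered by background.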
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 2017.
Interactive Medical Image Segmentation using Deep Learning with Image-specific Fine-tuning
Convolutional neural networks (CNNs) have achieved state-of-the-art
performance for automatic medical image segmentation. However, they have not
demonstrated sufficiently accurate and robust results for clinical use. In
addition, they are limited by the lack of image-specific adaptation and the
lack of generalizability to previously unseen object classes. To address these
problems, we propose a novel deep learning-based framework for interactive
segmentation by incorporating CNNs into a bounding box and scribble-based
segmentation pipeline. We propose image-specific fine-tuning to make a CNN
model adaptive to a specific test image, which can be either unsupervised
(without additional user interactions) or supervised (with additional
scribbles). We also propose a weighted loss function considering network and
interaction-based uncertainty for the fine-tuning. We applied this framework to
two applications: 2D segmentation of multiple organs from fetal MR slices,
where only two types of these organs were annotated for training; and 3D
segmentation of brain tumor core (excluding edema) and whole brain tumor
(including edema) from different MR sequences, where only tumor cores in one MR
sequence were annotated for training. Experimental results show that 1) our
model is more robust to segment previously unseen objects than state-of-the-art
CNNs; 2) image-specific fine-tuning with the proposed weighted loss function
significantly improves segmentation accuracy; and 3) our method leads to
accurate results with fewer user interactions and less user time than
traditional interactive segmentation methods.
Comment: 11 pages, 11 figures
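The weighted loss used during image-specific fine-tuning combines per-pixel weights derived from network uncertainty and user interactions. A minimal sketch of pixelwise weighted cross-entropy is shown below; the weighting scheme in the comments is illustrative, not the paper's exact uncertainty formulation:

```python
import numpy as np

def weighted_cross_entropy(p, y, w):
    """Pixelwise weighted binary cross-entropy.

    p: predicted foreground probabilities; y: binary labels;
    w: per-pixel weights (e.g. high on user-scribbled pixels, low where the
    network is uncertain) -- an illustrative scheme, not the authors' exact one.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)
    ce = -(y * np.log(p) + (1 - y) * np.log(1 - p))   # per-pixel cross-entropy
    return float(np.sum(w * ce) / np.sum(w))          # weighted average
```

With uniform weights this reduces to the ordinary mean cross-entropy; raising the weight on a scribbled pixel pulls the fine-tuning gradient toward agreeing with the user at that location.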