A Fixed-Point Model for Pancreas Segmentation in Abdominal CT Scans
Deep neural networks have been widely adopted for automatic organ
segmentation from abdominal CT scans. However, the segmentation accuracy of
some small organs (e.g., the pancreas) is sometimes unsatisfactory,
arguably because deep networks are easily disrupted by the complex and variable
background regions that occupy a large fraction of the input volume. In this
paper, we formulate this problem as a fixed-point model that uses a
predicted segmentation mask to shrink the input region. This is motivated by
the fact that a smaller input region often leads to more accurate segmentation.
In the training process, we use the ground-truth annotation to generate
accurate input regions and optimize network weights. At the testing stage, we
fix the network parameters and update the segmentation results in an iterative
manner. We evaluate our approach on the NIH pancreas segmentation dataset, and
outperform the state-of-the-art by more than 4%, measured by the average
Dice-Sørensen Coefficient (DSC). In addition, we report 62.43% DSC in the
worst case, which guarantees the reliability of our approach in clinical
applications. Comment: Accepted to MICCAI 2017 (8 pages, 3 figures).
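As an illustration of the fixed-point test procedure described above, here is a minimal NumPy sketch; segment() is a hypothetical stand-in for the trained segmentation network, and the margin, iteration cap, and stopping threshold are illustrative choices rather than the paper's settings.

    import numpy as np

    def dice(a, b, eps=1e-8):
        # Dice-Sorensen coefficient between two binary masks.
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

    def crop_slices(mask, margin=20):
        # Bounding box of the current prediction, padded by a margin.
        idx = np.argwhere(mask)
        lo = np.maximum(idx.min(axis=0) - margin, 0)
        hi = np.minimum(idx.max(axis=0) + margin + 1, mask.shape)
        return tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))

    def fixed_point_segment(volume, segment, max_iter=10, tol=0.95):
        mask = segment(volume) > 0.5                  # coarse stage on the whole volume
        for _ in range(max_iter):
            if not mask.any():
                break                                 # nothing predicted; stop iterating
            sl = crop_slices(mask)                    # shrink the input to the predicted region
            fine = np.zeros_like(mask)
            fine[sl] = segment(volume[sl]) > 0.5      # fine stage on the cropped region
            if dice(mask, fine) >= tol:               # successive predictions have stabilised
                return fine
            mask = fine
        return mask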
DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation
Automatic organ segmentation is an important yet challenging problem for
medical image analysis. The pancreas is an abdominal organ with very high
anatomical variability. This prevents previous segmentation methods from
achieving high accuracies, especially compared to other organs such as the
liver, heart or kidneys. In this paper, we present a probabilistic bottom-up
approach for pancreas segmentation in abdominal computed tomography (CT) scans,
using multi-level deep convolutional networks (ConvNets). We propose and
evaluate several variations of deep ConvNets in the context of hierarchical,
coarse-to-fine classification on image patches and regions, i.e. superpixels.
We first present a dense labeling of local image patches via P-ConvNet
and nearest neighbor fusion. Then we describe a regional
ConvNet (R1-ConvNet) that samples a set of bounding boxes around
each image superpixel at different scales of contexts in a "zoom-out" fashion.
Our ConvNets learn to assign class probabilities for each superpixel region of
being pancreas. Last, we study a stacked R2-ConvNet leveraging
the joint space of CT intensities and the P-ConvNet dense
probability maps. Both 3D Gaussian smoothing and 2D conditional random fields
are exploited as structured predictions for post-processing. We evaluate on CT
images of 82 patients in 4-fold cross-validation. We achieve a Dice Similarity
Coefficient of 83.6±6.3% in training and 71.8±10.7% in testing. Comment: To be presented at MICCAI 2015 - 18th International Conference on
Medical Image Computing and Computer-Assisted Intervention, Munich, Germany.
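To illustrate the "zoom-out" sampling idea described above, here is a minimal sketch that, for a single superpixel in a 2D slice, takes crops centred on the superpixel at progressively larger scales; the function name and scale factors are illustrative assumptions, not the authors' values.

    import numpy as np

    def zoom_out_crops(image, superpixel_mask, scales=(1.0, 1.5, 2.0, 3.0)):
        # Centre and tight bounding-box size of the superpixel.
        idx = np.argwhere(superpixel_mask)
        cy, cx = idx.mean(axis=0)
        h = idx[:, 0].max() - idx[:, 0].min() + 1
        w = idx[:, 1].max() - idx[:, 1].min() + 1
        crops = []
        for s in scales:
            half_h, half_w = s * h / 2.0, s * w / 2.0
            y0, y1 = int(max(cy - half_h, 0)), int(min(cy + half_h, image.shape[0]))
            x0, x1 = int(max(cx - half_w, 0)), int(min(cx + half_w, image.shape[1]))
            crops.append(image[y0:y1, x0:x1])  # larger scale -> more surrounding context
        return crops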
Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation
We aim at segmenting small organs (e.g., the pancreas) from abdominal CT
scans. As the target often occupies a relatively small region in the input
image, deep neural networks can be easily confused by the complex and variable
background. To alleviate this, researchers proposed a coarse-to-fine approach,
which used the prediction from the first (coarse) stage to indicate a smaller input
region for the second (fine) stage. Despite its effectiveness, this algorithm
dealt with the two stages individually, which prevented it from optimizing a
global energy function and limited its ability to incorporate multi-stage
visual cues. The missing contextual information led to unsatisfactory
convergence across iterations, so that the fine stage sometimes produced even
lower segmentation accuracy than the coarse stage.
This paper presents a Recurrent Saliency Transformation Network. The key
innovation is a saliency transformation module, which repeatedly converts the
segmentation probability map from the previous iteration into spatial weights and
applies these weights to the current iteration. This brings us two-fold
benefits. In training, it allows joint optimization over the deep networks
dealing with different input scales. In testing, it propagates multi-stage
visual information throughout iterations to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate
state-of-the-art accuracy, outperforming the previous best by an average of
over 2%. Much higher accuracies are also reported on several small organs in a
larger dataset that we collected. In addition, our approach enjoys better
convergence properties, making it more efficient and reliable in practice. Comment: Accepted to CVPR 2018 (10 pages, 6 figures).
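To make the saliency transformation idea concrete, here is a minimal PyTorch sketch under the assumption that the transformation is a small learnable convolution producing spatial weights in (0, 1); it is not the authors' implementation, and segment_net below stands in for any 2D segmentation network.

    import torch
    import torch.nn as nn

    class SaliencyTransform(nn.Module):
        # Learnable mapping from the previous probability map to spatial weights.
        def __init__(self):
            super().__init__()
            self.transform = nn.Conv2d(1, 1, kernel_size=3, padding=1)

        def forward(self, image, prev_prob):
            weights = torch.sigmoid(self.transform(prev_prob))  # spatial weights in (0, 1)
            return image * weights                               # re-weighted input for the next pass

    # Iterative use at test time, with segment_net mapping an image to a probability map:
    #   prob = segment_net(image)
    #   for _ in range(num_iters):
    #       prob = segment_net(module(image, prob))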
Hierarchical Framework for Automatic Pancreas Segmentation in MRI Using Continuous Max-flow and Min-Cuts Approach
Accurate, automatic, and robust segmentation of the pancreas in medical image scans remains a challenging but important prerequisite for computer-aided diagnosis (CADx). This paper presents a tool for automatic pancreas segmentation in magnetic resonance imaging (MRI) scans. We propose a framework that pools information hierarchically as follows: (1) identify the major pancreas region and apply contrast enhancement to differentiate between pancreatic and surrounding tissue; (2) perform 3D segmentation by employing a continuous max-flow and min-cuts approach, structured forest edge detection, and a training dataset of annotated pancreata; (3) eliminate non-pancreatic contours from the resulting segmentation via morphological operations on the area, curvature, and position of distinct contours. The proposed method is evaluated on a dataset of 20 MRI volumes, achieving a mean Dice Similarity Coefficient of 75.5 ± 7.0% and a mean Jaccard Index of 61.2 ± 9.2%.
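As a rough illustration of the pruning step (step 3 above), the following sketch keeps only connected components above an area threshold; the actual method also uses curvature and relative position, and the threshold here is an arbitrary illustrative value.

    import numpy as np
    from scipy import ndimage

    def prune_small_components(mask, min_area=500):
        # Label connected components of the binary mask and keep only those
        # whose area (voxel count) reaches the threshold.
        labels, n = ndimage.label(mask)
        if n == 0:
            return mask
        areas = ndimage.sum(mask, labels, index=range(1, n + 1))
        keep = [i + 1 for i, a in enumerate(areas) if a >= min_area]
        return np.isin(labels, keep)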
DRINet for medical image segmentation
Convolutional neural networks (CNNs) have revolutionized medical image analysis over the past few years. The U-Net architecture is one of the most well-known CNN architectures for semantic segmentation and has achieved remarkable successes in many different medical image segmentation applications. The U-Net architecture consists of standard convolution layers, pooling layers, and upsampling layers. These convolution layers learn representative features of input images and construct segmentations based on the features. However, the features learned by standard convolution layers are not distinctive when the differences among categories are subtle in terms of intensity, location, shape, and size. In this paper, we propose a novel CNN architecture, called Dense-Res-Inception Net (DRINet), which addresses this challenging problem. The proposed DRINet consists of three blocks, namely a convolutional block with dense connections, a deconvolutional block with residual Inception modules, and an unpooling block. Our proposed architecture outperforms the U-Net in three different challenging applications, namely multi-class segmentation of cerebrospinal fluid (CSF) on brain CT images, multi-organ segmentation on abdominal CT images, and multi-class brain tumour segmentation on MR images.
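For readers unfamiliar with dense connections, here is a minimal PyTorch sketch of the dense-connectivity pattern used in a convolutional block of this kind, in which every layer receives the concatenation of all earlier feature maps; the depth and channel widths are illustrative and do not reproduce the paper's block.

    import torch
    import torch.nn as nn

    class DenseBlock(nn.Module):
        def __init__(self, in_channels, growth_rate=16, num_layers=4):
            super().__init__()
            self.layers = nn.ModuleList()
            channels = in_channels
            for _ in range(num_layers):
                self.layers.append(nn.Sequential(
                    nn.BatchNorm2d(channels),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                ))
                channels += growth_rate                  # input width grows by concatenation

        def forward(self, x):
            features = [x]
            for layer in self.layers:
                out = layer(torch.cat(features, dim=1))  # dense connectivity
                features.append(out)
            return torch.cat(features, dim=1)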
Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies
Multi-atlas segmentation is a widely used tool in medical image analysis,
providing robust and accurate results by learning from annotated atlas
datasets. However, the availability of fully annotated atlas images for
training is limited due to the time required for the labelling task.
Segmentation methods requiring only a proportion of each atlas image to be
labelled could therefore reduce the workload on expert raters tasked with
annotating atlas images. To address this issue, we first re-examine the
labelling problem common in many existing approaches and formulate its solution
in terms of a Markov Random Field energy minimisation problem on a graph
connecting atlases and the target image. This provides a unifying framework for
multi-atlas segmentation. We then show how modifications in the graph
configuration of the proposed framework enable the use of partially annotated
atlas images and investigate different partial annotation strategies. The
proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets
for hippocampal and cardiac segmentation. Experiments were performed with two
aims: (1) recreating existing segmentation techniques within the proposed
framework and (2) demonstrating the potential of employing sparsely annotated
atlas data for multi-atlas segmentation.
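For context, such labelling problems are commonly written as a pairwise MRF energy of the form

    E(l) = \sum_{i \in V} \psi_i(l_i) + \lambda \sum_{(i,j) \in E} \psi_{ij}(l_i, l_j),

where l assigns a label to each node of the graph, the unary potentials \psi_i score label agreement at each node, the pairwise potentials \psi_{ij} encourage consistent labels on connected nodes, and \lambda balances the two terms; the paper's contribution lies in how the graph connects atlases to the target image and how these potentials are defined when only part of each atlas is annotated.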