A Fixed-Point Model for Pancreas Segmentation in Abdominal CT Scans
Deep neural networks have been widely adopted for automatic organ
segmentation from abdominal CT scans. However, the segmentation accuracy of
some small organs (e.g., the pancreas) is sometimes unsatisfactory,
arguably because deep networks are easily disrupted by the complex and variable
background regions, which occupy a large fraction of the input volume. In this
paper, we formulate this problem into a fixed-point model which uses a
predicted segmentation mask to shrink the input region. This is motivated by
the fact that a smaller input region often leads to more accurate segmentation.
In the training process, we use the ground-truth annotation to generate
accurate input regions and optimize network weights. At the testing stage, we
fix the network parameters and update the segmentation results in an iterative
manner. We evaluate our approach on the NIH pancreas segmentation dataset, and
outperform the state-of-the-art by more than 4%, measured by the average
Dice-Sørensen Coefficient (DSC). In addition, we report 62.43% DSC in the
worst case, which supports the reliability of our approach in clinical
applications. Comment: Accepted to MICCAI 2017 (8 pages, 3 figures)
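The fixed-point idea above — use the predicted mask to crop a smaller input, then re-segment until the prediction stabilizes — can be sketched as follows. This is a simplified illustration, not the paper's implementation: `segment_fn` is a hypothetical stand-in for the trained network, and the convergence test via Dice overlap of successive masks is an assumption.

```python
import numpy as np

def fixed_point_segment(volume, segment_fn, max_iters=10, tol=0.99):
    """Iteratively shrink the input region using the predicted mask.

    segment_fn is a stand-in for a trained network: it maps a sub-volume
    to a binary mask of the same shape.
    """
    mask = np.ones(volume.shape, dtype=bool)  # start from the whole volume
    for _ in range(max_iters):
        # Bounding box of the current mask defines the cropped input region.
        idx = np.argwhere(mask)
        lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
        crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        new_mask = np.zeros_like(mask)
        new_mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = segment_fn(crop)
        # Stop when successive masks agree (Dice overlap close to 1).
        inter = np.logical_and(mask, new_mask).sum()
        dice = 2.0 * inter / (mask.sum() + new_mask.sum() + 1e-8)
        mask = new_mask
        if dice >= tol:
            break
    return mask
```

At training time the paper instead derives the input region from the ground-truth annotation; the loop above corresponds to the testing stage with fixed network weights.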
DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation
Automatic organ segmentation is an important yet challenging problem for
medical image analysis. The pancreas is an abdominal organ with very high
anatomical variability. This inhibits previous segmentation methods from
achieving high accuracies, especially compared to other organs such as the
liver, heart or kidneys. In this paper, we present a probabilistic bottom-up
approach for pancreas segmentation in abdominal computed tomography (CT) scans,
using multi-level deep convolutional networks (ConvNets). We propose and
evaluate several variations of deep ConvNets in the context of hierarchical,
coarse-to-fine classification on image patches and regions, i.e. superpixels.
We first present a dense labeling of local image patches via P-ConvNet
and nearest neighbor fusion. Then we describe a regional
ConvNet (R1-ConvNet) that samples a set of bounding boxes around
each image superpixel at different scales of contexts in a "zoom-out" fashion.
Our ConvNets learn to assign class probabilities for each superpixel region of
being pancreas. Last, we study a stacked R2-ConvNet leveraging
the joint space of CT intensities and the P-ConvNet dense
probability maps. Both 3D Gaussian smoothing and 2D conditional random fields
are exploited as structured predictions for post-processing. We evaluate on CT
images of 82 patients in 4-fold cross-validation. We achieve a Dice Similarity
Coefficient of 83.6±6.3% in training and 71.8±10.7% in testing. Comment: To be presented at MICCAI 2015 - 18th International Conference on
Medical Image Computing and Computer Assisted Interventions, Munich, Germany
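The "zoom-out" sampling described above — a set of bounding boxes around each superpixel at increasing scales of context — can be sketched in a few lines. This is a hypothetical illustration of the sampling geometry only (the function name, the scale factors, and the square-box simplification are assumptions, not the paper's exact scheme):

```python
import numpy as np

def zoom_out_boxes(center, base_size, scales=(1.0, 1.5, 2.0), bounds=(512, 512)):
    """Sample square bounding boxes around a superpixel centroid at
    increasing scales of context ("zoom-out"), clipped to the image."""
    boxes = []
    for s in scales:
        half = int(round(base_size * s / 2))
        y0 = max(0, center[0] - half); y1 = min(bounds[0], center[0] + half)
        x0 = max(0, center[1] - half); x1 = min(bounds[1], center[1] + half)
        boxes.append((y0, x0, y1, x1))
    return boxes
```

Each box would then be resized to the ConvNet's fixed input size, so larger scales contribute more surrounding context for the same superpixel.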
3D FCN Feature Driven Regression Forest-Based Pancreas Localization and Segmentation
This paper presents a fully automated atlas-based pancreas segmentation
method from CT volumes utilizing 3D fully convolutional network (FCN)
feature-based pancreas localization. Segmentation of the pancreas is difficult
because it has larger inter-patient spatial variations than other organs.
Previous pancreas segmentation methods failed to deal with such variations. We
propose a fully automated pancreas segmentation method comprising novel
localization and segmentation steps. Since the pancreas neighbors many other organs,
its position and size are strongly related to the positions of the surrounding
organs. We estimate the position and size of the pancreas (localization) from
global features by regression forests. As global features, we use intensity
differences and 3D FCN deep learned features, which include automatically
extracted essential features for segmentation. We chose 3D FCN features from a
trained 3D U-Net, which is trained to perform multi-organ segmentation. The
global features include both the pancreas and surrounding organ information.
After localization, a patient-specific probabilistic atlas-based pancreas
segmentation is performed. In an evaluation with 146 CT volumes, we
achieved a Jaccard index of 60.6% and a Dice overlap of 73.9%. Comment: Presented in MICCAI 2017 workshop, DLMIA 2017 (Deep Learning in
Medical Image Analysis and Multimodal Learning for Clinical Decision Support)
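The localization step above regresses the pancreas bounding box from global features. As a loose, self-contained sketch of that idea, the toy "forest" below uses one-level decision stumps on random features with averaged predictions; it is a deliberately minimal stand-in for the regression forests in the paper, and all names and data here are hypothetical:

```python
import numpy as np

def fit_stump_forest(X, y, n_trees=25, seed=0):
    """A minimal stand-in for a regression forest: each 'tree' is a
    one-level stump on a random feature; predictions are averaged."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_trees):
        f = rng.integers(X.shape[1])   # pick a random feature
        t = np.median(X[:, f])         # split at its median
        left = X[:, f] <= t
        stumps.append((f, t, y[left].mean(axis=0), y[~left].mean(axis=0)))
    return stumps

def predict_stump_forest(stumps, x):
    """Average the per-stump predictions for one feature vector x."""
    preds = [lv if x[f] <= t else rv for f, t, lv, rv in stumps]
    return np.mean(preds, axis=0)
```

In the paper's setting, X would hold per-volume global features (intensity differences plus 3D FCN features) and y the pancreas position and size; a full regression forest would of course use deeper trees and proper split criteria.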
Feasibility of automated 3-dimensional magnetic resonance imaging pancreas segmentation.
Purpose: With the advent of MR-guided radiotherapy, internal organ motion can be imaged simultaneously during treatment. In this study, we evaluate the feasibility of pancreas MRI segmentation using state-of-the-art segmentation methods.
Methods and materials: T2-weighted HASTE and T1-weighted VIBE images were acquired on 3 patients and 2 healthy volunteers for a total of 12 imaging volumes. A novel dictionary learning (DL) method was used to segment the pancreas and was compared to mean-shift merging (MSM), distance-regularized level set (DRLS), and graph cuts (GC); the segmentation results were compared to manual contours using Dice's index (DI), Hausdorff distance, and shift of the center of the organ (SHIFT).
Results: All VIBE images were successfully segmented by at least one of the auto-segmentation methods, with DI >0.83 and SHIFT ≤2 mm using the best automated segmentation method. The automated segmentation error on HASTE images was significantly greater. DL was statistically superior to the other methods in Dice's overlapping index. For the Hausdorff distance and SHIFT measurements, DRLS and DL performed slightly better than the GC method, and substantially better than MSM. DL required the least human supervision and was faster to compute.
Conclusion: Our study demonstrated the potential feasibility of automated segmentation of the pancreas on MRI images with minimal human supervision at the beginning of image acquisition. The achieved accuracy is promising for organ localization.
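Two of the evaluation measures used above, Dice's index and the center-of-organ shift, are straightforward to compute from binary masks. A minimal sketch (function names are ours; the paper does not specify an implementation):

```python
import numpy as np

def dice_index(a, b):
    """Dice's index between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def center_shift(a, b, spacing=1.0):
    """Euclidean distance between the centers of mass of two masks,
    in physical units given the voxel spacing."""
    ca = np.argwhere(a).mean(axis=0)
    cb = np.argwhere(b).mean(axis=0)
    return float(np.linalg.norm((ca - cb) * spacing))
```

The Hausdorff distance, the third measure, is more involved (maximum over minimum surface distances) and is available as `scipy.spatial.distance.directed_hausdorff` for point sets.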
Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation
We aim at segmenting small organs (e.g., the pancreas) from abdominal CT
scans. As the target often occupies a relatively small region in the input
image, deep neural networks can be easily confused by the complex and variable
background. To alleviate this, researchers proposed a coarse-to-fine approach,
which used prediction from the first (coarse) stage to indicate a smaller input
region for the second (fine) stage. Despite its effectiveness, this algorithm
dealt with the two stages individually, so it lacked a global energy function
to optimize and was limited in its ability to incorporate multi-stage visual cues.
The missing contextual information led to unsatisfactory convergence across iterations,
and the fine stage sometimes produced even lower segmentation accuracy
than the coarse stage.
This paper presents a Recurrent Saliency Transformation Network. The key
innovation is a saliency transformation module, which repeatedly converts the
segmentation probability map from the previous iteration into spatial weights
and applies these weights to the current iteration. This brings us two-fold
benefits. In training, it allows joint optimization over the deep networks
dealing with different input scales. In testing, it propagates multi-stage
visual information throughout iterations to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate
state-of-the-art accuracy, which outperforms the previous best by an average of
over 2%. Much higher accuracies are also reported on several small organs in a
larger dataset collected by ourselves. In addition, our approach enjoys better
convergence properties, making it more efficient and reliable in practice. Comment: Accepted to CVPR 2018 (10 pages, 6 figures)
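The core operation of the saliency transformation module — turning the previous iteration's probability map into spatial weights on the current input — can be illustrated outside a network. This is a simplified, hypothetical sketch (in the paper the transformation is a learnable module trained jointly with the segmentation network; the fixed `floor` re-weighting here is our assumption):

```python
import numpy as np

def saliency_transform(image, prob_map, floor=0.1):
    """Convert a segmentation probability map into spatial weights and
    apply them to the input, keeping the background faintly visible so
    missed regions can still be recovered in later iterations."""
    weights = floor + (1.0 - floor) * prob_map
    return image * weights
```

Iterating segmentation on the re-weighted input is what lets multi-stage visual cues propagate, rather than treating the coarse and fine stages as two disconnected networks.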
Intelligent Segmentation of Medical Images using Fuzzy Bitplane Thresholding
The performance of assessment in medical image segmentation is highly correlated with the extraction of anatomic structures from the images, and the major task is how to separate the regions of interest from the background and soft tissues successfully. This paper proposes a fuzzy-logic-based bitplane method to automatically segment the background and locate the region of interest in medical images. The segmentation algorithm consists of three steps: identification, rule firing, and inference. In the first step, we begin by identifying the bitplanes that represent the lungs clearly. For this purpose, the intensity value of each pixel is separated into bitplanes. In the second step, the triple signum function assigns an optimum threshold based on the grayscale values of the anatomical structures present in the medical images. Fuzzy rules are formed based on the available bitplanes to form the membership table and are stored in a knowledge base. Finally, the rules are fired to assign final segmentation values through the inference process. The proposed new metrics are used to measure the accuracy of the segmentation method. From the analysis, it is observed that the proposed metrics are more suitable for the estimation of segmentation accuracy. The results obtained from this work show that the proposed method performs segmentation effectively for different classes of medical images.
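The first step above, separating each pixel's intensity into bitplanes, is a standard decomposition. A minimal sketch for 8-bit grayscale images (the fuzzy rule firing and inference stages are specific to the paper and not reproduced here):

```python
import numpy as np

def bitplanes(img):
    """Decompose an 8-bit grayscale image into its 8 bitplanes
    (plane 7 holds the most significant bit, plane 0 the least)."""
    img = img.astype(np.uint8)
    return [(img >> k) & 1 for k in range(8)]
```

The higher-order planes carry most of the visually significant structure, which is why the method looks there first for planes that represent the anatomy clearly.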