Automated Segmentation of Pulmonary Lobes using Coordination-Guided Deep Neural Networks
The identification of pulmonary lobes is of great importance in disease diagnosis and treatment, as several lung diseases manifest regional disorders at the lobar level. An accurate segmentation of the pulmonary lobes is therefore necessary. In this work, we propose an automated method for segmenting pulmonary lobes from chest CT images using coordination-guided deep neural networks. We first employ automated lung segmentation to extract the lung area from the CT image, then apply a volumetric convolutional neural network (V-net) to segment the pulmonary lobes. To reduce misclassification between different lobes, we adopt coordination-guided convolutional layers (CoordConvs) that generate additional feature maps encoding the positional information of the pulmonary lobes. The proposed model is trained and evaluated on several publicly available datasets and achieves state-of-the-art accuracy with a mean Dice coefficient of 0.947 ± 0.044.
Comment: ISBI 2019 (Oral)
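The CoordConv layers described above augment the network's feature maps with explicit positional channels before convolution. A minimal NumPy sketch of that augmentation for a 3-D volume, assuming channels-first layout and coordinates normalized to [-1, 1] (the function name and normalization range are illustrative, not taken from the paper):

```python
import numpy as np

def add_coord_channels(features):
    """Append normalized (z, y, x) coordinate channels to a 3-D feature
    volume, so subsequent convolutions can condition on position.

    features: array of shape (C, D, H, W). Returns (C + 3, D, H, W).
    """
    c, d, h, w = features.shape
    # Coordinate grids, each of shape (D, H, W), normalized to [-1, 1].
    zz, yy, xx = np.meshgrid(
        np.linspace(-1.0, 1.0, d),
        np.linspace(-1.0, 1.0, h),
        np.linspace(-1.0, 1.0, w),
        indexing="ij",
    )
    coords = np.stack([zz, yy, xx]).astype(features.dtype)
    # Concatenate along the channel axis.
    return np.concatenate([features, coords], axis=0)
```

In a full model these extra channels would be concatenated before selected convolutional layers of the V-net, letting the filters distinguish, e.g., upper from lower lobes by position alone.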
Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation
We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region of the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach, which used the prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with the two stages individually, lacking a global energy function to optimize, which limited its ability to incorporate multi-stage visual cues. The missing contextual information led to unsatisfactory convergence across iterations, and the fine stage sometimes produced even lower segmentation accuracy than the coarse stage.
This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration into spatial weights and applies these weights to the current iteration. This brings two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information across iterations to improve segmentation accuracy. Experiments on the NIH pancreas segmentation dataset demonstrate state-of-the-art accuracy, outperforming the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset that we collected. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice.
Comment: Accepted to CVPR 2018 (10 pages, 6 figures)
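The saliency transformation step, turning the previous iteration's probability map into spatial weights and reweighting the input for the next pass, can be sketched as follows. The paper learns this probability-to-weight mapping (e.g., with a small convolution), so the fixed sigmoid used here is only an illustrative stand-in, and the function and parameter names are assumptions:

```python
import numpy as np

def saliency_transform(image, prev_prob, gain=5.0, bias=0.5):
    """One illustrative recurrent step: map the previous segmentation
    probability map to per-voxel spatial weights via a sigmoid, then
    reweight the input image so the next iteration focuses on the
    region the last iteration considered salient."""
    weights = 1.0 / (1.0 + np.exp(-gain * (prev_prob - bias)))
    return image * weights
```

In the actual network this step would sit between successive forward passes, so gradients can flow through the weights and both scales are optimized jointly, which is the "joint optimization" benefit noted in the abstract.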
Extracting Lungs from CT Images using Fully Convolutional Networks
Analysis of cancer and other pathological diseases, such as interstitial lung diseases (ILDs), is usually performed using Computed Tomography (CT) scans. To aid this analysis, a segmentation preprocessing step is applied to reduce the area to be analyzed, segmenting the lungs and removing unimportant regions. Generally, complex methods are developed to extract the lung region, often relying on hand-crafted feature extractors to enhance segmentation. Motivated by the popularity of deep learning techniques and their automated feature learning, we propose a lung segmentation approach using fully convolutional networks (FCNs) combined with fully connected conditional random fields (CRFs), as employed in many state-of-the-art segmentation works. Aiming to develop a generalized approach, the publicly available datasets from the University Hospitals of Geneva (HUG) and the VESSEL12 challenge were studied, including many healthy and pathological CT scans for evaluation. Experiments were conducted using each dataset individually, using the model trained on one dataset and tested on the other, and using a combination of both datasets. The Dice scores achieved outperformed prior works on the HUG-ILD dataset and matched state-of-the-art results on the VESSEL12 dataset, showing the capability of deep learning approaches.
Comment: Accepted for presentation at the International Joint Conference on Neural Networks (IJCNN) 201
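All three abstracts report the Dice coefficient as their evaluation metric. A minimal NumPy implementation for binary masks (the function name and smoothing constant are illustrative):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks:
    2 * |pred ∩ target| / (|pred| + |target|), with a small epsilon
    to avoid division by zero when both masks are empty."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means perfect overlap and 0.0 means none; for multi-class lobe segmentation the metric would typically be computed per lobe and averaged.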