MILD-Net: Minimal Information Loss Dilated Network for Gland Instance Segmentation in Colon Histology Images
The analysis of glandular morphology within colon histopathology images is an
important step in determining the grade of colon cancer. Despite the importance
of this task, manual segmentation is laborious, time-consuming and can suffer
from subjectivity among pathologists. The rise of computational pathology has
led to the development of automated methods for gland segmentation that aim to
overcome the challenges of manual segmentation. However, this task is
non-trivial due to the large variability in glandular appearance and the
difficulty in differentiating between certain glandular and non-glandular
histological structures. Furthermore, a measure of uncertainty is essential for
diagnostic decision making. To address these challenges, we propose a fully
convolutional neural network that counters the loss of information caused by
max-pooling by re-introducing the original image at multiple points within the
network. We also use atrous spatial pyramid pooling with varying dilation rates
for preserving the resolution and multi-level aggregation. To incorporate
uncertainty, we introduce random transformations during test time for an
enhanced segmentation result that simultaneously generates an uncertainty map,
highlighting areas of ambiguity. We show that this map can be used to define a
metric for disregarding predictions with high uncertainty. The proposed network
achieves state-of-the-art performance on the GlaS challenge dataset and on a
second independent colorectal adenocarcinoma dataset. In addition, we perform
gland instance segmentation on whole-slide images from two further datasets to
highlight the generalisability of our method. As an extension, we introduce
MILD-Net+ for simultaneous gland and lumen segmentation, to increase the
diagnostic power of the network.

Comment: Initial version published at Medical Imaging with Deep Learning (MIDL) 201
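The test-time random-transformation idea above can be illustrated with a minimal numpy sketch (not the authors' implementation; the specific transformations and the `model` callable here are assumptions): predictions are made under invertible augmentations, mapped back to the original frame, and averaged, while their per-pixel standard deviation serves as the uncertainty map used to disregard ambiguous predictions.

```python
import numpy as np

def tta_segment(image, model):
    """Test-time augmentation: average predictions over invertible
    flips/rotations and report per-pixel std as an uncertainty map."""
    transforms = [
        (lambda a: a,              lambda a: a),
        (np.fliplr,                np.fliplr),
        (np.flipud,                np.flipud),
        (lambda a: np.rot90(a, 1), lambda a: np.rot90(a, -1)),
        (lambda a: np.rot90(a, 2), lambda a: np.rot90(a, -2)),
        (lambda a: np.rot90(a, 3), lambda a: np.rot90(a, -3)),
    ]
    preds = []
    for fwd, inv in transforms:
        # predict on the transformed image, then undo the transform
        preds.append(inv(model(fwd(image))))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

def keep_confident(pred, uncertainty, tau):
    """Disregard pixels whose uncertainty exceeds the threshold tau."""
    return np.where(uncertainty <= tau, pred, np.nan)
```

Here `model` stands in for the trained segmentation network; any callable mapping an image to a same-shaped probability map works for the sketch.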
Micro-Net: A unified model for segmentation of various objects in microscopy images
Object segmentation and structure localization are important steps in
automated image analysis pipelines for microscopy images. We present a
convolutional neural network (CNN) based deep learning architecture for
segmentation of objects in microscopy images. The proposed network can be used
to segment cells, nuclei and glands in fluorescence microscopy and histology
images after slight tuning of input parameters. The network trains at multiple
resolutions of the input image, connects the intermediate layers for better
localization and context, and generates the output using multi-resolution
deconvolution filters. The extra convolutional layers which bypass the
max-pooling operation allow the network to train for variable input intensities
and object sizes, and make it robust to noisy data. We compare our results on
publicly available data sets and show that the proposed network outperforms
recent deep learning algorithms.
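The multiple-resolution training described above can be sketched as an input pyramid built by repeated 2x2 average pooling. This is a hedged numpy illustration of the idea, not the Micro-Net architecture itself; the pooling scheme and level count are assumptions.

```python
import numpy as np

def avg_pool2(a):
    """2x2 average pooling (assumes even height and width)."""
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def input_pyramid(image, levels=3):
    """Downsampled copies of the input, one per resolution level,
    which a multi-resolution network would consume in parallel."""
    pyr = [image]
    for _ in range(levels - 1):
        pyr.append(avg_pool2(pyr[-1]))
    return pyr
```

Average pooling preserves the mean intensity across levels, so each coarser copy summarizes the same content at lower spatial resolution.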
Tumor segmentation in whole slide images using persistent homology and deep convolutional features
This paper presents a novel automated tumor segmentation approach for Hematoxylin & Eosin stained histology images. The proposed method enhances segmentation performance by combining topological and convolutional neural network (CNN) features. Our approach is based on three steps: (1) construct enhanced persistent homology profiles from topological features; (2) train a CNN to extract convolutional features; (3) employ a multi-stage ensemble strategy to combine Random Forest regression models. The experimental results demonstrate that the proposed method outperforms a conventional CNN.
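The persistent homology profiles in step (1) rest on the persistence of image-intensity features. As a simplified, self-contained illustration (not the authors' enhanced profiles), the 0-dimensional sublevel-set persistence of a grayscale patch can be computed with a single union-find pass over pixels in order of increasing intensity:

```python
import numpy as np

def persistence_0d(img):
    """0-dimensional sublevel-set persistence of a 2D grayscale array.

    Pixels enter in order of increasing intensity (4-connectivity).
    Each pixel either starts a new connected component (a birth) or
    merges two components, killing the younger one (the elder rule).
    Returns sorted (birth, death) pairs; one component never dies.
    """
    h, w = img.shape
    parent, birth, pairs = {}, {}, []

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    for idx in np.argsort(img, axis=None, kind="stable"):
        y, x = divmod(int(idx), w)
        v = float(img[y, x])
        parent[(y, x)] = (y, x)
        birth[(y, x)] = v
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb = (y + dy, x + dx)
            if nb not in parent:
                continue
            ra, rb = find((y, x)), find(nb)
            if ra == rb:
                continue
            if birth[ra] > birth[rb]:
                ra, rb = rb, ra          # ra is the elder component
            if birth[rb] < v:            # skip zero-persistence pairs
                pairs.append((birth[rb], v))
            parent[rb] = ra
    for r in {find(p) for p in parent}:
        pairs.append((birth[r], float("inf")))
    return sorted(pairs)
```

Summary statistics of such (birth, death) pairs could then be concatenated with CNN features and fed to a regression ensemble; the profiles used in the paper are more elaborate than this sketch.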