Synergistic Image and Feature Adaptation: Towards Cross-Modality Domain Adaptation for Medical Image Segmentation
This paper presents a novel unsupervised domain adaptation framework, called
Synergistic Image and Feature Adaptation (SIFA), to effectively tackle the
problem of domain shift. Domain adaptation has become an important and active
topic in recent deep learning research, aiming to recover the performance
degradation that occurs when neural networks are applied to new testing domains.
Our proposed SIFA is an elegant learning paradigm that presents a synergistic fusion
of adaptations from both image and feature perspectives. In particular, we
simultaneously transform the appearance of images across domains and enhance
domain-invariance of the extracted features towards the segmentation task. The
feature encoder layers are shared by both perspectives to grasp their mutual
benefits during the end-to-end learning procedure. Without using any annotation
from the target domain, the learning of our unified model is guided by
adversarial losses, with multiple discriminators employed from various aspects.
We have extensively validated our method with a challenging application of
cross-modality medical image segmentation of cardiac structures. Experimental
results demonstrate that our SIFA model recovers the degraded performance from
17.2% to 73.0%, and outperforms the state-of-the-art methods by a significant
margin.
Comment: AAAI 2019 (oral)
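The adversarial guidance described in the abstract can be illustrated with a minimal sketch. The loss shapes below are a generic simplification of adversarial feature alignment, not the exact SIFA objectives: a discriminator is trained to tell source features from target features, while the feature extractor is trained to fool it on the target domain.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_losses(d_source, d_target):
    """Binary cross-entropy objectives for unsupervised adversarial
    feature alignment (a generic sketch, not the SIFA implementation).

    d_source, d_target: discriminator logits on source / target features.
    The discriminator is pushed to output 1 on source and 0 on target;
    the feature extractor minimizes loss_feat to make target features
    indistinguishable from source features.
    """
    eps = 1e-12  # numerical guard inside the logarithms
    p_s, p_t = sigmoid(d_source), sigmoid(d_target)
    loss_disc = -np.mean(np.log(p_s + eps)) - np.mean(np.log(1.0 - p_t + eps))
    loss_feat = -np.mean(np.log(p_t + eps))  # adversarial term for the extractor
    return loss_disc, loss_feat

# Toy logits: the discriminator currently separates the two domains well,
# so its own loss is small while the extractor's adversarial loss is large.
loss_disc, loss_feat = adversarial_losses(np.array([2.0, 3.0]),
                                          np.array([-2.0, -3.0]))
```

In the full framework, several such discriminators (on translated images and on segmentation-related outputs) would provide the "multiple discriminators employed from various aspects" mentioned above.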
MILD-Net: Minimal Information Loss Dilated Network for Gland Instance Segmentation in Colon Histology Images
The analysis of glandular morphology within colon histopathology images is an
important step in determining the grade of colon cancer. Despite the importance
of this task, manual segmentation is laborious, time-consuming and can suffer
from subjectivity among pathologists. The rise of computational pathology has
led to the development of automated methods for gland segmentation that aim to
overcome the challenges of manual segmentation. However, this task is
non-trivial due to the large variability in glandular appearance and the
difficulty in differentiating between certain glandular and non-glandular
histological structures. Furthermore, a measure of uncertainty is essential for
diagnostic decision making. To address these challenges, we propose a fully
convolutional neural network that counters the loss of information caused by
max-pooling by re-introducing the original image at multiple points within the
network. We also use atrous spatial pyramid pooling with varying dilation rates
for preserving the resolution and multi-level aggregation. To incorporate
uncertainty, we introduce random transformations during test time for an
enhanced segmentation result that simultaneously generates an uncertainty map,
highlighting areas of ambiguity. We show that this map can be used to define a
metric for disregarding predictions with high uncertainty. The proposed network
achieves state-of-the-art performance on the GlaS challenge dataset and on a
second independent colorectal adenocarcinoma dataset. In addition, we perform
gland instance segmentation on whole-slide images from two further datasets to
highlight the generalisability of our method. As an extension, we introduce
MILD-Net+ for simultaneous gland and lumen segmentation, to increase the
diagnostic power of the network.
Comment: Initial version published at Medical Imaging with Deep Learning (MIDL) 201
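The test-time transformation scheme described above can be sketched as follows. The flip-based augmentations, the averaging, and the standard-deviation uncertainty map are illustrative assumptions about one simple way to realise the idea, not the MILD-Net implementation.

```python
import numpy as np

def tta_predict(model, image):
    """Test-time augmentation for segmentation: run the model on
    transformed copies of the input, invert each transform on the
    prediction, then return the mean probability map together with a
    per-pixel uncertainty map (standard deviation across augmentations).
    """
    # Each entry is (forward transform, inverse transform); flips are
    # their own inverses, so the pairs are symmetric here.
    transforms = [
        (lambda x: x,          lambda y: y),           # identity
        (lambda x: x[::-1, :], lambda y: y[::-1, :]),  # vertical flip
        (lambda x: x[:, ::-1], lambda y: y[:, ::-1]),  # horizontal flip
    ]
    preds = np.stack([inv(model(fwd(image))) for fwd, inv in transforms])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy stand-in "model": identity map from image to probability map.
rng = np.random.default_rng(0)
image = rng.random((4, 4))
mean_map, unc_map = tta_predict(lambda x: x, image)
confident = mean_map[unc_map < 0.1]  # disregard high-uncertainty pixels
```

Thresholding the uncertainty map, as in the last line, is one way to define the kind of metric for disregarding unreliable predictions that the abstract mentions.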