The Unreasonable Effectiveness of Encoder-Decoder Networks for Retinal Vessel Segmentation
We propose an encoder-decoder framework for the segmentation of blood vessels
in retinal images that relies on the extraction of large-scale patches at
multiple image scales during training. Experiments on three fundus image
datasets demonstrate that this approach achieves state-of-the-art results and
can be implemented using a simple and efficient fully-convolutional network
with a parameter count of less than 0.8M. Furthermore, we show that this
framework, called VLight, avoids overfitting to specific training images and
generalizes well across different datasets, making it highly suitable for
real-world applications that require robustness, accuracy, and low inference
times on high-resolution fundus images.
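The multi-scale patch sampling described above can be sketched as follows. This is a minimal, dependency-free illustration assuming random square crops at several scales that are resized back to a fixed patch size; the function name, scale set, and nearest-neighbour resizing are hypothetical stand-ins for the paper's actual training pipeline.

```python
import numpy as np

def extract_multiscale_patches(image, base=64, scales=(1, 2, 4), rng=None):
    """Sample one random patch per scale and bring each to base x base.

    Hypothetical sketch of multi-scale patch extraction; VLight's exact
    sampling and resizing scheme may differ.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    patches = []
    for s in scales:
        size = base * s                      # larger scale -> larger crop
        y = rng.integers(0, h - size + 1)    # random top-left corner
        x = rng.integers(0, w - size + 1)
        patch = image[y:y + size, x:x + size]
        # Strided indexing gives nearest-neighbour downsampling without
        # extra dependencies; a real pipeline would use proper resampling.
        patches.append(patch[::s, ::s])
    return patches
```

Each returned patch has the same spatial size but covers a different fraction of the fundus image, which is one way to expose the network to both fine vessel detail and global context.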
Semi-supervised Medical Image Segmentation via Learning Consistency Under Transformations
The scarcity of labeled data often limits the application of supervised deep
learning techniques for medical image segmentation. This has motivated the
development of semi-supervised techniques that learn from a mixture of labeled
and unlabeled images. In this paper, we propose a novel semi-supervised method
that, in addition to supervised learning on labeled training images, learns to
predict segmentations consistent under a given class of transformations on both
labeled and unlabeled images. More specifically, in this work we explore
learning equivariance to elastic deformations. We implement this through: 1) a
Siamese architecture with two identical branches, each of which receives a
differently transformed image, and 2) a composite loss function with a
supervised segmentation loss term and an unsupervised term that encourages
segmentation consistency between the predictions of the two branches. We
evaluate the method on a public dataset of chest radiographs with segmentations
of anatomical structures using 5-fold cross-validation. The proposed method
reaches significantly higher segmentation accuracy compared to supervised
learning. This is due to learning transformation consistency on both labeled
and unlabeled images, with the latter contributing the most. We achieve
performance comparable to state-of-the-art chest X-ray segmentation methods
while using substantially fewer labeled images.
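The unsupervised consistency term described above can be sketched as follows. This is a minimal illustration assuming the Siamese branches share weights (so a single `model` function suffices) and that the transformation can be applied to both inputs and predicted segmentation maps; `model` and `transform` are hypothetical stand-ins for the paper's segmentation network and elastic deformation.

```python
import numpy as np

def consistency_loss(model, image, transform):
    """Equivariance term: the prediction for a transformed image should
    match the transformed prediction of the original image.

    Sketch only; the paper's composite loss adds a supervised
    segmentation term on labeled images.
    """
    pred_of_transformed = model(transform(image))   # branch 1
    transformed_pred = transform(model(image))      # branch 2 target
    return float(np.mean((pred_of_transformed - transformed_pred) ** 2))
```

As a sanity check, a perfectly equivariant model (e.g. the identity map) incurs zero consistency loss for any transformation, while a model that ignores the deformation is penalized; minimizing this term on unlabeled images is what lets them contribute to training.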