Factorised spatial representation learning: application in semi-supervised myocardial segmentation
The success and generalisation of deep learning algorithms heavily depend on
learning good feature representations. In medical imaging this entails
representing anatomical information, as well as properties related to the
specific imaging setting. Anatomical information is required to perform further
analysis, whereas imaging information is key to disentangle scanner variability
and potential artefacts. The ability to factorise these would allow
training algorithms only on the information relevant to the task at hand. To
date, such factorisation has not been attempted. In this paper, we propose a
methodology of latent space factorisation relying on the cycle-consistency
principle. As an example application, we consider cardiac MR segmentation,
where we separate information related to the myocardium from other features
related to imaging and surrounding substructures. We demonstrate the proposed
method's utility in a semi-supervised setting: we use very few labelled images
together with many unlabelled images to train a myocardium segmentation neural
network. Specifically, we achieve comparable performance to fully supervised
networks using a fraction of labelled images in experiments on ACDC and a
dataset from Edinburgh Imaging Facility QMRI. Code will be made available at
https://github.com/agis85/spatial_factorisation.
Comment: Accepted in MICCAI 201
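The core idea above is a latent space split into a spatial "anatomy" factor and a residual "imaging" factor, tied together by cycle-consistency (encoding an image and decoding the factors should recover the image). A minimal, hypothetical toy sketch of that principle — not the paper's neural architecture, and with an invented thresholding encoder purely for illustration — might look like:

```python
import numpy as np

# Hypothetical toy factorisation: split an image x into a mask-like
# spatial "anatomy" factor s and a residual "imaging" factor z, such
# that a decoder can reconstruct x from (s, z). Cycle-consistency is
# the requirement decode(encode(x)) ~= x.

def encode(x, thresh=0.5):
    s = (x > thresh).astype(float)  # spatial factor (binary, mask-like)
    z = x - s * thresh              # residual imaging factor
    return s, z

def decode(s, z, thresh=0.5):
    # recombine the two factors into an image
    return s * thresh + z

x = np.random.rand(8, 8)            # stand-in for a cardiac MR slice
s, z = encode(x)
x_hat = decode(s, z)
cycle_error = float(np.abs(x - x_hat).max())  # ~0 for this exact toy split
```

In the semi-supervised setting described, only the spatial factor `s` would be supervised with the few available myocardium labels, while the cycle-consistency reconstruction term trains on the many unlabelled images.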
Challenges in Disentangling Independent Factors of Variation
We study the problem of building models that disentangle independent factors
of variation. Such models could be used to encode features that can efficiently
be used for classification and to transfer attributes between different images
in image synthesis. As training data we use a weakly labeled set: the weak
labels indicate which single factor has changed between two data samples,
although the relative value of the change is unknown. This labeling is of
particular interest as it may be readily available without annotation costs. To
make use of weak labels we introduce an autoencoder model and train it through
constraints on image pairs and triplets. We formally prove that without
additional knowledge there is no guarantee that two images with the same factor
of variation will be mapped to the same feature. We call this issue the
reference ambiguity. Moreover, we show the role of the feature dimensionality
and adversarial training. We demonstrate experimentally that the proposed model
can successfully transfer attributes on several datasets, but also show cases
where the reference ambiguity occurs.
Comment: Submitted to ICLR 201
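The pair constraint and the attribute transfer described above can be illustrated with a toy sketch. This is hypothetical and not the paper's autoencoder: here each "encoding" is just a vector split into a shared block and a varied block, the weak label on a pair says only the varied block changed, and transfer swaps the varied block between two encodings.

```python
import numpy as np

# Toy illustration of weak-label pair constraints (hypothetical layout):
# a feature f is a concatenation [f_shared, f_varied]. A weakly labeled
# pair (f1, f2) is known to differ only in the varied block, so a pair
# constraint penalises any difference in the shared block.

def split(f, d_shared):
    return f[:d_shared], f[d_shared:]

def pair_loss(f1, f2, d_shared):
    # penalise disagreement on the factors the weak label says are unchanged
    s1, _ = split(f1, d_shared)
    s2, _ = split(f2, d_shared)
    return float(np.sum((s1 - s2) ** 2))

def transfer_attribute(f_content, f_style, d_shared):
    # attribute transfer: keep one sample's shared block,
    # take the other sample's varied block
    s, _ = split(f_content, d_shared)
    _, v = split(f_style, d_shared)
    return np.concatenate([s, v])
```

The reference ambiguity result says that, without extra knowledge, constraints of this kind cannot guarantee that two inputs with the same underlying factor value land on the same feature vector; the sketch only shows what the constraints measure, not that they identify the factors.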
- …