Reply to Lee and colleagues—Viral posterior uveitis
No abstract available
Factorised spatial representation learning: application in semi-supervised myocardial segmentation
The success and generalisation of deep learning algorithms heavily depend on
learning good feature representations. In medical imaging this entails
representing anatomical information, as well as properties related to the
specific imaging setting. Anatomical information is required to perform further
analysis, whereas imaging information is key to disentangle scanner variability
and potential artefacts. The ability to factorise these would allow algorithms to be trained only on the information relevant to a given task. To
date, such factorisation has not been attempted. In this paper, we propose a
methodology of latent space factorisation relying on the cycle-consistency
principle. As an example application, we consider cardiac MR segmentation,
where we separate information related to the myocardium from other features
related to imaging and surrounding substructures. We demonstrate the proposed
method's utility in a semi-supervised setting: we use very few labelled images
together with many unlabelled images to train a myocardium segmentation neural
network. Specifically, we achieve comparable performance to fully supervised
networks using a fraction of labelled images in experiments on ACDC and a
dataset from Edinburgh Imaging Facility QMRI. Code will be made available at
https://github.com/agis85/spatial_factorisation. Accepted at MICCAI 2018.
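
A rough sketch of the factorisation idea (an illustrative PyTorch example, not the authors' released code): an image is encoded into a spatial anatomy map and a small non-spatial imaging vector, both factors are decoded back to reconstruct the input, and a segmentation head reads only the anatomy factor. Module shapes, layer sizes, and the loss split are assumptions.

import torch
import torch.nn as nn

class FactorisedAutoencoder(nn.Module):
    # Hypothetical module: splits an image into a spatial anatomy map and a
    # non-spatial imaging vector, then reconstructs the input from both.
    def __init__(self, z_dim=8):
        super().__init__()
        # Spatial encoder: keeps the H x W grid and outputs a soft anatomy map.
        self.anatomy_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
        # Non-spatial encoder: collapses the image into a small imaging vector.
        self.imaging_enc = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, z_dim))
        # Decoder recombines both factors to reconstruct the input image.
        self.decoder = nn.Sequential(
            nn.Conv2d(1 + z_dim, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
        # Segmentation head reads the anatomy factor only.
        self.seg_head = nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, x):
        anatomy = self.anatomy_enc(x)                              # B x 1 x H x W
        imaging = self.imaging_enc(x)                              # B x z_dim
        tiled = imaging[:, :, None, None].expand(-1, -1, *x.shape[-2:])
        recon = self.decoder(torch.cat([anatomy, tiled], dim=1))   # B x 1 x H x W
        seg = self.seg_head(anatomy)                               # B x 1 x H x W
        return anatomy, imaging, recon, seg

# All images contribute a reconstruction loss on `recon`; only the few
# labelled images add a segmentation loss on `seg`, which is how the
# semi-supervised setting exploits the unlabelled data.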
Can a single image processing algorithm work equally well across all phases of DCE-MRI?
Image segmentation and registration are considered challenging when applied to dynamic contrast-enhanced MRI (DCE-MRI) sequences. The contrast agent causes
rapid changes in intensity in the region of interest and elsewhere, which can
lead to false positive predictions for segmentation tasks and confound the
image registration similarity metric. While it is widely assumed that contrast
changes increase the difficulty of these tasks, to our knowledge no work has
quantified these effects. In this paper we examine the effect of training with
different ratios of contrast-enhanced (CE) data on two popular tasks: segmentation, with nnU-Net and Mask R-CNN, and registration, with VoxelMorph and VTN. We further experimented with strategic use of the available datasets, pretraining and fine-tuning on different splits of the data. We found that, to create a generalisable model, pretraining with CE data and fine-tuning with non-CE data gave the best results. This finding could be extended to other deep-learning-based image processing tasks on DCE-MRI and could significantly improve model performance.
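
A minimal sketch of the pretrain-then-fine-tune schedule reported as best above, assuming a generic PyTorch model and pre-built CE and non-CE data loaders; the optimiser, learning rates, and epoch counts are placeholders rather than the paper's settings.

import torch

def run_epochs(model, loader, optimiser, loss_fn, epochs, device="cuda"):
    # Generic supervised training loop reused for both phases.
    model.to(device).train()
    for _ in range(epochs):
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            optimiser.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            optimiser.step()

def pretrain_then_finetune(model, ce_loader, non_ce_loader, loss_fn):
    # Phase 1: pretrain on contrast-enhanced (CE) frames.
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    run_epochs(model, ce_loader, optimiser, loss_fn, epochs=50)
    # Phase 2: fine-tune on non-CE frames with a lower learning rate, aiming
    # for a single model that works across all phases of the DCE-MRI sequence.
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    run_epochs(model, non_ce_loader, optimiser, loss_fn, epochs=20)
    return model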
Disentangled representation learning in cardiac image analysis
Typically, a medical image offers spatial information on the anatomy (and pathology) modulated by imaging-specific characteristics. Many imaging modalities, including Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can be interpreted in this way. We can venture further and consider that a medical image naturally factors into spatial factors depicting anatomy and factors that denote the imaging characteristics. Here, we explicitly learn this decomposed (disentangled) representation of imaging data, focusing in particular on cardiac images. We propose the Spatial Decomposition Network (SDNet), which factorises 2D medical images into spatial anatomical factors and non-spatial modality factors. We demonstrate that this high-level representation is ideally suited for several medical image analysis tasks, such as semi-supervised segmentation, multi-task segmentation and regression, and image-to-image synthesis. Specifically, we show that our model can match the performance of fully supervised segmentation models, using only a fraction of the labelled images. Critically, we show that our factorised representation also benefits from supervision obtained either when we use auxiliary tasks to train the model in a multi-task setting (e.g. regressing to known cardiac indices), or when aggregating multimodal data from different sources (e.g. pooling together MRI and CT data). To explore the properties of the learned factorisation, we perform latent-space arithmetic and show that we can synthesise CT from MR and vice versa by swapping the modality factors. We also demonstrate that the factor holding image-specific information can be used to predict the input modality with high accuracy. Code will be made available at https://github.com/agis85/anatomy_modality_decomposition.
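
The latent-space arithmetic described above can be sketched as follows, assuming an SDNet-like model that exposes encode_anatomy, encode_modality, and decode methods (a hypothetical interface for illustration, not the released code).

import torch

@torch.no_grad()
def cross_modality_synthesis(model, mr_image, ct_image):
    # Spatial anatomy factors keep each patient's structure.
    mr_anatomy = model.encode_anatomy(mr_image)
    ct_anatomy = model.encode_anatomy(ct_image)
    # Non-spatial modality factors capture how each modality renders tissue.
    mr_style = model.encode_modality(mr_image)
    ct_style = model.encode_modality(ct_image)
    # Swapping the modality factors re-renders each anatomy in the other modality.
    fake_ct = model.decode(mr_anatomy, ct_style)   # MR anatomy with CT appearance
    fake_mr = model.decode(ct_anatomy, mr_style)   # CT anatomy with MR appearance
    return fake_ct, fake_mr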