Simultaneous synthesis of FLAIR and segmentation of white matter hypointensities from T1 MRIs
Segmenting vascular pathologies such as white matter lesions in brain
magnetic resonance images (MRIs) requires the acquisition of multiple
sequences, such as T1-weighted (T1-w), on which lesions appear hypointense,
and fluid-attenuated inversion recovery (FLAIR), on which lesions appear
hyperintense. However, most existing retrospective datasets do not include
FLAIR sequences. Existing missing-modality imputation methods separate the
process of imputation from the process of segmentation. In this
paper, we propose a method to link both modality imputation and segmentation
using convolutional neural networks. We show that by jointly optimizing the
imputation network and the segmentation network, the method not only produces
more realistic synthetic FLAIR images from T1-w images, but also improves the
segmentation of white matter hypointensities (WMH) from T1-w images alone.
Comment: Conference on Medical Imaging with Deep Learning (MIDL) 201
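The joint optimisation described above can be sketched as a single objective whose gradients reach both networks. The L1 synthesis term, the soft-Dice segmentation term, and the weighting `lam` below are illustrative assumptions, not the paper's exact loss choices:

```python
import numpy as np

def l1_synthesis_loss(synth_flair, real_flair):
    """Voxel-wise L1 between synthetic and real FLAIR (assumed loss choice)."""
    return np.mean(np.abs(synth_flair - real_flair))

def dice_loss(pred_mask, true_mask, eps=1e-6):
    """Soft Dice loss for the lesion segmentation (assumed loss choice)."""
    inter = np.sum(pred_mask * true_mask)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred_mask) + np.sum(true_mask) + eps)

def joint_loss(synth_flair, real_flair, pred_mask, true_mask, lam=1.0):
    """One scalar objective linking imputation and segmentation, so both
    networks receive gradients from both terms when trained jointly."""
    return l1_synthesis_loss(synth_flair, real_flair) + lam * dice_loss(pred_mask, true_mask)
```

Because both terms feed one scalar loss, the imputation network also receives gradients from the segmentation error, which is what pushes it towards segmentation-friendly synthetic FLAIR.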
Test-time Unsupervised Domain Adaptation
Convolutional neural networks trained on publicly available medical imaging
datasets (source domain) rarely generalise to different scanners or acquisition
protocols (target domain). This motivates the active field of domain
adaptation. While some approaches to the problem require labelled data from the
target domain, others adopt an unsupervised approach to domain adaptation
(UDA). Evaluating UDA methods consists of measuring the model's ability to
generalise to unseen data in the target domain. In this work, we argue that
this is not as useful as adapting to the test set directly. We therefore
propose an evaluation framework where we perform test-time UDA on each subject
separately. We show that models adapted to a specific target subject from the
target domain outperform a domain adaptation method which has seen more data of
the target domain but not this specific target subject. This result supports
the thesis that unsupervised domain adaptation should be used at test time,
even if only a single target-domain subject is available.
Comment: Accepted at MICCAI 202
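A minimal sketch of the per-subject evaluation framework, assuming a toy linear-softmax segmenter and entropy minimisation as the unsupervised adaptation objective; the abstract does not specify the adaptation loss, so both choices are hypothetical:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over class logits."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    """Mean per-sample prediction entropy."""
    return float(np.mean(-np.sum(p * np.log(p + 1e-12), axis=1)))

def adapt_to_subject(W_source, x_subject, lr=0.05, steps=50):
    """Test-time UDA on a single subject: start from the source-trained
    weights and minimise prediction entropy on this subject's unlabelled
    samples only (hypothetical adaptation objective)."""
    W = W_source.copy()
    n = len(x_subject)
    for _ in range(steps):
        p = softmax(x_subject @ W)
        H = -np.sum(p * np.log(p + 1e-12), axis=1)
        # Analytic gradient of entropy w.r.t. logits: dH/dz_k = -p_k (log p_k + H)
        dz = -p * (np.log(p + 1e-12) + H[:, None]) / n
        W = W - lr * (x_subject.T @ dz)
    return W

# Evaluation framework: adapt independently for each target subject,
# then predict with that subject's own adapted weights:
#   for x in target_subjects:
#       W_s = adapt_to_subject(W_source, x)
#       predictions = softmax(x @ W_s)
```

The key point is that each target subject gets its own adapted model, instead of one model adapted to the target domain as a whole.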
Scribble-based Domain Adaptation via Co-segmentation
Although deep convolutional networks have reached state-of-the-art
performance in many medical image segmentation tasks, they have typically
demonstrated poor generalisation capability. To be able to generalise from one
domain (e.g. one imaging modality) to another, domain adaptation has to be
performed. While supervised methods may lead to good performance, they
require additional data to be fully annotated, which may not be an option in
practice. In contrast, unsupervised methods do not need additional annotations but are
usually unstable and hard to train. In this work, we propose a novel
weakly-supervised method. Instead of requiring detailed but time-consuming
annotations, scribbles on the target domain are used to perform domain
adaptation. This paper introduces a new formulation of domain adaptation based
on structured learning and co-segmentation. Our method is easy to train, thanks
to the introduction of a regularised loss. The framework is validated on
vestibular schwannoma segmentation (from T1 to T2 scans). Our proposed method
outperforms unsupervised approaches and achieves performance comparable to a
fully-supervised approach.
Comment: Accepted at MICCAI 202
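A minimal sketch of scribble supervision with a regularised loss, assuming a partial cross-entropy over scribbled pixels plus a total-variation smoothness term; the paper's actual regulariser and co-segmentation formulation are richer than this:

```python
import numpy as np

def scribble_loss(pred_probs, scribbles, lam=0.1):
    """Partial cross-entropy on scribbled pixels only, plus a smoothness
    regulariser over the whole prediction (assumed regulariser form).
    pred_probs: (H, W) foreground probability map.
    scribbles:  (H, W) int map, -1 = unlabelled, 0/1 = scribbled class."""
    mask = scribbles >= 0
    p = np.clip(pred_probs, 1e-7, 1 - 1e-7)
    y = scribbles[mask]
    # Cross-entropy evaluated only where a scribble provides a label:
    ce = -np.mean(y * np.log(p[mask]) + (1 - y) * np.log(1 - p[mask]))
    # Total-variation-style smoothness over all pixels:
    tv = np.mean(np.abs(np.diff(p, axis=0))) + np.mean(np.abs(np.diff(p, axis=1)))
    return ce + lam * tv
```

Unscribbled pixels (label -1) contribute only through the regulariser, which is what keeps training well-posed despite the sparse annotations.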
Multi-domain adaptation in brain MRI through paired consistency and adversarial learning
Supervised learning algorithms trained on medical images will often fail to generalise across changes in acquisition parameters. Recent work in domain adaptation addresses this challenge and successfully leverages labelled data in a source domain to perform well on an unlabelled target domain. Inspired by recent work in semi-supervised learning, we introduce a novel method to adapt from one source domain to n target domains (as long as there is paired data covering all domains). Our multi-domain adaptation method utilises a consistency loss combined with adversarial learning. We provide results on white matter hyperintensity lesion segmentation from brain MRIs, using the MICCAI 2017 challenge data as the source domain and two target domains. The proposed method significantly outperforms other domain adaptation baselines.
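The combination of paired consistency and adversarial learning can be sketched as one training objective; the MSE consistency term, the LSGAN-style adversarial term, and the weights `alpha`/`beta` below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def consistency_loss(p_src, p_tgt):
    """MSE between predictions on a paired scan from two domains."""
    return np.mean((p_src - p_tgt) ** 2)

def adversarial_feature_loss(d_out_target):
    """Generator-side adversarial term: push the discriminator's output on
    target-domain features towards the source label (assumed LSGAN form)."""
    return np.mean((d_out_target - 1.0) ** 2)

def multi_domain_loss(ce_source, pred_pairs, d_outs, alpha=1.0, beta=0.1):
    """Supervised loss on the source domain, plus paired consistency across
    all n target domains, plus the adversarial alignment term."""
    cons = sum(consistency_loss(a, b) for a, b in pred_pairs)
    adv = sum(adversarial_feature_loss(d) for d in d_outs)
    return ce_source + alpha * cons + beta * adv
```

When the paired predictions agree and the discriminator is fully fooled, both extra terms vanish and only the supervised source loss remains.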