Test-time Unsupervised Domain Adaptation
Convolutional neural networks trained on publicly available medical imaging
datasets (source domain) rarely generalise to different scanners or acquisition
protocols (target domain). This motivates the active field of domain
adaptation. While some approaches to the problem require labelled data from the
target domain, others adopt an unsupervised approach to domain adaptation
(UDA). Evaluating UDA methods consists of measuring the model's ability to
generalise to unseen data in the target domain. In this work, we argue that
this is not as useful as adapting to the test set directly. We therefore
propose an evaluation framework where we perform test-time UDA on each subject
separately. We show that models adapted to a specific target subject from the
target domain outperform a domain adaptation method which has seen more data of
the target domain but not this specific target subject. This result supports
the thesis that unsupervised domain adaptation should be used at test-time,
even if only using a single target-domain subject.
Comment: Accepted at MICCAI 202
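The abstract does not say which objective drives the per-subject adaptation. One common choice for unsupervised test-time adaptation is minimising the entropy of the model's own predictions on the unlabelled target data (the TENT-style objective). The sketch below is purely illustrative: the linear softmax classifier, the features, the learning rate, and the entropy objective are all assumptions, not the paper's method.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(W, X):
    """Average prediction entropy of a linear softmax classifier on X."""
    p = softmax(X @ W)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))

def adapt_to_subject(W, X, lr=0.1, steps=100, eps=1e-5):
    """Test-time UDA on one subject: gradient descent on prediction
    entropy, using finite-difference gradients for brevity."""
    W = W.copy()
    for _ in range(steps):
        g = np.zeros_like(W)
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                Wp = W.copy(); Wp[i, j] += eps
                Wm = W.copy(); Wm[i, j] -= eps
                g[i, j] = (mean_entropy(Wp, X) - mean_entropy(Wm, X)) / (2 * eps)
        W -= lr * g
    return W

# Toy setup: source-trained weights and one unlabelled target subject
# whose features are distribution-shifted (both hypothetical).
rng = np.random.default_rng(0)
W0 = rng.normal(size=(4, 3))                 # "source-trained" model
X_subject = rng.normal(size=(20, 4)) + 1.0   # shifted target-domain features
W1 = adapt_to_subject(W0, X_subject)
print(mean_entropy(W0, X_subject), mean_entropy(W1, X_subject))
```

Note that nothing here uses labels from the target subject; adaptation relies only on the model's confidence on that subject's own scans, which is what makes per-subject test-time UDA possible.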
Unsupervised Medical Image Translation with Adversarial Diffusion Models
Imputation of missing images via source-to-target modality translation can
improve diversity in medical imaging protocols. A pervasive approach for
synthesizing target images involves one-shot mapping through generative
adversarial networks (GAN). Yet, GAN models that implicitly characterize the
image distribution can suffer from limited sample fidelity. Here, we propose a
novel method based on adversarial diffusion modeling, SynDiff, for improved
performance in medical image translation. To capture a direct correlate of the
image distribution, SynDiff leverages a conditional diffusion process that
progressively maps noise and source images onto the target image. For fast and
accurate image sampling during inference, large diffusion steps are taken with
adversarial projections in the reverse diffusion direction. To enable training
on unpaired datasets, a cycle-consistent architecture is devised with coupled
diffusive and non-diffusive modules that bilaterally translate between two
modalities. Extensive assessments are reported on the utility of SynDiff
against competing GAN and diffusion models in multi-contrast MRI and MRI-CT
translation. Our demonstrations indicate that SynDiff offers quantitatively and
qualitatively superior performance against competing baselines.
Comment: M. Ozbey and O. Dalmaz contributed equally to this study.
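The sampling strategy the abstract describes, a few large reverse-diffusion steps, each driven by a learned conditional generator that proposes the clean target image, can be sketched as follows. The generator stub, noise schedule, and step spacing below are illustrative assumptions standing in for SynDiff's adversarially trained network:

```python
import numpy as np

def reverse_sample(x_T, source, predict_x0, t_steps, alphas_cum, rng):
    """Accelerated conditional reverse diffusion: at each large step the
    generator proposes the clean target image x0 given the noisy image,
    the source-modality image, and the timestep; the estimate is then
    re-noised down to the next, much smaller timestep."""
    x = x_T
    for i, t in enumerate(t_steps):
        x0_hat = predict_x0(x, source, t)   # generator's clean-image proposal
        if i + 1 < len(t_steps):
            a = alphas_cum[t_steps[i + 1]]
            noise = rng.normal(size=x.shape)
            # jump directly to the next timestep instead of stepping by 1
            x = np.sqrt(a) * x0_hat + np.sqrt(1.0 - a) * noise
        else:
            x = x0_hat                      # final step returns the estimate
    return x

# --- toy setup: everything below is a hypothetical stand-in, not the model ---
rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 2e-2, T)          # assumed linear noise schedule
alphas_cum = np.cumprod(1.0 - betas)

def predict_x0_stub(x_t, source, t):
    # Placeholder for the conditional generator G(x_t, source, t); in SynDiff
    # this network is trained with an adversarial loss on unpaired data.
    return 0.5 * source + 0.5 * np.tanh(x_t)

source = rng.normal(size=(32, 32))          # source-modality image (e.g. T1 MRI)
x_T = rng.normal(size=(32, 32))             # pure noise at the final timestep
t_steps = [999, 749, 499, 249]              # 4 large steps instead of 1000 small ones
target = reverse_sample(x_T, source, predict_x0_stub, t_steps, alphas_cum, rng)
```

Taking few large steps is what makes inference fast; the adversarial training of the generator is what keeps each large jump accurate, since a plain denoiser is only reliable for small steps.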