MedGAN: Medical Image Translation using GANs
Image-to-image translation is considered a new frontier in the field of
medical image analysis, with numerous potential applications. However, a large
portion of recent approaches offers individualized solutions based on
specialized task-specific architectures or require refinement through
non-end-to-end training. In this paper, we propose a new framework, named
MedGAN, for medical image-to-image translation which operates on the image
level in an end-to-end manner. MedGAN builds upon recent advances in the field
of generative adversarial networks (GANs) by merging the adversarial framework
with a new combination of non-adversarial losses. We utilize a discriminator
network as a trainable feature extractor which penalizes the discrepancy
between the translated medical images and the desired modalities. Moreover,
style-transfer losses are utilized to match the textures and fine-structures of
the desired target images to the translated images. Additionally, we present a
new generator architecture, titled CasNet, which enhances the sharpness of the
translated medical outputs through progressive refinement via encoder-decoder
pairs. Without any application-specific modifications, we apply MedGAN on three
different tasks: PET-CT translation, correction of MR motion artefacts and PET
image denoising. Perceptual analysis by radiologists and quantitative
evaluations illustrate that MedGAN outperforms other existing translation
approaches.
Comment: 16 pages, 8 figures
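The abstract's combination of a discriminator used as a trainable feature extractor (a perceptual loss) with style-transfer losses over textures can be sketched as follows. This is not the authors' implementation; it is a minimal numpy illustration of the two loss terms, with Gram matrices standing in for the style statistics and the feature lists standing in for discriminator activations:

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H, W) feature map; the Gram matrix of channel activations
    # captures texture / fine-structure statistics, as in style transfer
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def perceptual_style_loss(feats_translated, feats_target, lam_style=1.0):
    # feats_*: lists of (C, H, W) feature maps, e.g. from intermediate
    # discriminator layers acting as a trainable feature extractor.
    # Perceptual term: L1 discrepancy between raw features.
    perc = sum(np.abs(ft - fr).mean()
               for ft, fr in zip(feats_translated, feats_target))
    # Style term: discrepancy between Gram matrices of the same features.
    style = sum(np.abs(gram_matrix(ft) - gram_matrix(fr)).mean()
                for ft, fr in zip(feats_translated, feats_target))
    return perc + lam_style * style
```

The weighting `lam_style` and the choice of layers are hypothetical here; the paper combines these non-adversarial terms with the usual adversarial loss.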
Whole-body PET image denoising for reduced acquisition time
This paper evaluates the performance of supervised and unsupervised deep
learning models for denoising positron emission tomography (PET) images in the
presence of reduced acquisition times. Our experiments consider 212 studies
(56908 images), and evaluate the models using 2D (RMSE, SSIM) and 3D (SUVpeak
and SUVmax error for the regions of interest) metrics. It was shown that, in
contrast to previous studies, supervised models (ResNet, Unet, SwinIR)
outperform unsupervised models (pix2pix GAN and CycleGAN with ResNet backbone
and various auxiliary losses) in the reconstruction of 2D PET images. Moreover,
a hybrid approach, supervised CycleGAN, shows the best results in SUVmax
estimation for denoised images, with an estimation error comparable to the PET
reproducibility error.
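The 2D and 3D metrics named above can be sketched in a few lines. This is a simplified illustration, not the paper's evaluation code: `global_ssim` uses a single window over the whole image rather than the usual sliding-window SSIM, and `suvmax_error` assumes images already calibrated in SUV units with a binary region-of-interest mask:

```python
import numpy as np

def rmse(x, y):
    # root-mean-square error between two images
    return float(np.sqrt(np.mean((x - y) ** 2)))

def global_ssim(x, y, L=1.0):
    # simplified single-window SSIM (L = dynamic range of the images)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def suvmax_error(denoised, reference, roi_mask):
    # relative SUVmax error inside a region of interest (3D metric)
    ref_max = reference[roi_mask].max()
    return float(abs(denoised[roi_mask].max() - ref_max) / ref_max)
```

A perfect denoiser would give `rmse == 0`, `global_ssim == 1`, and zero SUVmax error on every ROI.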
A comparative study between paired and unpaired Image Quality Assessment in Low-Dose CT Denoising
The current deep learning approaches for low-dose CT denoising can be divided
into paired and unpaired methods. The former involves the use of well-paired
datasets, whilst the latter relaxes this constraint. The large availability of
unpaired datasets has raised interest in unpaired denoising strategies, which
in turn require robust evaluation techniques that go beyond qualitative
assessment. To this end, quantitative image quality assessment scores can be
used, which we divide into two categories: paired and unpaired measures.
However, the interpretation of unpaired metrics is not straightforward, not
least because their consistency with paired metrics has not been fully
investigated. To cope with this limitation, in this work we consider 15
paired and unpaired scores, which we applied to assess the performance of
low-dose CT denoising. We perform an in-depth statistical analysis that not
only studies the correlation between paired and unpaired metrics but also
within each category. This brings out useful guidelines that can help
researchers and practitioners select the right measure for their applications.
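The correlation analysis between metric categories that the abstract describes is commonly done with a rank correlation over the scores each metric assigns to a set of denoised outputs. As a hedged sketch (not the paper's statistical pipeline), a tie-free Spearman rank correlation between two score vectors can be written as:

```python
import numpy as np

def spearman_rho(scores_a, scores_b):
    # Spearman rank correlation between two metric score vectors,
    # e.g. a paired measure vs. an unpaired measure evaluated on the
    # same set of denoised CT images (no tie handling in this sketch)
    ra = np.argsort(np.argsort(scores_a)).astype(float)
    rb = np.argsort(np.argsort(scores_b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))
```

A value near +1 (or -1, for metrics with opposite polarity) would suggest the unpaired measure ranks denoisers consistently with the paired one.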
Unsupervised Medical Image Translation with Adversarial Diffusion Models
Imputation of missing images via source-to-target modality translation can
improve diversity in medical imaging protocols. A pervasive approach for
synthesizing target images involves one-shot mapping through generative
adversarial networks (GAN). Yet, GAN models that implicitly characterize the
image distribution can suffer from limited sample fidelity. Here, we propose a
novel method based on adversarial diffusion modeling, SynDiff, for improved
performance in medical image translation. To capture a direct correlate of the
image distribution, SynDiff leverages a conditional diffusion process that
progressively maps noise and source images onto the target image. For fast and
accurate image sampling during inference, large diffusion steps are taken with
adversarial projections in the reverse diffusion direction. To enable training
on unpaired datasets, a cycle-consistent architecture is devised with coupled
diffusive and non-diffusive modules that bilaterally translate between two
modalities. Extensive assessments are reported on the utility of SynDiff
against competing GAN and diffusion models in multi-contrast MRI and MRI-CT
translation. Our demonstrations indicate that SynDiff offers quantitatively and
qualitatively superior performance against competing baselines.
Comment: M. Ozbey and O. Dalmaz contributed equally to this study
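The large reverse diffusion steps described above can be illustrated schematically. This is a generic numpy sketch of a variance-preserving forward process and a DDIM-style large reverse step, not SynDiff's actual sampler: in SynDiff the clean estimate `x0_hat` comes from an adversarially trained generator conditioned on the source-modality image, whereas here it is simply passed in as an argument:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, t, T=1000):
    # variance-preserving forward process:
    # x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps
    a_bar = np.prod(1 - np.linspace(1e-4, 0.02, T)[:t])
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(a_bar) * x0 + np.sqrt(1 - a_bar) * eps, a_bar

def reverse_step(x_t, x0_hat, a_bar_t, a_bar_prev):
    # one large reverse step: recover the implied noise from the clean
    # estimate x0_hat (in SynDiff, a conditional adversarial generator's
    # output), then jump directly to the earlier timestep (eta = 0)
    eps_hat = (x_t - np.sqrt(a_bar_t) * x0_hat) / np.sqrt(1 - a_bar_t)
    return np.sqrt(a_bar_prev) * x0_hat + np.sqrt(1 - a_bar_prev) * eps_hat
```

Taking few such steps with a strong `x0_hat` predictor is what makes sampling fast; the adversarial projection is what lets each step be large without losing fidelity.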