3 research outputs found
Adaptive Diffusion Priors for Accelerated MRI Reconstruction
Deep MRI reconstruction is commonly performed with conditional models that
de-alias undersampled acquisitions to recover images consistent with
fully-sampled data. Since conditional models are trained with knowledge of the
imaging operator, they can show poor generalization across variable operators.
Unconditional models instead learn generative image priors decoupled from the
imaging operator to improve reliability against domain shifts. Recent diffusion
models are particularly promising given their high sample fidelity.
Nevertheless, inference with a static image prior can perform suboptimally.
Here we propose the first adaptive diffusion prior for MRI reconstruction,
AdaDiff, to improve performance and reliability against domain shifts. AdaDiff
leverages an efficient diffusion prior trained via adversarial mapping over
large reverse diffusion steps. A two-phase reconstruction is executed following
training: a rapid-diffusion phase that produces an initial reconstruction with
the trained prior, and an adaptation phase that further refines the result by
updating the prior to minimize reconstruction loss on acquired data.
Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff
outperforms competing conditional and unconditional methods under domain
shifts, and achieves superior or on-par within-domain performance.
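The two-phase inference described above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: a coarse zero-filled reconstruction stands in for the rapid-diffusion phase, and gradient steps on a data-consistency loss stand in for the adaptation phase's prior updates; the 1-D signal, mask, and step size are all toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D analogue of accelerated MRI: keep ~half of k-space.
x_true = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))
mask = rng.random(64) < 0.5
y = mask * np.fft.fft(x_true)            # acquired (undersampled) data

# Phase 1 (rapid diffusion): a coarse initial reconstruction stands in
# for sampling from the trained prior in a few large reverse steps.
x = np.fft.ifft(y).real

# Phase 2 (adaptation): gradient steps on the data-consistency loss
# ||mask * F(x) - y||^2, standing in for updating the prior so that
# its output agrees with the acquired data of the test subject.
for _ in range(300):
    residual = mask * np.fft.fft(x) - y
    x = x - 0.5 * np.fft.ifft(mask * residual).real

final_loss = np.linalg.norm(mask * np.fft.fft(x) - y)
```

The adaptation loop drives the reconstruction loss on acquired k-space samples toward zero while leaving unacquired frequencies to the prior (here, the initial estimate).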
Unsupervised Medical Image Translation with Adversarial Diffusion Models
Imputation of missing images via source-to-target modality translation can
improve diversity in medical imaging protocols. A pervasive approach for
synthesizing target images involves one-shot mapping through generative
adversarial networks (GAN). Yet, GAN models that implicitly characterize the
image distribution can suffer from limited sample fidelity. Here, we propose a
novel method based on adversarial diffusion modeling, SynDiff, for improved
performance in medical image translation. To capture a direct correlate of the
image distribution, SynDiff leverages a conditional diffusion process that
progressively maps noise and source images onto the target image. For fast and
accurate image sampling during inference, large diffusion steps are taken with
adversarial projections in the reverse diffusion direction. To enable training
on unpaired datasets, a cycle-consistent architecture is devised with coupled
diffusive and non-diffusive modules that bilaterally translate between two
modalities. Extensive assessments are reported on the utility of SynDiff
against competing GAN and diffusion models in multi-contrast MRI and MRI-CT
translation. Our demonstrations indicate that SynDiff offers quantitatively and
qualitatively superior performance against competing baselines.
Comment: M. Ozbey and O. Dalmaz contributed equally to this study.
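The large-diffusion-step idea in the abstract above can be sketched in a few lines of numpy. The sketch shows only the scheduling arithmetic: because the forward process has a closed form at any noise level, a reverse pass can visit a few widely spaced steps instead of all of them, which is what makes room for an adversarial generator per step. The step counts, beta schedule, and 1-D "image" are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

T, stride = 1000, 250                 # toy values: 1000 levels, 4 reverse steps
betas = np.linspace(1e-4, 2e-2, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t):
    """Closed-form forward jump to noise level t (valid at any stride)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.standard_normal(16)          # stand-in for a target-modality image

# With large steps, the reverse (adversarially trained) generator is
# evaluated only T // stride times instead of T times.
schedule = list(range(T - 1, -1, -stride))   # -> [999, 749, 499, 249]
noisy = [q_sample(x0, t) for t in schedule]
```

The reverse projections themselves, and the coupled diffusive/non-diffusive modules that enforce cycle consistency on unpaired data, would replace `q_sample` with learned networks; they are omitted here.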
Learning Fourier-Constrained Diffusion Bridges for MRI Reconstruction
Recent years have witnessed a surge in deep generative models for accelerated
MRI reconstruction. Diffusion priors in particular have gained traction with
their superior representational fidelity and diversity. Instead of the target
transformation from undersampled to fully-sampled data, common diffusion priors
are trained to learn a multi-step transformation from Gaussian noise onto
fully-sampled data. During inference, data-fidelity projections are injected in
between reverse diffusion steps to reach a compromise solution within the span
of both the diffusion prior and the imaging operator. Unfortunately, suboptimal
solutions can arise as the normality assumption of the diffusion prior causes
divergence between learned and target transformations. To address this
limitation, here we introduce the first diffusion bridge for accelerated MRI
reconstruction. The proposed Fourier-constrained diffusion bridge (FDB)
leverages a generalized process to transform between undersampled and
fully-sampled data via random noise addition and random frequency removal as
degradation operators. Unlike common diffusion priors that use an asymptotic
endpoint based on Gaussian noise, FDB captures a transformation between finite
endpoints where the initial endpoint is based on moderate degradation of
fully-sampled data. Demonstrations on brain MRI indicate that FDB outperforms
state-of-the-art reconstruction methods including conventional diffusion
priors.
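The degradation process described above, noise addition combined with random frequency removal between finite endpoints, can be sketched as follows. This is a hypothetical illustration, not FDB's actual operator: the `degrade` function, its parameters, and the severity schedule are all assumptions chosen to mirror the abstract's description.

```python
import numpy as np

rng = np.random.default_rng(2)

def degrade(x, noise_std, keep_frac):
    """One FDB-style degradation: random frequency removal plus noise."""
    k_space = np.fft.fft2(x)
    keep = rng.random(k_space.shape) < keep_frac   # random frequency mask
    x_deg = np.fft.ifft2(k_space * keep).real
    return x_deg + noise_std * rng.standard_normal(x.shape)

x_full = rng.standard_normal((32, 32))   # stand-in for a fully-sampled image

# Finite-endpoint schedule: the bridge's initial endpoint is a
# *moderately* degraded copy of the fully-sampled image, not an
# asymptotic pure-noise endpoint as in common diffusion priors.
severities = np.linspace(0.1, 0.6, 6)    # hypothetical severity levels
x_start = degrade(x_full, noise_std=severities[-1],
                  keep_frac=1.0 - severities[-1])
```

A learned bridge would then be trained to invert this degradation from `x_start` back to the fully-sampled image, rather than mapping all the way from Gaussian noise.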