Adaptive Diffusion Priors for Accelerated MRI Reconstruction
Deep MRI reconstruction is commonly performed with conditional models that
de-alias undersampled acquisitions to recover images consistent with
fully-sampled data. Since conditional models are trained with knowledge of the
imaging operator, they can show poor generalization across variable operators.
Unconditional models instead learn generative image priors decoupled from the
imaging operator to improve reliability against domain shifts. Recent diffusion
models are particularly promising given their high sample fidelity.
Nevertheless, inference with a static image prior can perform suboptimally.
Here we propose the first adaptive diffusion prior for MRI reconstruction,
AdaDiff, to improve performance and reliability against domain shifts. AdaDiff
leverages an efficient diffusion prior trained via adversarial mapping over
large reverse diffusion steps. A two-phase reconstruction is executed following
training: a rapid-diffusion phase that produces an initial reconstruction with
the trained prior, and an adaptation phase that further refines the result by
updating the prior to minimize reconstruction loss on acquired data.
Demonstrations on multi-contrast brain MRI indicate that AdaDiff outperforms
competing conditional and unconditional methods under domain shifts, and
achieves superior or on-par within-domain performance.
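The two-phase procedure can be illustrated with a toy surrogate. The sketch below stands in for the adaptation phase only: instead of updating network weights of a diffusion prior, it refines an image estimate by iteratively shrinking the reconstruction loss on acquired k-space data. All names (`mask`, `kspace`, `adapt`), the 50% random sampling pattern, and the simple fixed-step update are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "fully sampled" image, a random 50% undersampling mask,
# and the acquired (undersampled) k-space measurements.
img = rng.standard_normal((32, 32))
mask = rng.random((32, 32)) < 0.5
kspace = mask * np.fft.fft2(img)

def dc_loss(x):
    """Reconstruction loss on acquired data: squared k-space residual
    at the sampled frequencies only."""
    return np.sum(np.abs(mask * np.fft.fft2(x) - kspace) ** 2)

def adapt(x, lr=0.2, steps=25):
    """Adaptation-phase surrogate: fixed-step updates along the (scaled)
    gradient of dc_loss, starting from an initial reconstruction x
    (in AdaDiff, the rapid-diffusion output)."""
    for _ in range(steps):
        # The k-space residual at sampled frequencies, mapped back to
        # the image domain, drives each refinement step.
        x = x - lr * np.fft.ifft2(mask * np.fft.fft2(x) - kspace)
    return x
```

Because the loss is only evaluated at acquired frequencies, the update leaves unsampled frequencies untouched; in the full method, the adapted diffusion prior is what fills those in.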
Learning Fourier-Constrained Diffusion Bridges for MRI Reconstruction
Recent years have witnessed a surge in deep generative models for accelerated
MRI reconstruction. Diffusion priors in particular have gained traction with
their superior representational fidelity and diversity. Rather than directly
learning the target transformation from undersampled to fully-sampled data,
common diffusion priors learn a multi-step transformation from Gaussian noise
to fully-sampled data. During inference, data-fidelity projections are injected in
between reverse diffusion steps to reach a compromise solution within the span
of both the diffusion prior and the imaging operator. Unfortunately, suboptimal
solutions can arise as the normality assumption of the diffusion prior causes
divergence between learned and target transformations. To address this
limitation, here we introduce the first diffusion bridge for accelerated MRI
reconstruction. The proposed Fourier-constrained diffusion bridge (FDB)
leverages a generalized process to transform between undersampled and
fully-sampled data via random noise addition and random frequency removal as
degradation operators. Unlike common diffusion priors that use an asymptotic
endpoint based on Gaussian noise, FDB captures a transformation between finite
endpoints where the initial endpoint is based on moderate degradation of
fully-sampled data. Demonstrations on brain MRI indicate that FDB outperforms
state-of-the-art reconstruction methods, including conventional diffusion
priors.
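A single degradation step of the kind the abstract describes, combining random noise addition with random frequency removal, can be sketched as follows. The function name `degrade`, the uniform frequency-drop pattern, and the `freq_drop`/`sigma` parameters are illustrative assumptions; FDB's actual degradation schedule and operators are defined in the paper.

```python
import numpy as np

def degrade(x, freq_drop, sigma, rng):
    """One illustrative FDB-style degradation step: zero out a random
    fraction of k-space coefficients (frequency removal), then add
    Gaussian noise in the image domain (noise addition)."""
    k = np.fft.fft2(x)
    keep = rng.random(k.shape) >= freq_drop   # True = frequency retained
    x_deg = np.real(np.fft.ifft2(k * keep))
    return x_deg + sigma * rng.standard_normal(x.shape)
```

With `freq_drop` and `sigma` held moderate at the initial endpoint, the forward process stays between finite endpoints rather than terminating in pure Gaussian noise, which is the contrast the abstract draws against conventional diffusion priors.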