25 research outputs found
A plug-and-play synthetic data deep learning for undersampled magnetic resonance image reconstruction
Magnetic resonance imaging (MRI) plays an important role in modern medical
diagnostics but suffers from prolonged scan times. Current deep learning
methods for undersampled MRI reconstruction perform well at image de-aliasing
when tailored to a specific k-space undersampling scenario, but reconfiguring
a deep network every time the sampling setting changes is cumbersome. In this
work, we propose a deep plug-and-play method for undersampled MRI
reconstruction that adapts effectively to different sampling settings.
different sampling settings. Specifically, the image de-aliasing prior is first
learned by a deep denoiser trained to remove general white Gaussian noise from
synthetic data. Then the learned deep denoiser is plugged into an iterative
algorithm for image reconstruction. Results on in vivo data demonstrate that
the proposed method delivers accurate and robust accelerated image
reconstruction under different undersampling patterns and sampling rates, both
visually and quantitatively.

Comment: 5 pages, 3 figures
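The scheme described above can be sketched as follows. This is a toy illustration, not the paper's implementation: `box_denoise` is a stand-in moving-average denoiser replacing the learned deep denoiser, and hard data consistency on the acquired k-space samples is one common choice for the iterative step.

```python
import numpy as np

def box_denoise(x, w=3):
    """Toy denoiser: w x w moving average (stand-in for a deep denoiser)."""
    pad = np.pad(x, w // 2, mode="wrap")
    out = np.zeros_like(x)
    for i in range(w):
        for j in range(w):
            out += pad[i:i + x.shape[0], j:j + x.shape[1]]
    return out / (w * w)

def pnp_mri_recon(kspace, mask, denoise, n_iter=20):
    """Plug-and-play reconstruction: alternate a denoising (prior) step
    with hard data consistency on the measured k-space samples."""
    x = np.fft.ifft2(kspace)                 # zero-filled initialization
    for _ in range(n_iter):
        x = denoise(x)                       # prior step (plugged-in denoiser)
        k = np.fft.fft2(x)
        k[mask] = kspace[mask]               # enforce acquired samples
        x = np.fft.ifft2(k)
    return x

# Toy demonstration on a random "image" with ~40% k-space sampling.
rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
mask = rng.random((32, 32)) < 0.4
kspace = np.where(mask, np.fft.fft2(img), 0)
rec = pnp_mri_recon(kspace, mask, box_denoise)
```

Because the iteration ends with the data-consistency projection, the reconstruction agrees exactly with the measurements on the sampled k-space locations, which is what makes the same denoiser reusable across masks and sampling rates.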
A Variational Inequality Model for Learning Neural Networks
Neural networks have become ubiquitous tools for solving signal and image
processing problems, and they often outperform standard approaches.
Nevertheless, training neural networks is a challenging task in many
applications. The prevalent training procedure consists of minimizing highly
non-convex objectives based on data sets of huge dimension. In this context,
current methodologies are not guaranteed to produce global solutions. We
present an alternative approach which foregoes the optimization framework and
adopts a variational inequality formalism. The associated algorithm guarantees
convergence of the iterates to a true solution of the variational inequality
and it possesses an efficient block-iterative structure. A numerical
application is presented.
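As a concrete (and hypothetical) illustration of the variational inequality formalism, not the block-iterative algorithm of the paper, the classic projected fixed-point iteration solves VI(F, C) when F is strongly monotone and Lipschitz:

```python
import numpy as np

def solve_vi(F, project, x0, gamma=0.1, n_iter=500):
    """Projected fixed-point iteration x <- P_C(x - gamma * F(x)) for the
    variational inequality VI(F, C): find x* in C such that
    <F(x*), x - x*> >= 0 for all x in C. The iteration is a contraction
    (hence converges to the unique solution) when F is strongly monotone
    and Lipschitz and gamma is small enough."""
    x = x0
    for _ in range(n_iter):
        x = project(x - gamma * F(x))
    return x

# Toy instance: F(x) = Qx - c with Q positive definite, C the nonnegative
# orthant; this VI is then a linear complementarity problem.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, -1.0])

x_star = solve_vi(lambda x: Q @ x - c, lambda x: np.maximum(x, 0.0),
                  np.zeros(2))
# The solution satisfies x* >= 0, F(x*) >= 0, and x* . F(x*) = 0.
```

Unlike minimizing a non-convex training loss, the fixed point of this iteration is a true solution of the variational inequality, which is the kind of guarantee the abstract contrasts with standard training.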
Image Restoration using Plug-and-Play CNN MAP Denoisers
Plug-and-play denoisers can be used to perform generic image restoration
tasks independent of the degradation type. These methods build on the fact that
the Maximum a Posteriori (MAP) optimization can be solved using smaller
sub-problems, including a MAP denoising optimization. We present the first
end-to-end approach to MAP estimation for image denoising using deep neural
networks. We show that our method is guaranteed to minimize the MAP denoising
objective, which is then used in an optimization algorithm for generic image
restoration. We provide theoretical analysis of our approach and show the
quantitative performance of our method in several experiments. Our experimental
results show that the proposed method can achieve 70x faster performance
compared to the state-of-the-art, while maintaining the theoretical perspective
of MAP.Comment: Code and models available at
https://github.com/DawyD/cnn-map-denoiser . Accepted for publication in
VISAPP 202
On the Contractivity of Plug-and-Play Operators
In plug-and-play (PnP) regularization, the proximal operator in algorithms
such as ISTA and ADMM is replaced by a powerful denoiser. This formal
substitution works surprisingly well in practice. In fact, PnP has been shown
to give state-of-the-art results for various imaging applications. The
empirical success of PnP has motivated researchers to understand its
theoretical underpinnings and, in particular, its convergence. It was shown in
prior work that for kernel denoisers such as the nonlocal means, PnP-ISTA
provably converges under some strong assumptions on the forward model. The
present work is motivated by the following questions: Can we relax the
assumptions on the forward model? Can the convergence analysis be extended to
PnP-ADMM? Can we estimate the convergence rate? In this letter, we resolve
these questions using the contraction mapping theorem: (i) for symmetric
denoisers, we show that (under mild conditions) PnP-ISTA and PnP-ADMM exhibit
linear convergence; and (ii) for kernel denoisers, we show that PnP-ISTA and
PnP-ADMM converge linearly for image inpainting. We validate our theoretical
findings using reconstruction experiments.

Comment: Errors in the proof of Lemma 1 and the statement of Theorem 2 were
identified after publication; these have been rectified in the revised
version (v2).