Plug-and-Play Methods Provably Converge with Properly Trained Denoisers
Plug-and-play (PnP) is a non-convex framework that integrates modern
denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or
other proximal algorithms. An advantage of PnP is that one can use pre-trained
denoisers when there is insufficient data for end-to-end training. Although
PnP has recently been studied extensively with great empirical success,
theoretical analysis addressing even the most basic question of convergence has
been insufficient. In this paper, we theoretically establish convergence of
PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain
Lipschitz condition on the denoisers. We then propose real spectral
normalization, a technique for training deep learning-based denoisers to
satisfy the proposed Lipschitz condition. Finally, we present experimental
results validating the theory.
Comment: Published in the International Conference on Machine Learning, 201
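The core idea behind the spectral normalization proposed in this paper can be illustrated with a minimal sketch. Note the caveats: the paper's "real" spectral normalization operates on convolutional layers during training of a deep denoiser, whereas this sketch only normalizes a single dense weight matrix after the fact; the function name and the power-iteration setup here are illustrative, not the paper's implementation.

```python
import numpy as np

def spectral_normalize(W, target=1.0, n_iter=100, seed=0):
    """Estimate the spectral norm ||W||_2 by power iteration and rescale W
    so that, as a linear map, its Lipschitz constant is at most `target`.
    This is the basic mechanism behind spectral normalization; the paper's
    "real" variant applies the same idea to convolutional layers during
    training so the overall denoiser satisfies the Lipschitz condition."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated largest singular value (<= true value)
    return W if sigma <= target else W * (target / sigma)

W = np.random.default_rng(1).standard_normal((20, 20))
W_sn = spectral_normalize(W)  # largest singular value now ~<= 1
```

A constrained linear layer like this is the building block; composing such layers with 1-Lipschitz activations keeps the whole network's Lipschitz constant under control.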
Provable Convergence of Plug-and-Play Priors with MMSE denoisers
Plug-and-play priors (PnP) is a methodology for regularized image
reconstruction that specifies the prior through an image denoiser. While PnP
algorithms are well understood for denoisers performing maximum a posteriori
probability (MAP) estimation, they have not been analyzed for minimum mean
squared error (MMSE) denoisers. This letter addresses this gap by establishing
the first theoretical convergence result for the iterative
shrinkage/thresholding algorithm (ISTA) variant of PnP for MMSE denoisers. We
show that the iterates produced by PnP-ISTA with an MMSE denoiser converge to a
stationary point of some global cost function. We validate our analysis on
sparse signal recovery in compressive sensing by comparing two types of
denoisers, namely the exact MMSE denoiser and the approximate MMSE denoiser
obtained by training a deep neural net.
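The PnP-ISTA iteration analyzed here can be sketched on a toy compressive-sensing problem. To keep the example self-contained, a simple soft-threshold operator stands in for the MMSE denoiser (an assumption for illustration only; the paper's results concern exact or learned MMSE denoisers), and all names and problem sizes are hypothetical.

```python
import numpy as np

def pnp_ista(grad_f, denoiser, x0, step, n_iter=300):
    """PnP-ISTA: x_{k+1} = D(x_k - step * grad_f(x_k)).
    In the paper D is an MMSE denoiser; here a soft-threshold
    operator plays that role for illustration."""
    x = x0.copy()
    for _ in range(n_iter):
        x = denoiser(x - step * grad_f(x))
    return x

# Toy compressive-sensing problem: b = A @ x_true + noise, x_true sparse.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(30)

grad_f = lambda x: A.T @ (A @ x - b)  # gradient of 0.5 * ||Ax - b||^2
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
denoiser = lambda v: soft(v, 0.02)    # stand-in "denoiser"

x_hat = pnp_ista(grad_f, denoiser, np.zeros(60), step=0.1)
```

With this particular denoiser the iteration reduces to classical ISTA for the LASSO, so the sparse support is recovered; the letter's contribution is showing that convergence to a stationary point of a global cost still holds when D is a (generally non-proximal) MMSE denoiser.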
Learning-based Optimization for Signal and Image Processing
Incorporating machine learning techniques into optimization problems and solvers has attracted increasing attention. Given a particular type of optimization problem that needs to be solved repeatedly, machine learning techniques can exploit features shared by that category of problems and yield algorithms with excellent performance. This thesis deals with algorithms and convergence analysis in learning-based optimization in three aspects: learning dictionaries, learning optimization solvers, and learning regularizers.

Learning dictionaries for sparse coding is significant for signal processing. Convolutional sparse coding is a form of sparse coding with a structured, translation-invariant dictionary. Most convolutional dictionary learning algorithms to date operate in batch mode, requiring simultaneous access to all training images during the learning process, which results in very high memory usage and severely limits the training data size that can be used. I proposed two online convolutional dictionary learning algorithms that offer far better scaling of memory and computational cost than batch methods, and provided a rigorous theoretical analysis of these methods.

Learning fast solvers for optimization is a rising research topic. In recent years, unfolding iterative algorithms as neural networks has become an empirical success for solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. I studied unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery and established its convergence. Based on the properties of the parameters required for convergence, the model can be significantly simplified and, consequently, has a much lower training cost and better recovery performance.

Learning regularizers or priors improves the performance of optimization solvers, especially for signal and image processing tasks.
Plug-and-play (PnP) is a non-convex framework that integrates modern priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. Although PnP has recently been studied extensively with great empirical success, theoretical analysis addressing even the most basic question of convergence has been insufficient. In this thesis, the theoretical convergence of PnP-FBS and PnP-ADMM was established, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. Furthermore, real spectral normalization was proposed for training deep learning-based denoisers to satisfy the proposed Lipschitz condition.
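The unfolding idea summarized in this thesis abstract can be sketched as follows: ISTA is unrolled into a fixed number of "layers", each with its own threshold. In learned variants such as LISTA, these per-layer thresholds (and possibly the matrices) are trained; here they are fixed at classical ISTA values, so the network simply reproduces ISTA. The function names and problem sizes are illustrative, not from the thesis.

```python
import numpy as np

def soft(x, theta):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(b, A, thetas, step):
    """ISTA unrolled into len(thetas) layers:
        x <- soft(x - step * A^T (A x - b), theta_k).
    Learned variants train the per-layer thresholds theta_k (and
    possibly weight matrices); with fixed thetas this is plain ISTA."""
    x = np.zeros(A.shape[1])
    for theta in thetas:
        x = soft(x - step * A.T @ (A @ x - b), theta)
    return x

# Toy sparse recovery: b = A @ x_true, x_true 3-sparse.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)
x_true = np.zeros(80)
x_true[[5, 20, 60]] = [1.5, -2.0, 1.0]
b = A @ x_true

x_hat = unfolded_ista(b, A, thetas=[0.02] * 200, step=0.1)
```

The convergence analysis referred to in the abstract concerns what the trained parameters must look like for the unrolled network to converge; those structural properties are what allow the model to be simplified.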
On the Construction of Averaged Deep Denoisers for Image Regularization
Plug-and-Play (PnP) and Regularization by Denoising (RED) are recent
paradigms for image reconstruction that can leverage the power of modern
denoisers for image regularization. In particular, these algorithms have been
shown to deliver state-of-the-art reconstructions using CNN denoisers. Since
the regularization is performed in an ad-hoc manner in PnP and RED,
understanding their convergence has been an active research area. Recently, it
was observed in many works that iterate convergence of PnP and RED can be
guaranteed if the denoiser is averaged or nonexpansive. However, integrating
nonexpansivity with gradient-based learning is a challenging task -- checking
nonexpansivity is known to be computationally intractable. Using numerical
examples, we show that existing CNN denoisers violate the nonexpansive property
and can cause the PnP iterations to diverge. In fact, algorithms for training
nonexpansive denoisers either cannot guarantee nonexpansivity of the final
denoiser or are computationally intensive. In this work, we propose to
construct averaged (contractive) image denoisers by unfolding ISTA and ADMM
iterations applied to wavelet denoising and demonstrate that their
regularization capacity for PnP and RED can be matched with CNN denoisers. To
the best of our knowledge, this is the first work to propose a simple framework
for training provably averaged (contractive) denoisers using unfolding
networks.
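A much-simplified version of the construction above can be checked numerically. The paper unfolds ISTA/ADMM iterations applied to wavelet denoising; as an illustration (assumption: a one-step orthonormal Haar soft-threshold rather than the paper's full unfolded network), a denoiser that soft-thresholds in an orthonormal basis is provably nonexpansive, which is exactly the property CNN denoisers are shown to violate.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform of size n (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    top = np.kron(H, [1.0, 1.0])                 # averaging rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # detail rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

W = haar_matrix(8)
denoiser = lambda v: W.T @ soft(W @ v, 0.1)  # threshold in the Haar domain

# Soft-thresholding is firmly nonexpansive and W is orthonormal (an
# isometry), so the denoiser satisfies ||D(u) - D(v)|| <= ||u - v||.
rng = np.random.default_rng(1)
u, v = rng.standard_normal(8), rng.standard_normal(8)
lhs = np.linalg.norm(denoiser(u) - denoiser(v))
rhs = np.linalg.norm(u - v)
```

Unfolding several such iterations, as the paper does, preserves this kind of averagedness by construction, which is what makes the resulting PnP/RED iterations provably convergent.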
Provably Convergent Plug-and-Play Quasi-Newton Methods
Plug-and-Play (PnP) methods are a class of efficient iterative methods that aim to combine data fidelity terms and deep denoisers using classical optimization algorithms, such as ISTA or ADMM, with applications in inverse problems and imaging. Provable PnP methods are a subclass of PnP methods with convergence guarantees, such as fixed point convergence or convergence to critical points of some energy function. Many existing provable PnP methods impose heavy restrictions on the denoiser or fidelity function, such as non-expansiveness or strict convexity, respectively. In this work, we propose a novel algorithmic approach incorporating quasi-Newton steps into a provable PnP framework based on proximal denoisers, resulting in greatly accelerated convergence while retaining light assumptions on the denoiser. By characterizing the denoiser as the proximal operator of a weakly convex function, we show that the fixed points of the proposed quasi-Newton PnP algorithm are critical points of a weakly convex function. Numerical experiments on image deblurring and super-resolution demonstrate 2--8x faster convergence as compared to other provable PnP methods with similar reconstruction quality.