26 research outputs found

    Plug-and-Play Methods Provably Converge with Properly Trained Denoisers

    Full text link
    Plug-and-play (PnP) is a non-convex framework that integrates modern denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. An advantage of PnP is that one can use pre-trained denoisers when there is not sufficient data for end-to-end training. Although PnP has recently been studied extensively with great empirical success, theoretical analysis addressing even the most basic question of convergence has been insufficient. In this paper, we theoretically establish convergence of PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. We then propose real spectral normalization, a technique for training deep learning-based denoisers to satisfy the proposed Lipschitz condition. Finally, we present experimental results validating the theory. Comment: Published in the International Conference on Machine Learning, 201
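As a concrete illustration of the PnP-ADMM scheme this abstract refers to, here is a minimal NumPy sketch for the simplest data-fidelity term f(x) = 0.5*||x - b||^2, with soft-thresholding standing in for a trained denoiser (soft-thresholding is 1-Lipschitz, so it trivially satisfies a Lipschitz condition of the kind the paper imposes). All names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def prox_data(v, b, sigma):
    # Closed-form proximal map of the data term f(x) = 0.5*||x - b||^2.
    return (v + sigma * b) / (1.0 + sigma)

def soft_threshold(v, t):
    # Stand-in denoiser: soft-thresholding is nonexpansive (1-Lipschitz).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pnp_admm(b, denoiser, sigma=1.0, iters=200):
    # PnP-ADMM: the proximal step for the prior is replaced by a denoiser.
    x = np.zeros_like(b)
    y = np.zeros_like(b)
    u = np.zeros_like(b)
    for _ in range(iters):
        x = prox_data(y - u, b, sigma)   # data-fidelity proximal step
        y = denoiser(x + u)              # plug-and-play denoiser step
        u = u + x - y                    # dual (running residual) update
    return x
```

With this particular denoiser the iteration reduces to classical ADMM for an l1-regularized least-squares problem, so it converges to a soft-thresholded version of b; a learned denoiser would simply replace `soft_threshold`.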

    Provable Convergence of Plug-and-Play Priors with MMSE denoisers

    Full text link
    Plug-and-play priors (PnP) is a methodology for regularized image reconstruction that specifies the prior through an image denoiser. While PnP algorithms are well understood for denoisers performing maximum a posteriori probability (MAP) estimation, they have not been analyzed for minimum mean squared error (MMSE) denoisers. This letter addresses this gap by establishing the first theoretical convergence result for the iterative shrinkage/thresholding algorithm (ISTA) variant of PnP with MMSE denoisers. We show that the iterates produced by PnP-ISTA with an MMSE denoiser converge to a stationary point of some global cost function. We validate our analysis on sparse signal recovery in compressive sensing by comparing two types of denoisers, namely the exact MMSE denoiser and an approximate MMSE denoiser obtained by training a deep neural net.
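The PnP-ISTA iteration analyzed in this abstract alternates a gradient step on the data fit with a denoiser step. The sketch below runs it on a toy sparse-recovery instance of the kind the experiments describe; since an exact MMSE denoiser depends on the signal prior, a simple soft-thresholding denoiser is used as a stand-in (an assumption for the sketch, not the paper's denoiser).

```python
import numpy as np

def pnp_ista(A, y, denoiser, iters=500):
    # PnP-ISTA: gradient step on f(x) = 0.5*||Ax - y||^2,
    # then a denoiser in place of a proximal/shrinkage step.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = denoiser(x - step * A.T @ (A @ x - y))
    return x

# Toy compressive-sensing instance: 5-sparse signal, 50 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 17, 42, 60, 88]] = [2.0, -3.0, 1.5, 2.5, -2.0]
y = A @ x_true

# Soft-thresholding stands in for the MMSE denoiser analyzed in the paper.
denoiser = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.01, 0.0)
x_hat = pnp_ista(A, y, denoiser)
```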

    On the Construction of Averaged Deep Denoisers for Image Regularization

    Full text link
    Plug-and-Play (PnP) and Regularization by Denoising (RED) are recent paradigms for image reconstruction that can leverage the power of modern denoisers for image regularization. In particular, these algorithms have been shown to deliver state-of-the-art reconstructions using CNN denoisers. Since the regularization is performed in an ad-hoc manner in PnP and RED, understanding their convergence has been an active research area. Recently, it was observed in many works that iterate convergence of PnP and RED can be guaranteed if the denoiser is averaged or nonexpansive. However, integrating nonexpansivity with gradient-based learning is a challenging task -- checking nonexpansivity is known to be computationally intractable. Using numerical examples, we show that existing CNN denoisers violate the nonexpansive property and can cause the PnP iterations to diverge. In fact, algorithms for training nonexpansive denoisers either cannot guarantee nonexpansivity of the final denoiser or are computationally intensive. In this work, we propose to construct averaged (contractive) image denoisers by unfolding ISTA and ADMM iterations applied to wavelet denoising, and demonstrate that their regularization capacity for PnP and RED matches that of CNN denoisers. To the best of our knowledge, this is the first work to propose a simple framework for training provably averaged (contractive) denoisers using unfolding networks.
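The "numerical examples" this abstract mentions amount to empirically probing a denoiser's Lipschitz constant. A minimal Monte Carlo check is sketched below: it can only certify a *violation* of nonexpansivity (a ratio above 1), never prove nonexpansivity, consistent with the abstract's remark that exact checking is intractable. The function name and sampling scheme are illustrative assumptions.

```python
import numpy as np

def lipschitz_lower_bound(D, dim, trials=500, seed=0):
    # Monte Carlo lower bound on the Lipschitz constant of D:
    # the max of ||D(x) - D(y)|| / ||x - y|| over sampled nearby pairs.
    # A value > 1 certifies D is NOT nonexpansive; a value <= 1 proves nothing.
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        x = rng.standard_normal(dim)
        y = x + 1e-3 * rng.standard_normal(dim)  # small local perturbation
        best = max(best, np.linalg.norm(D(x) - D(y)) / np.linalg.norm(x - y))
    return best
```

For example, a soft-thresholding denoiser never exceeds ratio 1, while a deliberately expansive map such as `v -> 1.5*v` is flagged immediately.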

    Provably Convergent Plug-and-Play Quasi-Newton Methods

    Get PDF
    Plug-and-Play (PnP) methods are a class of efficient iterative methods that aim to combine data fidelity terms and deep denoisers using classical optimization algorithms, such as ISTA or ADMM, with applications in inverse problems and imaging. Provable PnP methods are a subclass of PnP methods with convergence guarantees, such as fixed point convergence or convergence to critical points of some energy function. Many existing provable PnP methods impose heavy restrictions on the denoiser or fidelity function, such as non-expansiveness or strict convexity, respectively. In this work, we propose a novel algorithmic approach incorporating quasi-Newton steps into a provable PnP framework based on proximal denoisers, resulting in greatly accelerated convergence while retaining light assumptions on the denoiser. By characterizing the denoiser as the proximal operator of a weakly convex function, we show that the fixed points of the proposed quasi-Newton PnP algorithm are critical points of a weakly convex function. Numerical experiments on image deblurring and super-resolution demonstrate 2--8x faster convergence as compared to other provable PnP methods with similar reconstruction quality.

    Provably Convergent Plug-and-Play Quasi-Newton Methods

    Full text link
    Plug-and-Play (PnP) methods are a class of efficient iterative methods that aim to combine data fidelity terms and deep denoisers using classical optimization algorithms, such as ISTA or ADMM. Provable PnP methods are a subclass of PnP methods with convergence guarantees, such as fixed point convergence or convergence to critical points of some energy function. Many existing provable PnP methods impose heavy restrictions on the denoiser or fidelity function, such as non-expansiveness or strict convexity, respectively. In this work, we propose a novel algorithmic approach incorporating quasi-Newton steps into a provable PnP framework based on proximal denoisers, resulting in greatly accelerated convergence while retaining light assumptions on the denoiser. By characterizing the denoiser as the proximal operator of a weakly convex function, we show that the fixed points of the proposed quasi-Newton PnP algorithm are critical points of a weakly convex function. Numerical experiments on image deblurring and super-resolution demonstrate significantly faster convergence as compared to other provable PnP methods with similar convergence results.