
    A Fast Stochastic Plug-and-Play ADMM for Imaging Inverse Problems

    In this work we propose an efficient stochastic plug-and-play (PnP) algorithm for imaging inverse problems. PnP stochastic gradient descent methods have recently been proposed and have shown improved performance over standard deterministic PnP methods in some imaging applications. However, current stochastic PnP methods need to evaluate the image denoiser frequently, which can be computationally expensive. To overcome this limitation, we propose a new stochastic PnP-ADMM method based on introducing stochastic gradient descent inner loops within an inexact ADMM framework. We provide a theoretical guarantee of fixed-point convergence for our algorithm under standard assumptions. Our numerical results demonstrate the effectiveness of our approach compared with state-of-the-art PnP methods.
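    As a rough sketch of the idea (not the authors' reference implementation), the snippet below pairs an SGD inner loop on a mini-batched least-squares data term with a single denoiser call per outer ADMM iteration; `data_blocks`, `denoiser`, and all step sizes are illustrative placeholders.

```python
import numpy as np

def stochastic_pnp_admm(data_blocks, denoiser, rho=1.0, n_outer=50,
                        n_inner=10, step=1e-3, rng=None):
    """Hedged sketch of a stochastic PnP-ADMM loop.

    Assumptions: a least-squares data fit split into mini-batches
    `data_blocks = [(A_1, y_1), ..., (A_m, y_m)]`, and `denoiser` is any
    off-the-shelf image denoiser standing in for the proximal operator of
    the (implicit) image prior.  Names and hyperparameters are placeholders.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = data_blocks[0][0].shape[1]
    m = len(data_blocks)
    x = np.zeros(n)   # image estimate (primal variable)
    z = np.zeros(n)   # splitting variable handled by the denoiser
    u = np.zeros(n)   # scaled dual variable
    for _ in range(n_outer):
        # Inexact x-update: SGD inner loop on
        # (1/2)||Ax - y||^2 + (rho/2)||x - z + u||^2, one mini-batch per step.
        for _ in range(n_inner):
            Ai, yi = data_blocks[rng.integers(m)]
            grad = m * Ai.T @ (Ai @ x - yi) + rho * (x - z + u)
            x -= step * grad
        # z-update: only one denoiser evaluation per outer iteration.
        z = denoiser(x + u)
        # Dual ascent step.
        u += x - z
    return x
```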

    Scalable Peaceman-Rachford Splitting Method with Proximal Terms

    Along with the development of the Peaceman-Rachford Splitting Method (PRSM), many batch algorithms based on it have been studied in depth, but almost no work has focused on the performance of a stochastic version of PRSM. In this paper, we propose a new stochastic algorithm based on PRSM, prove its convergence rate in the ergodic sense, and test its performance on both artificial and real data. We show that our proposed algorithm, Stochastic Scalable PRSM (SS-PRSM), enjoys an O(1/K) convergence rate, which matches the newest stochastic algorithms based on ADMM and is faster than general stochastic ADMM (which is O(1/√K)). Our algorithm also offers wide flexibility, outperforms many state-of-the-art stochastic ADMM-based algorithms, and has low memory cost in large-scale splitting optimization problems.
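    A hedged sketch of what one stochastic PRSM iteration can look like for min f(x) + g(z) subject to x = z (the exact SS-PRSM update rules and proximal terms may differ): the f-subproblem is replaced by a stochastic linearization with a proximal term, and the dual variable is updated after both the x- and z-steps, which is the Peaceman-Rachford hallmark. `grad_sample`, `prox_g`, and the step sizes are placeholders, not the paper's API.

```python
import numpy as np

def stochastic_prsm(grad_sample, prox_g, dim, beta=1.0, alpha=0.9,
                    eta=1e-2, n_iter=200):
    """Sketch of a stochastic Peaceman-Rachford splitting loop.

    `grad_sample(x)` returns an unbiased stochastic gradient of f at x;
    `prox_g(v, t)` is the proximal operator of g with step t.  The ergodic
    (averaged) iterate is returned, matching the O(1/K) rate discussed above.
    """
    x = np.zeros(dim)
    z = np.zeros(dim)
    u = np.zeros(dim)      # scaled dual variable
    xbar = np.zeros(dim)   # ergodic average of the x-iterates
    for k in range(1, n_iter + 1):
        # x-update: stochastic linearization of f plus a proximal term, i.e.
        # argmin_x <g_k, x> + ||x - x_k||^2 / (2*eta) + (beta/2)||x - z + u||^2
        g_k = grad_sample(x)
        x = (x + eta * (beta * (z - u) - g_k)) / (1.0 + eta * beta)
        # First (relaxed) dual update -- done before the z-step in PRSM.
        u = u + alpha * (x - z)
        # z-update: exact proximal step on g.
        z = prox_g(x + u, 1.0 / beta)
        # Second dual update.
        u = u + alpha * (x - z)
        # Running ergodic average.
        xbar += (x - xbar) / k
    return xbar
```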

    Bayesian Dark Knowledge

    We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data and/or where we need accurate posterior predictive densities, e.g., for applications involving bandits or active learning. One simple approach is to use online Monte Carlo methods, such as SGLD (stochastic gradient Langevin dynamics). Unfortunately, such a method needs to store many copies of the parameters (which wastes memory) and needs to make predictions using many versions of the model (which wastes time). We describe a method for "distilling" a Monte Carlo approximation to the posterior predictive density into a more compact form, namely a single deep neural network. We compare to two very recent approaches to Bayesian neural networks, namely an approach based on expectation propagation [Hernandez-Lobato and Adams, 2015] and an approach based on variational Bayes [Blundell et al., 2015]. Our method performs better than both of these, is much simpler to implement, and uses less computation at test time.
    Comment: final version submitted to NIPS 201
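    A minimal sketch of the distillation idea under stated assumptions (a classification teacher/student pair in PyTorch with illustrative hyperparameters; not the paper's code): each iteration takes one SGLD step on the teacher and one distillation step that fits the student to the teacher's current predictive distribution, so no ensemble of parameter samples needs to be stored.

```python
import torch
import torch.nn.functional as F

def distill_sgld(teacher, student, loader, n_train, n_steps=10_000,
                 eps=1e-4, prior_var=1.0, lr_student=1e-3):
    """Sketch: online distillation of an SGLD posterior into one network.

    `teacher` and `student` are hypothetical torch.nn classifiers returning
    logits; `loader` yields (x, y) mini-batches; `n_train` is the dataset
    size used to scale the mini-batch likelihood.  All values are illustrative.
    """
    opt_s = torch.optim.Adam(student.parameters(), lr=lr_student)
    data = iter(loader)
    for _ in range(n_steps):
        try:
            x, y = next(data)
        except StopIteration:
            data = iter(loader)
            x, y = next(data)
        # SGLD step on the teacher: gradient of the minibatch log-posterior
        # plus Gaussian noise with variance eps (one posterior sample per step).
        teacher.zero_grad()
        nll = F.cross_entropy(teacher(x), y, reduction="mean") * n_train
        prior = sum((p ** 2).sum() for p in teacher.parameters()) / (2 * prior_var)
        (nll + prior).backward()
        with torch.no_grad():
            for p in teacher.parameters():
                p -= 0.5 * eps * p.grad
                p += torch.randn_like(p) * eps ** 0.5
        # Distillation step: the student matches the current sample's
        # predictive distribution, approximating the posterior predictive online.
        with torch.no_grad():
            soft_targets = F.softmax(teacher(x), dim=1)
        loss = F.kl_div(F.log_softmax(student(x), dim=1), soft_targets,
                        reduction="batchmean")
        opt_s.zero_grad()
        loss.backward()
        opt_s.step()
    return student
```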