
    Smoothed Gradients for Stochastic Variational Inference

    Stochastic variational inference (SVI) lets us scale up Bayesian computation to massive data. It uses stochastic optimization to fit a variational distribution, following easy-to-compute noisy natural gradients. As with most traditional stochastic optimization methods, SVI takes precautions to use unbiased stochastic gradients whose expectations are equal to the true gradients. In this paper, we explore the idea of following biased stochastic gradients in SVI. Our method replaces the natural gradient with a similarly constructed vector that uses a fixed-window moving average of some of its previous terms. We demonstrate the many advantages of this technique. First, its computational cost is the same as for SVI and its storage requirements only multiply by a constant factor. Second, it enjoys significant variance reduction over the unbiased estimates, smaller bias than averaged gradients, and smaller mean-squared error against the full gradient. We test our method on latent Dirichlet allocation with three large corpora.
    Comment: Appears in Neural Information Processing Systems, 2014
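
    The update described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it averages the entire noisy natural gradient over a fixed window, whereas the paper averages only some of the gradient's terms. The function noisy_natural_gradient, the step-size schedule, and all parameter names are hypothetical placeholders.

    from collections import deque

    import numpy as np


    def smoothed_svi(lam0, data_stream, noisy_natural_gradient,
                     window=10, n_steps=1000,
                     step_size=lambda t: (t + 1) ** -0.7):
        """SVI whose update direction is a fixed-window moving average
        of the last `window` noisy natural gradient terms.

        lam0                   -- initial global variational parameters
        data_stream            -- iterator yielding minibatches
        noisy_natural_gradient -- fn(lam, minibatch) -> noisy natural gradient
        """
        lam = np.asarray(lam0, dtype=float)
        recent = deque(maxlen=window)  # storage grows only by a constant factor

        for t in range(n_steps):
            minibatch = next(data_stream)
            g = noisy_natural_gradient(lam, minibatch)  # unbiased but high-variance
            recent.append(g)

            # Biased, variance-reduced direction: average of the stored terms.
            g_smooth = np.mean(list(recent), axis=0)

            lam = lam + step_size(t) * g_smooth
        return lam

    With window=1 this reduces to plain SVI; a larger window lowers the variance of each update at the cost of a small fixed-window bias, which is the trade-off the abstract describes.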

    Necessary conditions for continuous parameter stochastic optimization problems

    Application of abstract variational theory to continuous parameter stochastic optimization problems to derive maximum principles in linear programming.