A trust-region method for stochastic variational inference with applications to streaming data
Stochastic variational inference allows for fast posterior inference in
complex Bayesian models. However, the algorithm is prone to local optima which
can make the quality of the posterior approximation sensitive to the choice of
hyperparameters and initialization. We address this problem by replacing the
natural gradient step of stochastic variational inference with a trust-region
update. We show that this leads to generally better results and reduced
sensitivity to hyperparameters. We also describe a new strategy for variational
inference on streaming data and show that in this setting our trust-region
method is crucial for good performance.

Comment: in Proceedings of the 32nd International Conference on Machine
Learning, 2015
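The contrast between a plain stochastic natural-gradient step and a trust-region-constrained one can be illustrated on a toy one-dimensional Gaussian variational family with fixed variance. This is only a hedged sketch, not the paper's algorithm: the function names, the KL radius `delta`, and the step-clipping heuristic are illustrative assumptions.

```python
import numpy as np

def kl_gaussian_mean(m1, m2, s2):
    # KL(N(m1, s2) || N(m2, s2)) when both Gaussians share variance s2
    return (m1 - m2) ** 2 / (2.0 * s2)

def natural_gradient_step(m, m_hat, rho):
    # Plain SVI step: damped move toward the noisy per-minibatch optimum m_hat;
    # an outlying m_hat from a bad minibatch can still drag m far away
    return (1 - rho) * m + rho * m_hat

def trust_region_step(m, m_hat, s2, delta):
    # Trust-region-flavored step: move toward m_hat, but cap the move so the
    # KL between old and new variational Gaussians stays within radius delta
    max_step = np.sqrt(2.0 * s2 * delta)
    return m + np.clip(m_hat - m, -max_step, max_step)
```

With `m = 0`, a noisy target `m_hat = 10`, unit variance, and `delta = 0.5`, the trust-region step moves only to `m = 1`, keeping the KL between successive variational distributions at the radius, whereas the plain step's displacement depends entirely on the learning rate `rho`.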
An Empirical Study of Stochastic Variational Algorithms for the Beta Bernoulli Process
Stochastic variational inference (SVI) is emerging as the most promising
candidate for scaling inference in Bayesian probabilistic models to large
datasets. However, the performance of these methods has been assessed primarily
in the context of Bayesian topic models, particularly latent Dirichlet
allocation (LDA). Deriving several new algorithms, and using synthetic, image
and genomic datasets, we investigate whether the understanding gleaned from LDA
applies in the setting of sparse latent factor models, specifically beta
process factor analysis (BPFA). We demonstrate that the big picture is
consistent: using Gibbs sampling within SVI to maintain certain posterior
dependencies is extremely effective. However, we find that different posterior
dependencies are important in BPFA relative to LDA. In particular,
approximations able to model intra-local variable dependence perform best.

Comment: ICML, 12 pages. Volume 37: Proceedings of the 32nd International
Conference on Machine Learning, 2015
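As a rough illustration of "Gibbs sampling within SVI", here is a minimal sketch for a toy linear-Gaussian sparse factor model: the local step Gibbs-samples the binary feature indicators for one data point, and the global step takes a damped stochastic natural-gradient update on the Beta variational posteriors over feature probabilities. The function names, the toy likelihood, and the update form are all illustrative assumptions, not the algorithms derived in the paper.

```python
import numpy as np

def gibbs_local_step(x, Phi, pi, sigma2, n_sweeps=5, rng=None):
    # Gibbs-sample the binary indicators z for one data point x (length D),
    # conditioned on global factor loadings Phi (K x D) and current feature
    # probabilities pi; returns a Monte Carlo estimate of E[z | x]
    rng = np.random.default_rng(0) if rng is None else rng
    K = Phi.shape[0]
    z = rng.random(K) < pi                       # random initialization
    samples = np.zeros(K)
    for _ in range(n_sweeps):
        for k in range(K):
            r = x - Phi.T @ z + z[k] * Phi[k]    # residual with feature k removed
            log_odds = (np.log(pi[k] / (1.0 - pi[k]))
                        + (2.0 * Phi[k] @ r - Phi[k] @ Phi[k]) / (2.0 * sigma2))
            z[k] = rng.random() < 1.0 / (1.0 + np.exp(-log_odds))
        samples += z
    return samples / n_sweeps

def svi_global_step(alpha, beta, zbar_batch, N, a, b, rho):
    # Damped stochastic natural-gradient update of the Beta(alpha_k, beta_k)
    # variational posteriors, rescaling minibatch statistics to the full data
    zbar = zbar_batch.mean(axis=0)               # averaged Gibbs estimates
    alpha_hat = a + N * zbar
    beta_hat = b + N * (1.0 - zbar)
    return ((1 - rho) * alpha + rho * alpha_hat,
            (1 - rho) * beta + rho * beta_hat)
```

The point of the Gibbs inner loop is that each `z[k]` is resampled conditioned on the others, so the local-step estimate retains dependencies among the indicators that a fully factorized mean-field update would discard.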