Sampling constrained probability distributions using Spherical Augmentation
Statistical models with constrained probability distributions are abundant in
machine learning. Some examples include regression models with norm constraints
(e.g., Lasso), probit, many copula models, and latent Dirichlet allocation
(LDA). Bayesian inference involving probability distributions confined to
constrained domains could be quite challenging for commonly used sampling
algorithms. In this paper, we propose a novel augmentation technique that
handles a wide range of constraints by mapping the constrained domain to a
sphere in the augmented space. By moving freely on the surface of this sphere,
sampling algorithms handle constraints implicitly and generate proposals that
remain within boundaries when mapped back to the original space. Our proposed
method, called Spherical Augmentation, provides a mathematically natural and
computationally efficient framework for sampling from constrained probability
distributions. We show the advantages of our method over state-of-the-art
sampling algorithms, such as exact Hamiltonian Monte Carlo, using several
examples including truncated Gaussian distributions, Bayesian Lasso, Bayesian
bridge regression, reconstruction of quantized stationary Gaussian process, and
LDA for topic modeling.
Comment: 41 pages, 13 figures
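To make the mapping concrete, consider the simplest case of a norm constraint ||θ|| ≤ r (as in the Lasso-type examples): appending a slack coordinate sqrt(r² − ||θ||²) lifts the ball onto a sphere one dimension higher, and any point on that sphere maps back inside the ball when the auxiliary coordinate is dropped. The NumPy sketch below shows only this geometric idea; the function names and the naive random-walk move on the sphere are illustrative assumptions, whereas the paper's actual samplers use Hamiltonian dynamics on the sphere with the associated change-of-variables corrections.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(theta, r=1.0):
    # lift a point of the ball ||theta|| <= r onto the sphere of radius r
    # in one higher dimension by appending the slack coordinate
    slack = np.sqrt(max(r**2 - theta @ theta, 0.0))
    return np.append(theta, slack)

def project(theta_aug):
    # drop the auxiliary coordinate; because the full vector has norm r,
    # the remaining coordinates always satisfy the original constraint
    return theta_aug[:-1]

def sphere_step(theta_aug, step=0.3, r=1.0):
    # naive random-walk move constrained to the sphere:
    # perturb, then re-normalize back onto the sphere of radius r
    prop = theta_aug + step * rng.standard_normal(theta_aug.shape)
    return r * prop / np.linalg.norm(prop)

# moving freely on the sphere never violates the constraint after mapping back
z = augment(np.array([0.5, -0.2]))
for _ in range(100):
    z = sphere_step(z)
print(np.linalg.norm(project(z)) <= 1.0 + 1e-9)  # True
```

The point of the construction is that the sampler never has to test the boundary: every proposal generated on the sphere is feasible by design once projected back.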
In-sample forecasting: A brief review and new algorithms
Statistical methods often distinguish between in-sample and out-of-sample approaches. In particular, this is the case when time is involved: time series methods are then often proposed that extrapolate past patterns into the future via complicated recursion formulas. Standard statistical inference, on the other hand, is concerned with estimating parameters within the given sample. This review paper is about a statistical methodology in which all parameters are estimated in-sample while an out-of-sample forecast is produced without recursion or extrapolation. A new super-simulation algorithm ensures a faster implementation of the simplest and perhaps most important version of in-sample forecasting.
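The abstract does not spell out which version is the "simplest and perhaps most important"; a standard example from this literature (an assumption here, not stated in the abstract) is a multiplicative age-cohort model on a run-off triangle, where both components are estimated from the observed cells and the unobserved cells are forecast as their product, with no recursion and no extrapolation beyond the fitted structure. A minimal NumPy sketch, using an illustrative alternating-scaling fit rather than the paper's super-simulation algorithm:

```python
import numpy as np

def insample_forecast(triangle, iters=500):
    """Fit mu[i, j] = a[i] * b[j] to the observed upper run-off triangle
    (NaN marks unobserved cells) and fill the unobserved cells with
    a[i] * b[j] -- the forecast is a by-product of in-sample estimation."""
    n = triangle.shape[0]
    observed = ~np.isnan(triangle)
    a = np.ones(n)
    b = np.ones(n)
    for _ in range(iters):
        # alternating scaling updates; a and b are identified only up to
        # a common scale, but the products a[i] * b[j] are invariant
        for i in range(n):
            m = observed[i]
            a[i] = np.sum(triangle[i, m]) / b[m].sum()
        for j in range(n):
            m = observed[:, j]
            b[j] = np.sum(triangle[m, j]) / a[m].sum()
    out = triangle.copy()
    out[~observed] = np.outer(a, b)[~observed]
    return out
```

When the observed triangle is exactly multiplicative, the fitted products reproduce it and the lower triangle is recovered without any recursive roll-forward.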
Microstructure Noise in the Continuous Case: The Pre-Averaging Approach - JLMPV-9
This paper presents a generalized pre-averaging approach for estimating the integrated volatility. The approach also provides consistent estimators of other powers of volatility; in particular, it gives feasible ways to consistently estimate the asymptotic variance of the estimator of the integrated volatility. We show that our approach, which possesses an intuitive transparency, can generate rate-optimal estimators (with convergence rate n^(-1/4)).
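A minimal sketch of the pre-averaging construction, under assumptions the abstract does not fix: the common weight function g(x) = min(x, 1 − x), a window k_n ≈ θ√n, and no edge or finite-sample adjustments. Weighted local averages of the noisy returns damp the microstructure noise, and a bias-correction term removes its remaining contribution.

```python
import numpy as np

def preaveraged_iv(y, theta=1.0):
    """Pre-averaging estimator of integrated volatility from noisy
    log-prices y observed on a regular grid (simplified sketch)."""
    n = len(y) - 1
    kn = int(np.ceil(theta * np.sqrt(n)))        # window k_n ~ theta * sqrt(n)
    dy = np.diff(y)                              # noisy returns
    j = np.arange(1, kn)
    g = np.minimum(j / kn, 1 - j / kn)           # weight g(x) = min(x, 1 - x)
    psi1, psi2 = 1.0, 1.0 / 12.0                 # int g'^2 and int g^2 for this g
    # pre-averaged returns: ybar[i] = sum_j g(j/kn) * dy[i + j - 1]
    ybar = np.convolve(dy, g[::-1], mode="valid")
    omega2 = np.sum(dy**2) / (2 * n)             # noise variance estimate
    return np.sum(ybar**2) / (kn * psi2) - psi1 * omega2 / (theta**2 * psi2)
```

Dividing the sum of squared pre-averaged returns by k_n ψ₂ recovers the signal scale, while the subtracted term ψ₁ ω̂² / (θ² ψ₂) removes the residual noise bias; the n^(-1/4) rate comes from balancing the two error sources through the √n window.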