Random projections for Bayesian regression
This article deals with random projections applied as a data reduction
technique for Bayesian regression analysis. We show sufficient conditions under
which the entire d-dimensional distribution is approximately preserved under
random projections by reducing the number of data points from n to
k = O(poly(d/ε)) in the case n ≫ d. Under mild assumptions, we prove that
evaluating a Gaussian likelihood function based on the projected data instead
of the original data yields a (1 + O(ε))-approximation in terms of the
ℓ2-Wasserstein distance. Our main result shows that the posterior distribution
of Bayesian linear regression is approximated up to a small error depending on
only an ε-fraction of its defining parameters. This holds when using arbitrary
Gaussian priors or the degenerate case of uniform distributions over ℝ^d for β.
Our empirical evaluations involve different simulated settings of Bayesian
linear regression. Our experiments underline that the proposed method is able
to recover the regression model up to small error while considerably reducing
the total running time.
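The sketch-and-solve idea in this abstract can be illustrated in a few lines: project the design matrix and response with a random matrix, then run the usual conjugate Gaussian update on the reduced data. The following is a minimal sketch assuming a dense Gaussian projection and a known noise variance; the function names and the particular choice of k are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_posterior(X, y, prior_cov, noise_var=1.0):
    # Posterior of beta for y ~ N(X beta, noise_var I), beta ~ N(0, prior_cov).
    precision = np.linalg.inv(prior_cov) + X.T @ X / noise_var
    cov = np.linalg.inv(precision)
    return cov @ X.T @ y / noise_var, cov

def sketch_and_fit(X, y, k, prior_cov, rng):
    # Reduce n data points to k with a Gaussian random projection Pi
    # (entries N(0, 1/k), so E[Pi^T Pi] = I), then fit on (Pi X, Pi y).
    n = X.shape[0]
    Pi = rng.normal(size=(k, n)) / np.sqrt(k)
    return gaussian_posterior(Pi @ X, Pi @ y, prior_cov)

# Toy usage in the regime n >> d: k depends only on d and the accuracy
# epsilon, not on n (k = 500 here is an arbitrary illustrative value).
rng = np.random.default_rng(0)
n, d, k = 20_000, 10, 500
beta = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ beta + rng.normal(size=n)
mean, cov = sketch_and_fit(X, y, k, prior_cov=np.eye(d), rng=rng)
```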
A Computationally Efficient Projection-Based Approach for Spatial Generalized Linear Mixed Models
Inference for spatial generalized linear mixed models (SGLMMs) for
high-dimensional non-Gaussian spatial data is computationally intensive. The
computational challenge is due to the high-dimensional random effects and
because Markov chain Monte Carlo (MCMC) algorithms for these models tend to be
slow mixing. Moreover, spatial confounding inflates the variance of fixed
effect (regression coefficient) estimates. Our approach addresses both the
computational and confounding issues by replacing the high-dimensional spatial
random effects with a reduced-dimensional representation based on random
projections. Standard MCMC algorithms mix well and the reduced-dimensional
setting speeds up computations per iteration. We show, via simulated examples,
that Bayesian inference under this reduced-dimensional approach performs well,
both for parameter inference and for prediction; our methods also compare
favorably to existing "reduced-rank" approaches. We also apply our methods to
two real-world data examples, one on bird count data and the other on
classifying rock types.
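The core computational device, swapping an n-dimensional spatial random effect W for Phi delta with delta of dimension m ≪ n, can be sketched with a randomized range finder applied to the spatial covariance. This is an illustration under assumed ingredients (a known exponential covariance and a Halko-style randomized projection), not the authors' implementation.

```python
import numpy as np

def random_projection_basis(C, m, rng, oversample=10):
    # Approximate the top-m eigenspace of the n x n covariance C using
    # one pass of randomized range finding.
    n = C.shape[0]
    Omega = rng.normal(size=(n, m + oversample))   # random test matrix
    Q, _ = np.linalg.qr(C @ Omega)                 # orthonormal range basis
    B = Q.T @ C @ Q                                # small projected problem
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:m]
    return Q @ vecs[:, top]                        # n x m reduced basis Phi

# Toy usage: 2,000 locations reduced to a 50-dimensional effect. In the
# SGLMM the linear predictor X beta + W would then be replaced by
# X beta + Phi delta before running a standard MCMC algorithm.
rng = np.random.default_rng(0)
coords = rng.uniform(size=(2000, 2))
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
C = np.exp(-dists / 0.2)            # assumed exponential covariance model
Phi = random_projection_basis(C, 50, rng)
```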
Bayesian Compressed Regression
As an alternative to variable selection or shrinkage in high dimensional
regression, we propose to randomly compress the predictors prior to analysis.
This dramatically reduces storage and computational bottlenecks, performing
well when the predictors can be projected to a low dimensional linear subspace
with minimal loss of information about the response. As opposed to existing
Bayesian dimensionality reduction approaches, the exact posterior distribution
conditional on the compressed data is available analytically, speeding up
computation by many orders of magnitude while also bypassing robustness issues
due to convergence and mixing problems with MCMC. Model averaging is used to
reduce sensitivity to the random projection matrix, while accommodating
uncertainty in the subspace dimension. Strong theoretical support is provided
for the approach by showing near parametric convergence rates for the
predictive density in the large p, small n asymptotic paradigm. Practical
performance relative to competitors is illustrated in simulations and real data
applications.
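A minimal sketch of this compress-then-fit pipeline follows: project the p predictors down to m dimensions, exploit conjugacy so the posterior over the compressed coefficients is available in closed form, and average predictions over several random projections. The Gaussian projection, the fixed variance hyperparameters, and the equal-weight averaging are simplifying assumptions; the paper's projection distribution and model-averaging weights may differ.

```python
import numpy as np

def compressed_posterior(X, y, m, rng, tau2=1.0, sigma2=1.0):
    # Compress p predictors to m; the exact Gaussian posterior over the
    # compressed coefficients then follows from normal-normal conjugacy.
    p = X.shape[1]
    Phi = rng.normal(size=(m, p)) / np.sqrt(m)   # random compression matrix
    Z = X @ Phi.T                                # n x m compressed design
    cov = np.linalg.inv(np.eye(m) / tau2 + Z.T @ Z / sigma2)
    mean = cov @ Z.T @ y / sigma2
    return Phi, mean, cov

def averaged_prediction(X, y, Xnew, dims, reps, rng):
    # Average posterior-mean predictions across projections and across
    # candidate subspace dimensions m (equal weights here for simplicity).
    preds = []
    for m in dims:
        for _ in range(reps):
            Phi, mean, _ = compressed_posterior(X, y, m, rng)
            preds.append(Xnew @ Phi.T @ mean)
    return np.mean(preds, axis=0)

# Toy usage with p >> n: 100 observations, 5,000 predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))
y = X[:, :3] @ np.ones(3) + rng.normal(size=100)
yhat = averaged_prediction(X, y, X, dims=(20, 40), reps=5, rng=rng)
```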
Optimal projection of observations in a Bayesian setting
Optimal dimensionality reduction methods are proposed for the Bayesian
inference of a Gaussian linear model with additive noise in presence of
overabundant data. Three different optimal projections of the observations are
proposed based on information theory: the projection that minimizes the
Kullback-Leibler divergence between the posterior distributions of the original
and the projected models, the one that minimizes the expected Kullback-Leibler
divergence between the same distributions, and the one that maximizes the
mutual information between the parameter of interest and the projected
observations. The first two optimization problems are formulated as the
determination of an optimal subspace and therefore the solution is computed
using Riemannian optimization algorithms on the Grassmann manifold. Regarding
the maximization of the mutual information, it is shown that there exists an
optimal subspace that minimizes the entropy of the posterior distribution of
the reduced model; that a basis of this subspace can be computed as the
solution to a generalized eigenvalue problem; that an a priori error estimate
on the mutual information is available for this particular solution; and that
the dimensionality of the subspace needed to exactly conserve the mutual
information between the input and the output of the models is less than the
number of parameters to be inferred. Numerical applications to linear and nonlinear
models are used to assess the efficiency of the proposed approaches, and to
highlight their advantages compared to standard approaches based on the
principal component analysis of the observations.
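For the mutual-information case, the generalized eigenvalue construction mentioned in the abstract can be sketched for the Gaussian linear model y = G x + e with x ~ N(0, Cx) and e ~ N(0, Ce). Pairing the signal covariance G Cx G^T against the noise covariance Ce in the eigenproblem is a common construction and an assumption about the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import eigh

def mi_projection_basis(G, Cx, Ce, r):
    # Solve the generalized symmetric eigenproblem (G Cx G^T) v = lam Ce v
    # and keep the r most informative directions of the observation space.
    signal = G @ Cx @ G.T
    vals, vecs = eigh(signal, Ce)
    top = np.argsort(vals)[::-1][:r]
    return vecs[:, top]              # columns span the projection subspace

# Since rank(G Cx G^T) cannot exceed the number of inferred parameters,
# only that many eigenvalues are nonzero, consistent with the abstract's
# bound on the subspace dimension needed to conserve the mutual information.
rng = np.random.default_rng(0)
m_obs, d_par, r = 200, 5, 5
G = rng.normal(size=(m_obs, d_par))
Cx, Ce = np.eye(d_par), np.eye(m_obs)
V = mi_projection_basis(G, Cx, Ce, r)   # reduced observations: V.T @ y
```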