Bayesian Inference on Matrix Manifolds for Linear Dimensionality Reduction
We reframe linear dimensionality reduction as a problem of Bayesian inference
on matrix manifolds. This natural paradigm extends the Bayesian framework to
dimensionality reduction tasks in higher dimensions with simpler models at
greater speeds. Here an orthogonal basis is treated as a single point on a
manifold and is associated with a linear subspace on which observations vary
maximally. Throughout this paper, we employ the Grassmann and Stiefel manifolds
for various dimensionality reduction problems, explore the connection between
the two manifolds, and use Hybrid Monte Carlo for posterior sampling on the
Grassmannian for the first time. We delineate in which situations either
manifold should be considered. Further, matrix manifold models are used to
yield scientific insight in the context of cognitive neuroscience, and we
conclude that our methods are suitable for basic inference as well as accurate
prediction.
Comment: All datasets and computer programs are publicly available at
http://www.ics.uci.edu/~babaks/Site/Codes.htm
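The key idea above is to treat an orthogonal basis as a single point on a matrix manifold. A minimal sketch of that representation, not the authors' code (the data, dimensions, and seed here are all illustrative assumptions): any full-rank matrix can be orthonormalised into a point on the Stiefel manifold, and the resulting basis spans the subspace onto which observations are projected.

```python
import numpy as np

# Illustrative setup: 100 observations in 5 dimensions (hypothetical data).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Any full-rank 5x2 matrix maps to a Stiefel point via QR orthonormalisation;
# the columns of W form an orthonormal basis for a 2-dimensional subspace.
A = rng.normal(size=(5, 2))
W, _ = np.linalg.qr(A)

# Projecting onto the subspace gives the low-dimensional representation.
Z = X @ W
print(np.allclose(W.T @ W, np.eye(2)))  # basis is orthonormal by construction
```

Posterior sampling in the paper then moves this single point `W` around the manifold (via Hybrid Monte Carlo), rather than sampling each basis vector separately.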
Bayesian Exponential Random Graph Models with Nodal Random Effects
We extend the well-known and widely used Exponential Random Graph Model
(ERGM) by including nodal random effects to compensate for heterogeneity in the
nodes of a network. The Bayesian framework for ERGMs proposed by Caimo and
Friel (2011) yields the basis of our modelling algorithm. A central question in
network models is the question of model selection and following the Bayesian
paradigm we focus on estimating Bayes factors. To do so we develop an
approximate but feasible calculation of the Bayes factor which allows one to
pursue model selection. Two data examples and a small simulation study
illustrate our mixed model approach and the corresponding model selection.
Comment: 23 pages, 9 figures, 3 tables
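The Bayes factor mentioned above is the ratio of two model evidences, p(y | M1) / p(y | M0). As a hedged toy illustration of the quantity being estimated (not the paper's ERGM machinery, where the evidence is doubly intractable), here is a conjugate example where both evidences are available in closed form:

```python
import math

# Hypothetical data: 7 successes in 10 binary trials.
y, n = 7, 10

# M0: success probability fixed at 0.5 -> evidence is the binomial likelihood.
ev0 = math.comb(n, y) * 0.5**n

# M1: theta ~ Uniform(0, 1) -> the Beta-Binomial evidence is 1/(n+1) for any y.
ev1 = 1.0 / (n + 1)

# Bayes factor of M1 against M0; values below 1 favour M0.
bf10 = ev1 / ev0
print(round(bf10, 3))  # -> 0.776
```

For ERGMs neither evidence has a closed form, which is exactly why the paper develops an approximate but feasible calculation.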
Bayesian model selection for exponential random graph models via adjusted pseudolikelihoods
Models with intractable likelihood functions arise in areas including network
analysis and spatial statistics, especially those involving Gibbs random
fields. Posterior parameter estimation in these settings is termed a
doubly-intractable problem because both the likelihood function and the
posterior distribution are intractable. The comparison of Bayesian models is
often based on the statistical evidence, the integral of the un-normalised
posterior distribution over the model parameters which is rarely available in
closed form. For doubly-intractable models, estimating the evidence adds
another layer of difficulty. Consequently, the selection of the model that best
describes an observed network among a collection of exponential random graph
models for network analysis is a daunting task. Pseudolikelihoods offer a
tractable approximation to the likelihood but should be treated with caution
because they can lead to unreasonable inference. This paper specifies a
method to adjust pseudolikelihoods in order to obtain a reasonable, yet
tractable, approximation to the likelihood. This allows implementation of
widely used computational methods for evidence estimation and pursuit of
Bayesian model selection of exponential random graph models for the analysis of
social networks. Empirical comparisons to existing methods show that our
procedure yields similar evidence estimates, but at a lower computational cost.
Comment: Supplementary material attached. To view attachments, please download
and extract the gzipped source file listed under "Other formats".
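The pseudolikelihood that the paper adjusts replaces the intractable likelihood with a product of full conditionals. A minimal sketch under simplifying assumptions (a toy ERGM whose only statistic is the edge count, so every dyad's change statistic is 1 and its full conditional is logistic in theta; the graph below is hypothetical):

```python
import math

def log_pseudolikelihood(adj, theta):
    """Log pseudolikelihood of an adjacency matrix under an edges-only ERGM."""
    n = len(adj)
    lpl = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            delta = 1.0  # change in the edge-count statistic if y_ij flips 0 -> 1
            p_ij = 1.0 / (1.0 + math.exp(-theta * delta))
            lpl += math.log(p_ij if adj[i][j] else 1.0 - p_ij)
    return lpl

# Hypothetical 3-node network with two edges.
adj = [[0, 1, 1],
       [1, 0, 0],
       [1, 0, 0]]

# At theta = 0 each dyad has conditional probability 1/2, so the log
# pseudolikelihood is 3 * log(1/2).
print(round(log_pseudolikelihood(adj, 0.0), 4))  # -> -2.0794
```

With richer statistics (triangles, stars) this product badly misrepresents the likelihood's curvature, which is what the paper's adjustment corrects.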
Fixed-Form Variational Posterior Approximation through Stochastic Linear Regression
We propose a general algorithm for approximating nonstandard Bayesian
posterior distributions. The algorithm minimizes the Kullback-Leibler
divergence of an approximating distribution to the intractable posterior
distribution. Our method can be used to approximate any posterior distribution,
provided that it is given in closed form up to the proportionality constant.
The approximation can be any distribution in the exponential family or any
mixture of such distributions, which means that it can be made arbitrarily
precise. Several examples illustrate the speed and accuracy of our
approximation method in practice.
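The objective described above, minimising the KL divergence of an approximating distribution to a posterior known only up to a constant, can be illustrated with a deliberately simple substitute for the paper's stochastic linear-regression estimator: plain stochastic-gradient variational inference with a reparameterised Gaussian. Everything below (the target, the fixed unit variance, the step size) is an illustrative assumption:

```python
import random

random.seed(1)

# Unnormalised log target: log p(x) = -0.5 * (x - 2)^2 + const,
# i.e. a Gaussian with mean 2 whose normalising constant we pretend not to know.
def grad_log_p(x):
    return -(x - 2.0)

# Fit q = N(mu, 1) by stochastic gradient ascent on the ELBO,
# using the reparameterisation x = mu + eps with eps ~ N(0, 1).
mu, lr = 0.0, 0.05
for _ in range(2000):
    eps = random.gauss(0.0, 1.0)
    mu += lr * grad_log_p(mu + eps)

print(mu)  # hovers close to the target mean of 2
```

The paper's contribution is a more efficient fixed-form estimator of the same kind of update, and it also covers mixtures of exponential-family distributions rather than a single Gaussian.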
Patterns of Scalable Bayesian Inference
Datasets are growing not just in size but in complexity, creating a demand
for rich models and quantification of uncertainty. Bayesian methods are an
excellent fit for this demand, but scaling Bayesian inference is a challenge.
In response to this challenge, there has been considerable recent work based on
varying assumptions about model structure, underlying computational resources,
and the importance of asymptotic correctness. As a result, there is a zoo of
ideas with few clear overarching principles.
In this paper, we seek to identify unifying principles, patterns, and
intuitions for scaling Bayesian inference. We review existing work on utilizing
modern computing resources with both MCMC and variational approximation
techniques. From this taxonomy of ideas, we characterize the general principles
that have proven successful for designing scalable inference procedures and
comment on the path forward.
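One recurring pattern in this literature is data subsampling: a minibatch gives an unbiased estimate of the full-data log likelihood after rescaling by N/m, which is the ingredient behind stochastic-gradient MCMC and variational methods. A small hedged sketch (data and batch size are illustrative assumptions):

```python
import random

random.seed(0)
data = [random.gauss(1.0, 1.0) for _ in range(10_000)]

def log_lik(theta, xs):
    """Unnormalised Gaussian log likelihood contribution of the points xs."""
    return sum(-0.5 * (x - theta) ** 2 for x in xs)

N, m = len(data), 100
batch = random.sample(data, m)

# Rescaling by N/m makes the minibatch sum an unbiased estimator
# of the full-data log likelihood.
estimate = (N / m) * log_lik(0.0, batch)
full = log_lik(0.0, data)
print(estimate, full)
```

The estimator's variance, and how the downstream sampler copes with it, is where the surveyed methods differ most.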
Algebraic Bayesian analysis of contingency tables with possibly zero-probability cells
In this paper we consider a Bayesian analysis of contingency tables allowing
for the possibility that cells may have probability zero. In this sense we
depart from standard log-linear modeling that implicitly assumes a positivity
constraint. Our approach leads us to consider mixture models for contingency
tables, where the components of the mixture, which we call model-instances,
have distinct support. We rely on ideas from polynomial algebra in order to
identify the various model instances. We also provide a method to assign prior
probabilities to each instance of the model, as well as describing methods for
constructing priors on the parameter space of each instance. We illustrate our
methodology through a table involving two structural zeros, as
well as a zero count. The results we obtain show that our analysis may lead to
conclusions that are substantively different from those that would obtain in a
standard framework, wherein the possibility of zero-probability cells is not
explicitly accounted for.
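The mixture-of-instances idea above can be sketched with Dirichlet-multinomial evidences, since each model-instance restricts the table to a different support. This is a hedged toy version (a flattened 2x2 table with hypothetical counts and a symmetric Dirichlet(1) prior, not the paper's algebraic construction):

```python
import math

def log_evidence(counts, alpha=1.0):
    """Dirichlet-multinomial marginal likelihood over the supported cells."""
    A, n = alpha * len(counts), sum(counts)
    return (math.lgamma(A) - math.lgamma(A + n)
            + sum(math.lgamma(alpha + y) - math.lgamma(alpha) for y in counts))

# Hypothetical 2x2 table, flattened; the last cell has an observed zero count.
y = [12, 7, 5, 0]

# Instance 1: all four cells supported (the last cell merely happened to be empty).
# Instance 2: the last cell is a structural zero, so its support has three cells.
log_ev_full = log_evidence(y)
log_ev_zero = log_evidence(y[:3])

# Posterior instance probabilities under equal prior weights.
m = max(log_ev_full, log_ev_zero)
w = [math.exp(log_ev_full - m), math.exp(log_ev_zero - m)]
p_zero = w[1] / sum(w)
print(round(p_zero, 2))  # -> 0.9: the structural-zero instance is favoured
```

A standard log-linear analysis would implicitly commit to the full-support instance, which is the positivity constraint the abstract argues against.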