A Geometric Variational Approach to Bayesian Inference
We propose a novel Riemannian geometric framework for variational inference
in Bayesian models based on the nonparametric Fisher-Rao metric on the manifold
of probability density functions. Under the square-root density representation,
the manifold can be identified with the positive orthant of the unit
hypersphere in L2, and the Fisher-Rao metric reduces to the standard L2 metric.
Exploiting such a Riemannian structure, we formulate the task of approximating
the posterior distribution as a variational problem on the hypersphere based on
the alpha-divergence. Compared to approaches based on the Kullback-Leibler
divergence, this yields a tighter lower bound on the marginal likelihood,
together with a corresponding upper bound that KL-based approaches do not
provide. We propose a novel gradient-based algorithm for the variational
problem based on Fréchet
derivative operators motivated by the geometry of the Hilbert sphere, and
examine its properties. Through simulations and real-data applications, we
demonstrate the utility of the proposed geometric framework and algorithm on
several Bayesian models.
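As a concrete illustration of the square-root geometry: under the map p -> sqrt(p), densities sit on the unit Hilbert sphere, and the Fisher-Rao distance between two densities reduces (up to a convention-dependent constant) to the arc length arccos(<sqrt(p), sqrt(q)>). The following NumPy sketch computes this on a discretized grid; it is not the authors' code, and the grid setup and function names are illustrative assumptions.

```python
import numpy as np

def fisher_rao_distance(p, q, dx):
    """Fisher-Rao distance between two densities on a uniform grid.

    Under the square-root representation psi = sqrt(p), densities lie on
    the unit sphere in L2, and the geodesic distance is the arc length
    arccos(<psi_p, psi_q>_{L2}).
    """
    psi_p, psi_q = np.sqrt(p), np.sqrt(q)
    inner = np.sum(psi_p * psi_q) * dx   # L2 inner product on the grid
    inner = np.clip(inner, -1.0, 1.0)    # guard against round-off
    return np.arccos(inner)

# Example: two unit-variance Gaussians discretized on [-10, 10].
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
norm = lambda mu, s: np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
print(fisher_rao_distance(norm(0, 1), norm(1, 1), dx))
```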
Reparameterizing the Birkhoff Polytope for Variational Permutation Inference
Many matching, tracking, sorting, and ranking problems require probabilistic
reasoning about possible permutations, a set that grows factorially with
dimension. Combinatorial optimization algorithms may enable efficient point
estimation, but fully Bayesian inference poses a severe challenge in this
high-dimensional, discrete space. To surmount this challenge, we start with the
usual step of relaxing a discrete set (here, of permutation matrices) to its
convex hull, which here is the Birkhoff polytope: the set of all
doubly-stochastic matrices. We then introduce two novel transformations: first,
an invertible and differentiable stick-breaking procedure that maps
unconstrained space to the Birkhoff polytope; second, a map that rounds points
toward the vertices of the polytope. Both transformations include a temperature
parameter that, in the limit, concentrates the densities on permutation
matrices. We then exploit these transformations and reparameterization
gradients to introduce variational inference over permutation matrices, and we
demonstrate its utility in a series of experiments.
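Below is a minimal NumPy sketch of a stick-breaking map of the kind described: each unconstrained entry is squashed into the interval permitted by the remaining row and column mass, so the output is doubly stochastic by construction. The sigmoid squashing and all names are illustrative assumptions, not the paper's exact parameterization, which additionally carries the temperature parameter and the density corrections needed for variational inference.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stick_break_birkhoff(Z):
    """Map an (N-1)x(N-1) array of unconstrained reals to an N x N
    doubly-stochastic matrix by sequential stick-breaking."""
    n = Z.shape[0] + 1
    B = np.zeros((n, n))
    row_rem = np.ones(n)   # mass left in each row
    col_rem = np.ones(n)   # mass left in each column
    for i in range(n - 1):
        for j in range(n - 1):
            # Upper bound: cannot exceed what's left in this row or column.
            ub = min(row_rem[i], col_rem[j])
            # Lower bound: the rest of row i must fit in the remaining columns.
            lb = max(0.0, row_rem[i] - col_rem[j + 1:].sum())
            B[i, j] = lb + (ub - lb) * sigmoid(Z[i, j])
            row_rem[i] -= B[i, j]
            col_rem[j] -= B[i, j]
        B[i, n - 1] = row_rem[i]        # last column absorbs the row remainder
        col_rem[n - 1] -= B[i, n - 1]
        row_rem[i] = 0.0
    B[n - 1, :] = col_rem               # last row absorbs the column remainders
    return B

B = stick_break_birkhoff(np.random.randn(3, 3))
print(B.sum(axis=0), B.sum(axis=1))     # both ~ [1, 1, 1, 1]
```

Because every step is differentiable in Z, reparameterization gradients can flow through the map, which is what makes variational inference over the polytope tractable.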
HyperVAE: A Minimum Description Length Variational Hyper-Encoding Network
We propose a framework called HyperVAE for encoding distributions of
distributions. When a target distribution is modeled by a VAE, its neural
network parameters \theta are drawn from a distribution p(\theta), which is in
turn modeled by a hyper-level VAE. We propose a variational inference scheme
using a Gaussian mixture model to implicitly encode the parameters \theta into
a low-dimensional Gaussian distribution. Given a target distribution, we
predict the
posterior distribution of the latent code, then use a matrix-network decoder to
generate a posterior distribution q(\theta). In contrast to common
hyper-network practice, which generates only scale and bias vectors as
target-network parameters, HyperVAE can encode the parameters \theta in full.
HyperVAE thus
preserves much more information about the model for each task in the latent
space. We discuss HyperVAE using the minimum description length (MDL) principle
and show that it helps HyperVAE to generalize. We evaluate HyperVAE on density
estimation, outlier detection, and the discovery of novel design classes,
demonstrating its efficacy.
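To make the contrast with scale-and-bias hyper-networks concrete, the sketch below decodes a low-dimensional latent code into the full weight matrix and bias of a small target layer through a matrix-valued decoder. All shapes, names, and the bilinear decoder form are assumptions for illustration, not the HyperVAE architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target network: a single dense layer with weights W (d_out x d_in) and bias b.
d_in, d_out, d_z, k = 8, 4, 3, 5

# A matrix-style decoder maps a latent code z to ALL target parameters
# (not just scale/bias vectors):
#   W(z) = U @ diag(A z + c) @ V,    b(z) = B z + d
# U, V, A, c, B, d are fixed decoder parameters; shapes are assumptions.
U = rng.normal(size=(d_out, k))
V = rng.normal(size=(k, d_in))
A = rng.normal(size=(k, d_z)); c = rng.normal(size=k)
B = rng.normal(size=(d_out, d_z)); d = rng.normal(size=d_out)

def decode_target_params(z):
    """Generate the full target-network parameters from a latent code z."""
    W = U @ np.diag(A @ z + c) @ V
    b = B @ z + d
    return W, b

def target_forward(x, z):
    W, b = decode_target_params(z)
    return np.tanh(W @ x + b)

# Sampling z from an (approximate) posterior yields a distribution over
# complete target networks, i.e. a distribution over distributions.
z = rng.normal(size=d_z)
print(target_forward(rng.normal(size=d_in), z).shape)  # (4,)
```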