Matching aggregate posteriors in the variational autoencoder
The variational autoencoder (VAE) is a well-studied, deep, latent-variable
model (DLVM) that efficiently optimizes the variational lower bound of the log
marginal data likelihood and has a strong theoretical foundation. However, the
VAE's known failure to match the aggregate posterior often results in
\emph{pockets/holes} in the latent distribution (i.e., a failure to match the
prior) and/or \emph{posterior collapse}, which is associated with a loss of
information in the latent space. This paper addresses these shortcomings in
VAEs by reformulating the objective function associated with VAEs in order to
match the aggregate/marginal posterior distribution to the prior. We use a
kernel density estimate (KDE) to model the aggregate posterior in high dimensions. The
proposed method is named the \emph{aggregate variational autoencoder} (AVAE)
and is built on the theoretical framework of the VAE. Empirical evaluation of
the proposed method on multiple benchmark data sets demonstrates the
effectiveness of the AVAE relative to state-of-the-art (SOTA) methods.
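The idea of matching the aggregate posterior to the prior can be illustrated with a small sketch. This is not the authors' exact AVAE objective; it is a hedged, NumPy-only illustration in which a Gaussian KDE models the aggregate posterior over a batch of latent codes, and a Monte Carlo estimate of KL(q_agg || p) against a standard normal prior serves as a hypothetical mismatch penalty (the function names and the bandwidth value are assumptions):

```python
import numpy as np

def logsumexp(a, axis):
    # numerically stable log-sum-exp along `axis`
    m = np.max(a, axis=axis, keepdims=True)
    return (m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))).squeeze(axis)

def gaussian_kde_log_density(z, samples, bandwidth=0.5):
    # log of an isotropic Gaussian KDE built from `samples`, evaluated at rows of `z`
    n, d = samples.shape
    diffs = z[:, None, :] - samples[None, :, :]            # (m, n, d)
    sq = np.sum(diffs ** 2, axis=-1) / (2 * bandwidth ** 2)
    log_norm = d * np.log(bandwidth * np.sqrt(2 * np.pi))
    return -np.log(n) - log_norm + logsumexp(-sq, axis=1)

def aggregate_prior_penalty(latents, bandwidth=0.5):
    # Monte Carlo estimate of KL(q_agg || p): mean of log q_KDE(z) - log p(z)
    # over the latent codes themselves. Roughly zero when the aggregate
    # posterior matches the standard normal prior, large when it drifts away.
    # (Each point is included in its own KDE, so the estimate is biased up;
    # a leave-one-out KDE would reduce that bias.)
    log_q = gaussian_kde_log_density(latents, latents, bandwidth)
    log_p = (-0.5 * np.sum(latents ** 2, axis=1)
             - 0.5 * latents.shape[1] * np.log(2 * np.pi))
    return float(np.mean(log_q - log_p))
```

In this sketch, a penalty of this form would be added to a reconstruction loss so that "pockets/holes" in the latent distribution (regions where q_agg places little mass relative to the prior) are explicitly discouraged.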
Variational Autoencoders with Riemannian Brownian Motion Priors
Variational Autoencoders (VAEs) represent the given data in a low-dimensional
latent space, which is generally assumed to be Euclidean. This assumption
naturally leads to the common choice of a standard Gaussian prior over
continuous latent variables. Recent work has, however, shown that this prior
has a detrimental effect on model capacity, leading to subpar performance. We
propose that the Euclidean assumption lies at the heart of this failure mode.
To counter this, we assume a Riemannian structure over the latent space, which
constitutes a more principled geometric view of the latent codes, and replace
the standard Gaussian prior with a Riemannian Brownian motion prior. We propose
an efficient inference scheme that does not rely on the unknown normalizing
factor of this prior. Finally, we demonstrate that this prior significantly
increases model capacity using only one additional scalar parameter.
Comment: Published in ICML 2020
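For intuition, the density of a Riemannian Brownian motion at small diffusion time is often approximated by a heat-kernel expression in which the Euclidean distance of a Gaussian is replaced by geodesic distance. The formula below is a standard short-time approximation, not the paper's exact construction; here $d_{\mathcal{M}}$ denotes geodesic distance on the latent manifold, $\mu$ the starting point, $t$ the diffusion time, and $d$ the latent dimension:

\[
p_t(z \mid \mu) \;\approx\; (2\pi t)^{-d/2} \exp\!\left(-\frac{d_{\mathcal{M}}(z,\mu)^2}{2t}\right).
\]

The intractable normalizing factor of such a prior on a curved manifold is precisely what motivates an inference scheme that avoids computing it.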
Amortized Bayesian Inference of GISAXS Data with Normalizing Flows
Grazing-Incidence Small-Angle X-ray Scattering (GISAXS) is a modern imaging
technique used in material research to study nanoscale materials.
Reconstruction of the parameters of an imaged object imposes an ill-posed
inverse problem that is further complicated when only an in-plane GISAXS signal
is available. Traditionally used inference algorithms such as Approximate
Bayesian Computation (ABC) rely on computationally expensive scattering
simulation software, rendering analysis highly time-consuming. We propose a
simulation-based framework that combines variational auto-encoders and
normalizing flows to estimate the posterior distribution of object parameters
given its GISAXS data. We apply the inference pipeline to experimental data and
demonstrate that our method reduces the inference cost by orders of magnitude
while producing results consistent with ABC.
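The core mechanics of a normalizing flow used for amortized posterior estimation can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's pipeline: a single invertible affine layer whose `scale` and `shift` would, in an amortized setup, be produced by a conditioner network from the (encoded) GISAXS signal; here they are plain arrays. The posterior log-density follows from the change-of-variables formula:

```python
import numpy as np

def flow_forward(u, scale, shift):
    # map base noise u ~ N(0, I) to a posterior sample z (one affine flow layer)
    return u * np.exp(scale) + shift

def flow_inverse(z, scale, shift):
    # exact inverse of flow_forward
    return (z - shift) * np.exp(-scale)

def log_posterior(z, scale, shift):
    # change of variables: log q(z | x) = log N(u; 0, I) - log|det dz/du|,
    # where the log-determinant of the affine layer is sum(scale)
    u = flow_inverse(z, scale, shift)
    log_base = -0.5 * (u ** 2 + np.log(2 * np.pi))
    return float(np.sum(log_base) - np.sum(scale))
```

Once trained, sampling the posterior for a new measurement reduces to drawing base noise and applying `flow_forward`, which is what makes the amortized approach orders of magnitude cheaper than rerunning a scattering simulator inside an ABC loop.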
- …