Resampled Priors for Variational Autoencoders
We propose Learned Accept/Reject Sampling (LARS), a method for constructing
richer priors using rejection sampling with a learned acceptance function. This
work is motivated by recent analyses of the VAE objective, which pointed out
that commonly used simple priors can lead to underfitting. As the distribution
induced by LARS involves an intractable normalizing constant, we show how to
estimate it and its gradients efficiently. We demonstrate that LARS priors
improve VAE performance on several standard datasets both when they are learned
jointly with the rest of the model and when they are fitted to a pretrained
model. Finally, we show that LARS can be combined with existing methods for
defining flexible priors for an additional boost in performance.
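A minimal sketch of the accept/reject idea, assuming a standard Gaussian base
prior and a small network as the learned acceptance function; the names
(AcceptanceNet, lars_style_sample), the architecture, and the truncation
fallback are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AcceptanceNet(nn.Module):
    """Hypothetical learned acceptance function a(z) in [0, 1]."""
    def __init__(self, latent_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).squeeze(-1)

def lars_style_sample(accept_fn, latent_dim, n_samples, max_rounds=100):
    """Draw samples from a resampled standard-Gaussian prior.

    Proposals z ~ N(0, I) are kept with probability accept_fn(z); after
    max_rounds any remaining slots are filled with plain Gaussian draws,
    a simple truncation so sampling always terminates.
    """
    accepted = []
    for _ in range(max_rounds):
        z = torch.randn(n_samples, latent_dim)
        keep = torch.rand(n_samples) < accept_fn(z)
        accepted.append(z[keep])
        if sum(a.shape[0] for a in accepted) >= n_samples:
            break
    accepted.append(torch.randn(n_samples, latent_dim))  # fallback fill
    return torch.cat(accepted, dim=0)[:n_samples]

# Usage: 256 latent codes from the resampled prior of a 2-D latent space.
acceptance = AcceptanceNet(latent_dim=2)
with torch.no_grad():
    z = lars_style_sample(acceptance, latent_dim=2, n_samples=256)
```

Estimating the intractable normalizing constant of the induced density, which
is needed during training, is the part the paper addresses and is not shown
here.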
Variance Loss in Variational Autoencoders
In this article, we highlight what appears to be a major issue with Variational
Autoencoders, evidenced by extensive experimentation with different network
architectures and datasets: the variance of generated data is significantly
lower than that of the training data. Since generative models are usually
evaluated with metrics such as the Fréchet Inception Distance (FID), which
compare the
distributions of (features of) real versus generated images, the variance loss
typically results in degraded scores. This problem is particularly relevant in
a two-stage setting, where we use a second VAE to sample in the latent space of
the first VAE. The reduced variance creates a mismatch between the actual
distribution of latent variables and those generated by the second VAE, which
hinders the beneficial effects of the second stage. By renormalizing the output
of the second VAE towards the expected spherical normal distribution, we obtain
a marked improvement in the quality of generated samples, as also reflected in
the FID.
Comment: Article accepted at the Sixth International Conference on Machine
Learning, Optimization, and Data Science, July 19-23, 2020, Certosa di
Pontignano, Siena, Italy.
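A minimal sketch of the renormalisation step, assuming the second-stage samples
are meant to match a zero-mean, unit-variance spherical Gaussian; the
per-dimension standardisation below is an illustrative reading of the abstract,
not necessarily the authors' exact procedure.

```python
import numpy as np

def renormalize_latents(z, eps=1e-8):
    """Rescale second-stage VAE samples towards N(0, I).

    Centres each latent dimension and divides by its empirical standard
    deviation, so the batch statistics match the spherical Gaussian that
    the first-stage decoder expects.
    """
    mean = z.mean(axis=0, keepdims=True)
    std = z.std(axis=0, keepdims=True)
    return (z - mean) / (std + eps)

# Example: hypothetical second-stage samples with visibly shrunken variance.
rng = np.random.default_rng(0)
z_second_stage = 0.7 * rng.standard_normal((1024, 32)) + 0.1
z_fixed = renormalize_latents(z_second_stage)
print(z_fixed.mean(), z_fixed.std())  # close to 0 and 1
```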
Advances in Probabilistic Modelling: Sparse Gaussian Processes, Autoencoders, and Few-shot Learning
Learning is the ability to generalise beyond training examples; but because many generalisations are consistent with a given set of observations, all machine learning methods rely on inductive biases to select certain generalisations over others. This thesis explores how the model structure
and priors affect the inductive biases of probabilistic models, and our ability
to learn and make inferences from data.
Specifically, we present theoretical analyses alongside algorithmic and modelling advances in three areas of probabilistic machine learning: sparse Gaussian process approximations and invariant covariance functions, learning flexible priors for variational autoencoders, and probabilistic approaches for few-shot learning. As inference is rarely tractable, we discuss variational inference methods as a secondary theme.
First, we disentangle the theoretical properties and optimisation behaviour
of two widely used sparse Gaussian process approximations. We conclude that a variational free energy approximation is more principled and extensible and should be used in practice despite
potential optimisation difficulties. We then discuss how general symmetries and invariances can be integrated into Gaussian process priors and can be learned using the marginal likelihood. To make inference tractable, we develop a variational inference scheme that uses unbiased estimates of intractable covariance functions.
We then address the mismatch between aggregate posteriors and priors in variational autoencoders and propose a mechanism to define flexible distributions using a form of rejection sampling. We use this approach to define a more flexible prior distribution on the latent space of a variational autoencoder, which generalises to unseen test data and reduces the number of low-quality samples from the model in a practical way.
Finally, we propose two probabilistic approaches to few-shot learning that achieve state-of-the-art results on benchmarks, building on multi-task probabilistic models with adaptive classifier heads. Our first approach combines a pre-trained deep feature extractor with a simple probabilistic
model for the head, and can be linked to automatically regularised softmax regression. The second employs an amortised head model; it can be viewed as meta-learning probabilistic inference for prediction, and can be generalised to other contexts such as few-shot regression.
UK Engineering and Physical Sciences Research Council (EPSRC) DTA, Qualcomm Studentship in Technology, Max Planck Society.
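As a rough analogue of the first few-shot approach (a frozen feature extractor
with a simple probabilistic head), the sketch below fits a MAP
softmax-regression head, i.e. multinomial logistic regression whose L2 penalty
corresponds to a Gaussian prior on the weights; the episode sizes and random
"features" are made up for illustration, and this is not the thesis' actual
model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are frozen deep features for a 5-way, 5-shot episode.
n_way, n_shot, feat_dim = 5, 5, 64
support_x = rng.standard_normal((n_way * n_shot, feat_dim))
support_y = np.repeat(np.arange(n_way), n_shot)

# MAP softmax regression: the L2 penalty (strength 1/C) plays the role of
# a zero-mean Gaussian prior on the head weights.
head = LogisticRegression(C=1.0, max_iter=1000)
head.fit(support_x, support_y)

# Classify hypothetical query features.
query_x = rng.standard_normal((10, feat_dim))
print(head.predict(query_x))
```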
Accelerated Parallel Non-conjugate Sampling for Bayesian Non-parametric Models
Inference of latent feature models in the Bayesian nonparametric setting is
generally difficult, especially in high dimensional settings, because it
usually requires proposing features from some prior distribution. In special
cases, where the integration is tractable, we could sample new feature
assignments according to a predictive likelihood. However, this still may not
be efficient in high dimensions. We present a novel method to accelerate the
mixing of latent variable model inference by proposing feature locations from
the data, as opposed to the prior. First, we introduce our accelerated feature
proposal mechanism and show that it is a valid Bayesian inference algorithm;
next, we propose an approximate inference strategy to perform accelerated
inference in parallel. This sampling method mixes the Markov chain Monte Carlo
sampler efficiently, is computationally attractive, and is theoretically
guaranteed to converge to the posterior distribution as its limiting
distribution.
Comment: Previously known as "Accelerated Inference for Latent Variable
Models".
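The core idea (proposing feature locations near observed data points rather
than from the prior) can be illustrated with a toy independence
Metropolis-Hastings step; the isotropic Gaussian prior and likelihood below,
and all function names, are assumptions for illustration rather than the
authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(a, sigma_a=1.0):
    # Isotropic Gaussian prior over a feature location.
    return -0.5 * np.sum(a ** 2) / sigma_a ** 2

def log_lik(X, a, sigma_x=0.5):
    # Toy likelihood: each datum is the feature location plus Gaussian noise.
    return -0.5 * np.sum((X - a) ** 2) / sigma_x ** 2

def propose_from_data(X, scale=0.1):
    # Data-driven proposal: jitter a randomly chosen data point.
    x = X[rng.integers(len(X))]
    return x + scale * rng.standard_normal(x.shape)

def log_proposal_density(a, X, scale=0.1):
    # Density of the proposal: a mixture of Gaussians centred at the data.
    d = X.shape[1]
    sq = np.sum((a - X) ** 2, axis=1) / scale ** 2
    log_comp = -0.5 * sq - 0.5 * d * np.log(2 * np.pi * scale ** 2)
    return np.logaddexp.reduce(log_comp) - np.log(len(X))

def mh_step(X, a_curr):
    # One Metropolis-Hastings update with the data-driven proposal; the
    # acceptance ratio corrects for proposing from the data, so the chain
    # still targets the posterior.
    a_prop = propose_from_data(X)
    log_ratio = (log_prior(a_prop) + log_lik(X, a_prop)
                 - log_prior(a_curr) - log_lik(X, a_curr)
                 + log_proposal_density(a_curr, X)
                 - log_proposal_density(a_prop, X))
    return a_prop if np.log(rng.random()) < log_ratio else a_curr

# Toy run: the data sit far from the prior mean, where prior proposals would
# be accepted only rarely.
X = 0.3 * rng.standard_normal((200, 2)) + np.array([5.0, -3.0])
a = np.zeros(2)
for _ in range(100):
    a = mh_step(X, a)
print(a)  # ends up near the data cluster
```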
Variational Autoencoders with Riemannian Brownian Motion Priors
Variational Autoencoders (VAEs) represent the given data in a low-dimensional
latent space, which is generally assumed to be Euclidean. This assumption
naturally leads to the common choice of a standard Gaussian prior over
continuous latent variables. Recent work has, however, shown that this prior
has a detrimental effect on model capacity, leading to subpar performance. We
propose that the Euclidean assumption lies at the heart of this failure mode.
To counter this, we assume a Riemannian structure over the latent space, which
constitutes a more principled geometric view of the latent codes, and replace
the standard Gaussian prior with a Riemannian Brownian motion prior. We propose
an efficient inference scheme that does not rely on the unknown normalizing
factor of this prior. Finally, we demonstrate that this prior significantly
increases model capacity using only one additional scalar parameter.
Comment: Published in ICML 2020.
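As a loose illustration only, the following sketch simulates a discretised
random walk whose step covariance is the inverse of a hypothetical latent-space
metric G(z); it omits the curvature-dependent drift term of a true Riemannian
Brownian motion and is not the paper's inference scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def metric(z):
    # Hypothetical 2x2 positive-definite latent-space metric G(z).
    return np.eye(2) * (1.0 + np.sum(z ** 2))

def brownian_walk(z0, n_steps=500, dt=1e-2):
    """Crude Euler-style random walk approximating Brownian motion under G.

    Each increment is Gaussian with covariance dt * G(z)^{-1}, so the walk
    takes smaller steps in regions where the metric makes distances large.
    """
    z = np.array(z0, dtype=float)
    path = [z.copy()]
    for _ in range(n_steps):
        cov = dt * np.linalg.inv(metric(z))
        z = z + rng.multivariate_normal(np.zeros_like(z), cov)
        path.append(z.copy())
    return np.stack(path)

path = brownian_walk([0.0, 0.0])
print(path.shape)  # (501, 2)
```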