VAE with a VampPrior
Many different methods to train deep generative models have been introduced
in the past. In this paper, we propose to extend the variational auto-encoder
(VAE) framework with a new type of prior which we call "Variational Mixture of
Posteriors" prior, or VampPrior for short. The VampPrior consists of a mixture
distribution (e.g., a mixture of Gaussians) with components given by
variational posteriors conditioned on learnable pseudo-inputs. We further
extend this prior to a two-layer hierarchical model and show that this
architecture, with a coupled prior and posterior, learns significantly better
models. The model also avoids the usual local-optima issues, related to useless
latent dimensions, that plague VAEs. We provide empirical studies on six
datasets, namely, static and dynamic MNIST, OMNIGLOT, Caltech 101 Silhouettes,
Frey Faces and Histopathology patches, and show that applying the hierarchical
VampPrior delivers state-of-the-art results on all datasets in the unsupervised
permutation-invariant setting, and results that are the best or comparable to
SOTA methods when convolutional networks are used.
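To make the construction concrete, here is a minimal PyTorch sketch of evaluating the prior density under a VampPrior. This is not the authors' code: it assumes an encoder that returns the mean and log-variance of a diagonal Gaussian posterior, and vampprior_log_prob and pseudo_inputs are illustrative names.

```python
import math
import torch

def vampprior_log_prob(z, pseudo_inputs, encoder):
    """log p(z) under a VampPrior: a uniform mixture of variational
    posteriors q(z | u_k) evaluated at K learnable pseudo-inputs u_k."""
    mu, logvar = encoder(pseudo_inputs)                # each (K, D)
    z = z.unsqueeze(1)                                 # (B, 1, D)
    mu, logvar = mu.unsqueeze(0), logvar.unsqueeze(0)  # (1, K, D)
    # Diagonal-Gaussian log-density of each z under each component.
    log_comp = -0.5 * (logvar + (z - mu) ** 2 / logvar.exp()
                       + math.log(2 * math.pi)).sum(-1)  # (B, K)
    K = pseudo_inputs.shape[0]
    return torch.logsumexp(log_comp, dim=1) - math.log(K)
```

In this sketch the pseudo-inputs would be a torch.nn.Parameter of shape (K, *input_shape), trained jointly with the rest of the model, so the prior and the posterior share the encoder's parameters.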
Self-Supervised Variational Auto-Encoders
Density estimation, compression and data generation are crucial tasks in
artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single
framework to achieve these goals. Here, we present a novel class of generative
models, called the self-supervised Variational Auto-Encoder (selfVAE), that
utilizes deterministic and discrete variational posteriors. This class of
models makes it possible to perform both conditional and unconditional
sampling while simplifying the objective function. First, we use a single
self-supervised
transformation as a latent variable, where a transformation is either
downscaling or edge detection. Next, we consider a hierarchical architecture,
i.e., multiple transformations, and we show its benefits compared to the VAE.
The flexibility of the selfVAE in data reconstruction finds a particularly
interesting use case in data compression, where we can trade off memory
for better data quality, and vice versa. We present the performance of our
approach on three benchmark image datasets (CIFAR-10, Imagenette64, and CelebA).
Differential Evolution with Reversible Linear Transformations
Differential evolution (DE) is a well-known type of evolutionary algorithm (EA). Like other EA variants, it can suffer from small populations and lose diversity too quickly. This paper presents a new approach to mitigate this issue: we propose to generate new candidate solutions by applying reversible linear transformations to a triplet of solutions from the population. In other words, the population is enlarged with newly generated individuals without evaluating their fitness. We assess our method on three problems: (i) benchmark function optimization, (ii) discovering parameter values of the gene repressilator system, and (iii) learning neural networks. The empirical results indicate that the proposed approach outperforms vanilla DE, and a version of DE that applies differential mutation three times, on all testbeds.
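A minimal NumPy sketch of the idea follows. The chained form below, which makes the triplet-to-triplet map linear and invertible, is one concrete realization consistent with the abstract; the names reversible_triplet and enlarge and the scaling factor F=0.5 are illustrative, not taken from the paper's code.

```python
import numpy as np

def reversible_triplet(x1, x2, x3, F=0.5):
    """Map a triplet of solutions to three new candidates via chained
    differential mutations; the overall map is linear and invertible,
    and no fitness evaluations are needed to generate the candidates."""
    y1 = x1 + F * (x2 - x3)
    y2 = x2 + F * (x3 - y1)
    y3 = x3 + F * (y1 - y2)
    return y1, y2, y3

def enlarge(pop, rng, F=0.5):
    """Enlarge an (N, D) population with the images of a random triplet."""
    i, j, k = rng.choice(len(pop), size=3, replace=False)
    return np.vstack([pop, *reversible_triplet(pop[i], pop[j], pop[k], F)])
```

Because generating the three candidates costs no objective-function calls, the selection step can later pick the best individuals from the enlarged pool at the usual evaluation budget.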