Hierarchical Implicit Models and Likelihood-Free Variational Inference
Implicit probabilistic models are a flexible class of models defined by a
simulation process for data. They form the basis for theories which encompass
our understanding of the physical world. Despite this fundamental nature, the
use of implicit models remains limited due to challenges in specifying complex
latent structure in them, and in performing inferences in such models with
large data sets. In this paper, we first introduce hierarchical implicit models
(HIMs). HIMs combine the idea of implicit densities with hierarchical Bayesian
modeling, thereby defining models via simulators of data with rich hidden
structure. Next, we develop likelihood-free variational inference (LFVI), a
scalable variational inference algorithm for HIMs. Key to LFVI is specifying a
variational family that is also implicit. This matches the model's flexibility
and allows for accurate approximation of the posterior. We demonstrate diverse
applications: a large-scale physical simulator for predator-prey populations in
ecology; a Bayesian generative adversarial network for discrete data; and a
deep implicit model for text generation.
Comment: Appears in Neural Information Processing Systems, 2017
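To make the ratio-estimation idea behind LFVI concrete, below is a minimal sketch on a hypothetical one-dimensional simulator: a classifier is trained to tell samples of the model joint p(x|z)p(z) from samples of the variational joint q(z|x) paired with data, and its logit stands in for the intractable log-ratio in the ELBO. The architecture, sizes, and toy simulator are illustrative assumptions, not the paper's setup, which additionally handles local and global latent structure and data subsampling.

    # Sketch of density-ratio estimation for likelihood-free variational
    # inference on a hypothetical 1-D simulator. All names, sizes, and the
    # simulator itself are illustrative assumptions, not the paper's setup.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def simulator(z):
        # Implicit model: data are defined only through this sampling
        # process; no tractable density p(x | z) is available.
        return torch.tanh(z) + 0.1 * torch.randn_like(z)

    class ImplicitEncoder(nn.Module):
        # Implicit variational family: q(z | x) is itself a sampler.
        def __init__(self, dim=1):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * dim, 32), nn.ReLU(),
                                     nn.Linear(32, dim))
        def forward(self, x):
            eps = torch.randn_like(x)              # injected noise source
            return self.net(torch.cat([x, eps], dim=-1))

    # Ratio estimator: its logit approximates the intractable term
    # log p(x, z) - log q(x, z) appearing in the ELBO.
    disc = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
    enc = ImplicitEncoder()
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    opt_q = torch.optim.Adam(enc.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    x_data = simulator(torch.randn(512, 1))        # stand-in for observations

    for step in range(2000):
        z_p = torch.randn(128, 1)                  # z ~ p(z)
        x_p = simulator(z_p)                       # (x, z) ~ model joint
        x_q = x_data[torch.randint(0, 512, (128,))]
        z_q = enc(x_q)                             # (x, z) ~ variational joint
        # 1) Train the classifier to separate the two joints.
        logit_p = disc(torch.cat([x_p, z_p], -1))
        logit_q = disc(torch.cat([x_q, z_q.detach()], -1))
        loss_d = (bce(logit_p, torch.ones_like(logit_p)) +
                  bce(logit_q, torch.zeros_like(logit_q)))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # 2) Update q to raise the estimated log-ratio (surrogate ELBO).
        loss_q = -disc(torch.cat([x_q, enc(x_q)], -1)).mean()
        opt_q.zero_grad(); loss_q.backward(); opt_q.step()

Because both the model and the variational family are samplers, the only quantity the updates ever need is this estimated ratio, never an explicit density.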
Grammar Variational Autoencoder
Deep generative models have been wildly successful at learning coherent
latent representations for continuous data such as video and audio. However,
generative modeling of discrete data such as arithmetic expressions and
molecular structures still poses significant challenges. Crucially,
state-of-the-art methods often produce outputs that are not valid. We make the
key observation that frequently, discrete data can be represented as a parse
tree from a context-free grammar. We propose a variational autoencoder which
encodes and decodes directly to and from these parse trees, ensuring the
generated outputs are always valid. Surprisingly, we show that not only does
our model more often generate valid outputs, it also learns a more coherent
latent space in which nearby points decode to similar discrete outputs. We
demonstrate the effectiveness of our learned models by showing their improved
performance in Bayesian optimization for symbolic regression and molecular
synthesis.
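The mechanism that guarantees validity is a masked, stack-based decoder over production rules: the decoder scores every rule, but only rules whose left-hand side matches the non-terminal on top of a last-in-first-out stack can be sampled. A minimal sketch of that decoding step, assuming a toy arithmetic grammar and a random stand-in for the trained decoder (both illustrative, not the paper's):

    # Sketch of grammar-masked decoding: sampling rule by rule under a
    # stack of non-terminals so every finished output is syntactically
    # valid. The toy arithmetic grammar and the random "decoder" below
    # are illustrative stand-ins, not the paper's trained model.
    import numpy as np

    rng = np.random.default_rng(0)

    # Productions of a small context-free grammar: (LHS, RHS symbols).
    RULES = [
        ("S", ["S", "+", "T"]),
        ("S", ["T"]),
        ("T", ["(", "S", ")"]),
        ("T", ["x"]),
        ("T", ["1"]),
    ]
    NONTERMINALS = {"S", "T"}

    def decode(logit_fn, max_steps=50):
        # Only rules whose left-hand side matches the non-terminal on
        # top of the stack may be sampled at each step.
        stack, out = ["S"], []
        for t in range(max_steps):
            if not stack:
                break                         # derivation complete
            top = stack.pop()
            if top not in NONTERMINALS:
                out.append(top)               # terminal symbol: emit it
                continue
            logits = logit_fn(t)              # decoder scores over rules
            mask = np.array([lhs == top for lhs, _ in RULES])
            masked = np.where(mask, logits, -np.inf)
            probs = np.exp(masked - masked.max())
            probs /= probs.sum()
            rule = rng.choice(len(RULES), p=probs)
            stack.extend(reversed(RULES[rule][1]))  # pops left-to-right
        return "".join(out)

    # A trained decoder would supply real logits; random scores suffice
    # to show that every completed sample parses under the grammar.
    print(decode(lambda t: rng.normal(size=len(RULES))))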
Grammar induction for mildly context sensitive languages using variational Bayesian inference
This technical report presents a formal approach to probabilistic
minimalist grammar induction. We describe a formalization of a minimalist
grammar and, based on it, define a generative model for minimalist
derivations. We then present a generalized algorithm for applying
variational Bayesian inference to lexicalized, mildly context-sensitive
grammars, which we apply here to the previously defined minimalist
grammar.
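The core computational step in this kind of algorithm is the mean-field coordinate-ascent update for a Dirichlet posterior over rule probabilities, folding expected rule counts from an inside-outside-style E-step into digamma-weighted sub-probabilities. A minimal sketch of that generic update (the rule inventory and counts are illustrative; the report's formulation is specific to minimalist derivations):

    # Sketch of the generic mean-field variational-Bayes update for
    # Dirichlet-distributed grammar rule probabilities. The rule
    # inventory and counts are illustrative; the report specializes
    # this kind of update to minimalist-grammar derivations.
    from collections import defaultdict
    import numpy as np
    from scipy.special import digamma

    def vb_rule_update(expected_counts, alpha=1.0):
        # expected_counts maps (lhs, rhs) -> E_q[c_r], the expected
        # number of uses of each rule in the corpus derivations
        # (obtained from an inside-outside-style E-step).
        totals = defaultdict(float)
        for (lhs, _), c in expected_counts.items():
            totals[lhs] += alpha + c
        weights = {}
        for (lhs, rhs), c in expected_counts.items():
            # pi_r = exp(psi(alpha + c_r) - psi(sum over same-LHS rules)):
            # sub-normalized weights used in place of probabilities
            # during the next E-step.
            weights[(lhs, rhs)] = np.exp(digamma(alpha + c)
                                         - digamma(totals[lhs]))
        return weights

    # Toy expected counts for two S-rules and one T-rule.
    counts = {("S", ("S", "T")): 3.2, ("S", ("T",)): 6.8, ("T", ("x",)): 10.0}
    print(vb_rule_update(counts))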