
Overdispersed Variational Autoencoders

The ability to fit complex generative probabilistic models to data is a key challenge in AI. Variational methods are currently popular, but remain difficult to train due to the high variance of the sampling methods they employ. We introduce the overdispersed variational autoencoder and the overdispersed importance weighted autoencoder, which combine overdispersed black-box variational inference with the variational autoencoder and the importance weighted autoencoder, respectively. We use the log-likelihood lower bounds and the reparametrisation trick from the variational and importance weighted autoencoders, but rather than drawing samples from the variational distribution itself, we use importance sampling to draw samples from an overdispersed (i.e. heavier-tailed) proposal in the same family as the variational distribution. We run experiments on two datasets and show that this technique produces lower-variance gradient estimates and reaches a higher bound on the log likelihood of the observed data.
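
To make the sampling scheme concrete, the following is a minimal PyTorch sketch of an overdispersed importance-weighted bound. It rests on assumptions not stated in the abstract: a Gaussian variational posterior, a Bernoulli decoder, a fixed dispersion factor `tau`, and `K` importance samples; network sizes and hyperparameters are placeholders. It illustrates the idea, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Illustrative Gaussian-latent VAE with a Bernoulli decoder (sizes are placeholders)."""
    def __init__(self, x_dim=784, z_dim=20, h_dim=200):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.Tanh())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_log_sigma = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, x_dim))

    def overdispersed_bound(self, x, K=5, tau=1.5):
        h = self.enc(x)
        mu = self.enc_mu(h)
        sigma = self.enc_log_sigma(h).exp()

        # Reparametrised samples drawn from the overdispersed proposal
        # r(z|x) = N(mu, tau * sigma^2), not from q(z|x) = N(mu, sigma^2).
        eps = torch.randn(K, *mu.shape)
        z = mu + math.sqrt(tau) * sigma * eps

        q = torch.distributions.Normal(mu, sigma)
        r = torch.distributions.Normal(mu, math.sqrt(tau) * sigma)
        prior = torch.distributions.Normal(0.0, 1.0)

        log_q = q.log_prob(z).sum(-1)      # log q(z|x)
        log_r = r.log_prob(z).sum(-1)      # log r(z|x)
        log_p = prior.log_prob(z).sum(-1)  # log p(z)

        logits = self.dec(z)               # Bernoulli decoder p(x|z)
        log_lik = -F.binary_cross_entropy_with_logits(
            logits, x.expand_as(logits), reduction="none").sum(-1)

        # Standard importance-weighted log-weights, plus the (log q - log r)
        # correction for sampling from the overdispersed proposal r instead of q.
        log_w = (log_lik + log_p - log_q) + (log_q - log_r)
        return (torch.logsumexp(log_w, dim=0) - math.log(K)).mean()

model = TinyVAE()
x = torch.rand(8, 784).round()            # toy batch of binary vectors
loss = -model.overdispersed_bound(x)      # maximise the bound
loss.backward()
```

Setting `tau = 1` recovers the ordinary importance weighted bound; here the dispersion is fixed for simplicity, though in practice it could be tuned or adapted during training.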