
    Variational auto-encoded deep Gaussian processes

    We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters, which would otherwise grow linearly with the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables us to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges, including deep unsupervised learning and deep Bayesian optimization.
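
    The key mechanism here, amortized inference, is easy to sketch: instead of learning a separate set of variational parameters for each training point, a shared multilayer perceptron maps every input to its posterior parameters, so the number of learned parameters stays fixed as the dataset grows. Below is a minimal PyTorch sketch of that idea; the class name, layer sizes, and the choice of a diagonal Gaussian posterior are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class RecognitionMLP(nn.Module):
    """Minimal sketch of an amortized recognition model (hypothetical names).

    Rather than storing one (mean, log-variance) pair per data point, a
    shared MLP predicts them from the input, so the variational parameter
    count does not grow with the sample size.
    """

    def __init__(self, input_dim, latent_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2 * latent_dim),  # mean and log-variance
        )

    def forward(self, x):
        mu, log_var = self.net(x).chunk(2, dim=-1)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return z, mu, log_var
```

    Because the MLP is shared across all inputs, each minibatch term of the lower bound can be evaluated independently, which is what makes the distributed computation described in the abstract possible.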

    DeepCoder: Semi-parametric Variational Autoencoders for Automatic Facial Action Coding

    The human face exhibits an inherent hierarchy in its representations (i.e., holistic facial expressions can be encoded via a set of facial action units (AUs) and their intensities). Variational (deep) auto-encoders (VAEs) have shown great results in unsupervised extraction of hierarchical latent representations from large amounts of image data, while being robust to noise and other undesired artifacts. Potentially, this makes VAEs a suitable approach for learning facial features for AU intensity estimation. Yet, most existing VAE-based methods apply classifiers learned separately from the encoded features. By contrast, non-parametric (probabilistic) approaches, such as Gaussian processes (GPs), typically outperform their parametric counterparts, but cannot easily deal with large amounts of data. To this end, we propose a novel semi-parametric VAE modeling framework, named DeepCoder, which combines the modeling power of parametric (convolutional) and non-parametric (ordinal GP) VAEs for joint learning of (1) latent representations at multiple levels in a task hierarchy, and (2) classification of multiple ordinal outputs. We show on benchmark datasets for AU intensity estimation that the proposed DeepCoder outperforms state-of-the-art approaches, as well as related VAEs and deep learning models.
    Comment: accepted at ICCV 2017
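
    To make the two-level structure concrete, the sketch below pairs a convolutional encoder (the parametric level, producing a holistic latent) with a per-AU ordinal head standing in for the non-parametric level. Note the substitution: DeepCoder's second stage is an ordinal Gaussian process, while this example uses a simple cumulative-threshold ordinal head so it stays self-contained; all class names, layer sizes, and AU/level counts are assumptions.

```python
import torch
import torch.nn as nn

class SemiParametricCoder(nn.Module):
    """Sketch only: conv encoder -> holistic latent -> ordinal AU intensities.

    The ordinal-GP stage is replaced here by a cumulative-threshold
    (ordinal regression) head; all dimensions are illustrative.
    """

    def __init__(self, latent_dim=64, num_aus=5, num_levels=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(latent_dim),  # holistic latent representation
        )
        self.score = nn.Linear(latent_dim, num_aus)  # one scalar score per AU
        # num_levels - 1 ordered cutpoints per AU (cumulative-link model)
        self.cutpoints = nn.Parameter(
            torch.arange(num_levels - 1).float().repeat(num_aus, 1)
        )

    def forward(self, images):
        z = self.encoder(images)            # (batch, latent_dim)
        s = self.score(z).unsqueeze(-1)     # (batch, num_aus, 1)
        # P(intensity <= k) = sigmoid(cutpoint_k - score); a higher score
        # pushes probability mass toward higher intensity levels.
        cum_probs = torch.sigmoid(self.cutpoints - s)
        return z, cum_probs                 # latent + ordinal outputs
```

    A full VAE would add a decoder that reconstructs the face image from z and train the reconstruction and ordinal losses jointly; that part is omitted here for brevity.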

    Event Generation and Statistical Sampling for Physics with Deep Generative Models and a Density Information Buffer

    We present a study of the generation of events from a physical process with deep generative models. The simulation of physical processes requires not only producing physical events but also ensuring that these events occur with the correct frequencies. We investigate the feasibility of learning both the event generation and the frequency of occurrence with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), to produce events like Monte Carlo generators do. We study three processes: a simple two-body decay, $e^+e^- \to Z \to l^+l^-$, and $pp \to t\bar{t}$ including the decay of the top quarks and a simulation of the detector response. We find that the tested GAN architectures and the standard VAE are not able to learn the distributions precisely. By buffering the density information of Monte Carlo events encoded with a VAE's encoder, we are able to construct a prior for sampling new events from the decoder that yields distributions in very good agreement with real Monte Carlo events, generated several orders of magnitude faster. Applications of this work include generic density estimation and sampling, targeted event generation via a principal component analysis of encoded ground-truth data, anomaly detection, and more efficient importance sampling, e.g. for the phase-space integration of matrix elements in quantum field theories.
    Comment: 24 pages, 10 figures
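
    The density-information-buffer idea itself is simple to illustrate: encode the ground-truth Monte Carlo events, keep the resulting latent codes as an empirical prior, and generate new events by resampling (optionally with a small Gaussian smear) from that buffer and decoding. The sketch below assumes a trained VAE whose encoder and decoder operate on NumPy arrays; the function name and the smear width are our assumptions, not values from the paper.

```python
import numpy as np

def sample_from_density_buffer(encoder, decoder, mc_events,
                               n_samples=10_000, smear=0.05):
    """Sketch of sampling via a buffer of encoded Monte Carlo events.

    encoder/decoder: the two halves of a trained VAE (assumed interfaces).
    The buffer of latent codes acts as an empirical prior, replacing the
    standard-normal prior a plain VAE would sample from.
    """
    latents = encoder(mc_events)                  # (n_events, latent_dim)
    idx = np.random.randint(len(latents), size=n_samples)
    # Resample stored codes with a small Gaussian smear (assumed width)
    z = latents[idx] + smear * np.random.randn(n_samples, latents.shape[1])
    return decoder(z)                             # generated events
```

    Because generation reduces to a single decoder forward pass per event, sampling from the buffer is far cheaper than running the full Monte Carlo chain, consistent with the speedup reported above.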