100 research outputs found

    HyperVAE: A Minimum Description Length Variational Hyper-Encoding Network

    We propose a framework called HyperVAE for encoding distributions of distributions. When a target distribution is modeled by a VAE, its neural network parameters \theta are drawn from a distribution p(\theta), which is modeled by a hyper-level VAE. We propose a variational inference scheme using Gaussian mixture models to implicitly encode the parameters \theta into a low-dimensional Gaussian distribution. Given a target distribution, we predict the posterior distribution of the latent code, then use a matrix-network decoder to generate a posterior distribution q(\theta). HyperVAE can encode the parameters \theta in full, in contrast to common hyper-network practice, which generates only the scale and bias vectors as target-network parameters. HyperVAE therefore preserves much more information about the model for each task in the latent space. We discuss HyperVAE using the minimum description length (MDL) principle and show that it helps HyperVAE to generalize. We evaluate HyperVAE on density estimation, outlier detection and the discovery of novel design classes, demonstrating its efficacy.
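    The following is a minimal sketch, not the authors' code, of the central idea stated above: a hyper-decoder maps a low-dimensional latent code to the complete flattened parameter vector of a target network, which is then loaded weight by weight (in contrast to generating only scale and bias vectors). The layer sizes, class names and the use of a plain MLP in place of the paper's matrix-network decoder are illustrative assumptions.

    import torch
    import torch.nn as nn

    class HyperDecoder(nn.Module):
        """Maps a low-dimensional code z to a full flattened parameter vector theta."""
        def __init__(self, latent_dim, target_param_count):
            super().__init__()
            # A plain MLP stands in for the paper's matrix-network decoder.
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 256),
                nn.ReLU(),
                nn.Linear(256, target_param_count),
            )

        def forward(self, z):
            return self.net(z)  # flattened theta for the target VAE

    def load_theta(target_model, theta_flat):
        # Copy the generated flat vector into the target model, so every
        # weight (not only scales and biases) comes from the hyper-decoder.
        offset = 0
        for p in target_model.parameters():
            n = p.numel()
            p.data.copy_(theta_flat[offset:offset + n].view_as(p))
            offset += n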

    Recognising faces in unseen modes: a tensor based approach

    This paper addresses a limitation of current multilinear techniques (multilinear PCA, multilinear ICA) when applied to face recognition: handling faces under unseen illuminations and viewpoints. We propose a new recognition method that exploits the interaction of all the subspaces resulting from multilinear decomposition (for both multilinear PCA and ICA) to produce a new basis called multilinear-eigenmodes. This basis offers the flexibility to handle face images under unseen illuminations or viewpoints. Experiments on benchmark datasets yield superior performance in terms of both accuracy and computational cost.
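    As a minimal sketch of the multilinear decomposition underlying the method, the snippet below computes a higher-order SVD of a face tensor arranged as people x illuminations x viewpoints x pixels; the per-mode subspaces it returns are the ingredients whose interaction the paper combines into the multilinear-eigenmodes basis. The data layout, sizes and function name are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def hosvd(tensor):
        # One orthonormal subspace per mode: unfold along that mode and take
        # the left singular vectors.
        factors = []
        for mode in range(tensor.ndim):
            unfolded = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
            U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
            factors.append(U)
        # Core tensor: project the data onto every mode subspace.
        core = tensor
        for mode, U in enumerate(factors):
            core = np.moveaxis(
                np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        return core, factors

    # Hypothetical layout: 10 people, 5 illuminations, 3 viewpoints,
    # 32x32 images flattened to 1024 pixels.
    faces = np.random.rand(10, 5, 3, 1024)
    core, (U_people, U_illum, U_view, U_pixel) = hosvd(faces)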

    EMOTE: An Explainable architecture for Modelling the Other Through Empathy

    We can usually assume others have goals analogous to our own. This assumption can also, at times, be applied to multi-agent games - e.g. Agent 1's attraction to green pellets is analogous to Agent 2's attraction to red pellets. This "analogy" assumption is closely tied to the cognitive process known as empathy. Inspired by empathy, we design a simple and explainable architecture to model another agent's action-value function. This involves learning an "Imagination Network" that transforms the other agent's observed state into a human-interpretable "empathetic state" which, when presented to the learning agent, produces behaviours that mimic the other agent. Our approach is applicable to multi-agent scenarios consisting of a single learning agent and other (independent) agents acting according to fixed policies. The architecture is particularly beneficial for (but not limited to) algorithms using a composite value or reward function. We show that our method produces better performance in multi-agent games, where it robustly estimates the other agent's model under different environment configurations. Additionally, we show that the empathetic states are human-interpretable, and thus verifiable.
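    Below is a minimal sketch, assuming hypothetical observation shapes and a supervised mimicry signal, of the loop described above: an Imagination Network maps the other agent's observation to an empathetic state, and the learning agent's own Q-network evaluated on that state is trained to prefer the action the other agent actually took. Names and sizes are illustrative, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ImaginationNetwork(nn.Module):
        """Transforms the other agent's observation into an 'empathetic state'
        expressed in the learning agent's own frame of reference."""
        def __init__(self, obs_dim, hidden_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, obs_dim),
            )

        def forward(self, other_obs):
            return self.net(other_obs)

    def mimic_loss(q_net, imagination, other_obs, other_action):
        # Make the learning agent's own Q-values, computed on the empathetic
        # state, place most mass on the other agent's observed action.
        empathetic_state = imagination(other_obs)
        q_values = q_net(empathetic_state)              # (batch, num_actions)
        return F.cross_entropy(q_values, other_action)  # other_action: (batch,)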