3 research outputs found

    Fast Approximate Geodesics for Deep Generative Models

    The length of the geodesic between two data points along a Riemannian manifold, induced by a deep generative model, yields a principled measure of similarity. Current approaches are limited to low-dimensional latent spaces, due to the computational complexity of solving a non-convex optimisation problem. We propose instead finding shortest paths in a finite graph of samples from the aggregate approximate posterior; this problem can be solved exactly, at greatly reduced runtime, and without a notable loss in quality. Our approach is therefore applicable to high-dimensional problems, e.g., in the visual domain. We validate our approach empirically on a series of experiments using variational autoencoders applied to image data, including the Chair, FashionMNIST, and human movement data sets. Comment: 28th International Conference on Artificial Neural Networks, 2019
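    The discrete search is straightforward to prototype. The sketch below (Python; not the authors' code) assumes a trained VAE, where latents holds samples from the aggregate approximate posterior and decode is a hypothetical wrapper around the generator network. Each edge weight approximates Riemannian curve length by decoding a few points along the latent segment; Dijkstra's algorithm then solves the graph shortest-path problem exactly.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import dijkstra
    from sklearn.neighbors import kneighbors_graph

    def approximate_geodesic(latents, decode, src, dst, k=10, n_steps=8):
        """Shortest path between two latent samples in a k-NN graph.

        latents  : (N, d) array of samples from the aggregate posterior
        decode   : callable mapping (m, d) latent codes to observations
        src, dst : endpoint indices into `latents`
        """
        # Sparse k-nearest-neighbour connectivity graph in latent space.
        knn = kneighbors_graph(latents, k, mode="connectivity")
        rows, cols = knn.nonzero()

        # Weight each edge by a piecewise-linear estimate of the curve
        # length of the decoded segment: decode a few points along the
        # latent line and sum consecutive observation-space distances.
        weights = np.empty(len(rows))
        ts = np.linspace(0.0, 1.0, n_steps)[:, None]
        for i, (a, b) in enumerate(zip(rows, cols)):
            segment = (1.0 - ts) * latents[a] + ts * latents[b]
            decoded = decode(segment).reshape(n_steps, -1)
            weights[i] = np.linalg.norm(np.diff(decoded, axis=0), axis=1).sum()

        graph = csr_matrix((weights, (rows, cols)), shape=knn.shape)

        # Dijkstra solves the discrete problem exactly; backtrack the
        # predecessor array to recover the path (assumes connectivity).
        dist, pred = dijkstra(graph, directed=False, indices=src,
                              return_predecessors=True)
        path = [dst]
        while path[-1] != src:
            path.append(pred[path[-1]])
        return dist[dst], latents[np.array(path[::-1])]

    The number of decoded points per edge (n_steps) trades accuracy of the length estimate against runtime; the k in the k-NN graph controls how faithfully the graph covers the latent manifold.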

    Learning Sequence Neighbourhood Metrics

    Storing short descriptors of sequential data has several benefits. First, they typically require much less memory and thus make processing of large data sets much more efficient. Second, if the descriptors are formed as vectors, e.g. x ∈ ℝ^n, numerous algorithms tailored towards static data can be applied. Instead of applying static data algorithms to dynamic data, we propose to first learn a mapping from sequential data to static data. This can be done by combining recurrent neural networks (RNNs), a pooling operation, and any differentiable objective function for static data. In this work, we show how neighbourhood components analysis (NCA) (Goldberger et al. 2004) can be used to learn meaningful representations which lead to excellent classification results and visualizations on a speech dataset.
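    A minimal sketch of the recipe in PyTorch, assuming a labelled batch of equal-length sequences; the GRU, mean pooling, and output size are illustrative choices here, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class SequenceEmbedder(nn.Module):
        """RNN encoder + pooling -> fixed-length descriptor.
        A GRU stands in for the recurrent cell; mean pooling over
        time is one simple choice of pooling operation."""

        def __init__(self, n_features, n_hidden=64, n_out=2):
            super().__init__()
            self.rnn = nn.GRU(n_features, n_hidden, batch_first=True)
            self.proj = nn.Linear(n_hidden, n_out)

        def forward(self, x):                 # x: (batch, time, n_features)
            h, _ = self.rnn(x)                # hidden states at every step
            return self.proj(h.mean(dim=1))   # pool over time -> static vector

    def nca_loss(z, y):
        """Soft-NCA objective (log variant): maximise the probability
        that each point's stochastic nearest neighbour shares its label."""
        d = torch.cdist(z, z).pow(2)          # pairwise squared distances
        mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
        p = torch.softmax(-d.masked_fill(mask, float("inf")), dim=1)
        same = (y.unsqueeze(0) == y.unsqueeze(1)).float()
        return -(p * same).sum(dim=1).clamp_min(1e-12).log().mean()

    Training then amounts to minimising nca_loss(model(x), y) with any gradient-based optimiser; because the whole pipeline is differentiable, the NCA gradient flows back through the pooling into the RNN. The pooled descriptors can afterwards be fed to a k-nearest-neighbour classifier, or plotted directly for visualization when n_out=2.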