Diffusion Variational Autoencoders
A standard Variational Autoencoder, with a Euclidean latent space, is
structurally incapable of capturing topological properties of certain datasets.
To remove topological obstructions, we introduce Diffusion Variational
Autoencoders with arbitrary manifolds as a latent space. A Diffusion
Variational Autoencoder uses transition kernels of Brownian motion on the
manifold. In particular, it uses properties of the Brownian motion to implement
the reparametrization trick and fast approximations to the KL divergence. We
show that the Diffusion Variational Autoencoder is capable of capturing
topological properties of synthetic datasets. Additionally, we train on MNIST
with spheres, tori, projective spaces, SO(3), and a torus embedded in R^3 as
latent spaces. Although a
natural dataset like MNIST does not have latent variables with a clear-cut
topological structure, training it on a manifold can still highlight
topological and geometrical properties.

Comment: 10 pages, 8 figures. Added an appendix with a derivation of the
asymptotic expansion of the KL divergence for the heat kernel on arbitrary
Riemannian manifolds, and an appendix with new experiments on binarized
MNIST. Added a previously missing factor in the asymptotic expansion of the
heat kernel and corrected a coefficient in the asymptotic expansion of the
KL divergence; further minor edits.
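The reparametrization trick via Brownian motion mentioned in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical helper, not the paper's implementation: it approximates a sample from the Brownian-motion transition kernel on the unit sphere S^2 by a tangent-space random walk, where each step adds a small Gaussian increment in the tangent plane and then retracts back onto the sphere. Because the output is a deterministic function of the starting point and the noise, gradients with respect to the starting point could in principle flow through it.

```python
import numpy as np

def brownian_motion_on_sphere(mu, t, n_steps=100, rng=None):
    """Approximate sampling from the Brownian-motion transition kernel
    on the unit sphere S^2, started at `mu`, with diffusion time `t`.

    Hypothetical illustration via a tangent-space random walk; the
    paper's exact scheme may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(mu, dtype=float)
    x = x / np.linalg.norm(x)          # ensure the start lies on the sphere
    dt = t / n_steps
    for _ in range(n_steps):
        eps = rng.normal(scale=np.sqrt(dt), size=3)
        eps -= eps.dot(x) * x          # project the increment onto the tangent plane at x
        x = x + eps
        x /= np.linalg.norm(x)         # retract back onto the sphere
    return x

# For small diffusion time t, samples concentrate near the starting point:
sample = brownian_motion_on_sphere([0.0, 0.0, 1.0], t=0.01,
                                   rng=np.random.default_rng(0))
```

For t → 0 the kernel concentrates at `mu`; for large t it approaches the uniform distribution on the sphere, which is consistent with the small-time asymptotic expansions of the heat kernel discussed in the appendix.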