Adversarial Autoencoders with Constant-Curvature Latent Manifolds
Constant-curvature Riemannian manifolds (CCMs) have been shown to be ideal
embedding spaces in many application domains, as their non-Euclidean geometry
can naturally account for some relevant properties of data, like hierarchy and
circularity. In this work, we introduce the CCM adversarial autoencoder
(CCM-AAE), a probabilistic generative model trained to represent a data
distribution on a CCM. Our method works by matching the aggregated posterior of
the CCM-AAE with a probability distribution defined on a CCM, so that the
encoder implicitly learns to represent data on the CCM to fool the
discriminator network. The geometric constraint is also explicitly imposed by
jointly training the CCM-AAE to maximise the membership degree of the
embeddings to the CCM. While a few works in recent literature make use of
either hyperspherical or hyperbolic manifolds for different learning tasks,
ours is the first unified framework to seamlessly deal with CCMs of different
curvatures. We show the effectiveness of our model on three different datasets
characterised by non-trivial geometry: semi-supervised classification on MNIST,
link prediction on two popular citation datasets, and graph-based molecule
generation using the QM9 chemical database. Results show that our method
improves upon other autoencoders based on Euclidean and non-Euclidean
geometries on all tasks considered.
Comment: Submitted to Applied Soft Computing
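For concreteness, here is a minimal sketch of the adversarial-autoencoder idea with a hyperspherical latent prior (a CCM of constant positive curvature), in a PyTorch-style setup. The network sizes, prior sampler, and membership penalty are illustrative assumptions, not the paper's reference implementation.

```python
# Sketch: adversarial autoencoder with a hyperspherical (constant positive
# curvature) latent prior. Illustrative only; architectures and weights are
# placeholders, not the CCM-AAE reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 3  # embeddings are (softly) constrained to the unit 2-sphere

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(), nn.Linear(256, 784))
discriminator = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def sample_sphere_prior(n):
    # Uniform samples on the unit sphere: normalised Gaussian draws.
    g = torch.randn(n, LATENT_DIM)
    return g / g.norm(dim=1, keepdim=True)

def membership_penalty(z):
    # Explicit geometric constraint: push embeddings onto the manifold (||z|| = 1).
    return ((z.norm(dim=1) - 1.0) ** 2).mean()

def train_step(x):
    n = x.size(0)

    # 1) Reconstruction plus the membership regulariser.
    z = encoder(x)
    loss_ae = F.mse_loss(decoder(z), x) + 0.1 * membership_penalty(z)
    opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()

    # 2) Discriminator: prior samples on the sphere are "real", encodings are "fake".
    z_real = sample_sphere_prior(n)
    z_fake = encoder(x).detach()
    loss_d = bce(discriminator(z_real), torch.ones(n, 1)) + \
             bce(discriminator(z_fake), torch.zeros(n, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 3) Adversarial step: the encoder tries to make its aggregated posterior
    #    indistinguishable from the prior defined on the CCM.
    loss_g = bce(discriminator(encoder(x)), torch.ones(n, 1))
    opt_ae.zero_grad(); loss_g.backward(); opt_ae.step()

# Example: one step on a random batch standing in for flattened MNIST digits.
train_step(torch.rand(32, 784))
```

The discriminator step carries out the posterior matching described above, while the membership penalty plays the role of the explicit geometric constraint on the embeddings.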
Change Detection in Graph Streams by Learning Graph Embeddings on Constant-Curvature Manifolds
The space of graphs is often characterised by a non-trivial geometry, which
complicates learning and inference in practical applications. A common approach
is to use embedding techniques to represent graphs as points in a conventional
Euclidean space, but non-Euclidean spaces have often been shown to be better
suited for embedding graphs. Among these, constant-curvature Riemannian
manifolds (CCMs) offer embedding spaces suitable for studying the statistical
properties of a graph distribution, as they provide ways to easily compute
metric geodesic distances. In this paper, we focus on the problem of detecting
changes in stationarity in a stream of attributed graphs. To this end, we
introduce a novel change detection framework based on neural networks and CCMs
that takes into account the non-Euclidean nature of graphs. Our contribution in
this work is twofold. First, via a novel approach based on adversarial
learning, we compute graph embeddings by training an autoencoder to represent
graphs on CCMs. Second, we introduce two novel change detection tests operating
on CCMs. We perform experiments on synthetic data, as well as two real-world
application scenarios: the detection of epileptic seizures using functional
connectivity brain networks, and the detection of hostility between two
subjects, using human skeletal graphs. Results show that the proposed methods
are able to detect even small changes in a graph-generating process,
consistently outperforming approaches based on Euclidean embeddings.
Comment: 14 pages, 8 figures
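To illustrate what a change detection test operating on a CCM can look like, the sketch below uses a hyperspherical manifold and monitors geodesic distances from the mean of a reference window with a CUSUM-style statistic. The statistic, thresholds, and sampling are hypothetical stand-ins, not the specific tests introduced in the paper.

```python
# Sketch: CUSUM-style change detection on a hyperspherical CCM. Embeddings are
# unit vectors; geodesic distances to the reference mean are monitored over time.
# Illustrative stand-in, not the tests proposed in the paper.
import numpy as np

def geodesic_dist(x, y):
    # Great-circle (geodesic) distance between unit vectors on the sphere.
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def spherical_mean(points):
    # Extrinsic average projected back onto the sphere (a crude intrinsic mean).
    m = points.mean(axis=0)
    return m / np.linalg.norm(m)

def cusum_detect(stream, mean_ref, mu0, threshold=5.0, drift=0.05):
    # Accumulate positive deviations of the geodesic distance from its nominal level.
    s = 0.0
    for t, z in enumerate(stream):
        s = max(0.0, s + geodesic_dist(z, mean_ref) - mu0 - drift)
        if s > threshold:
            return t  # change detected at time t
    return None

rng = np.random.default_rng(0)

def sample_near(center, scale, n):
    # Noisy samples around a point, renormalised onto the unit sphere.
    pts = center + scale * rng.normal(size=(n, 3))
    return pts / np.linalg.norm(pts, axis=1, keepdims=True)

# Toy stream of embeddings: nominal regime first, then a shifted regime.
reference = sample_near(np.array([0.0, 0.0, 1.0]), 0.1, 200)
stream = np.vstack([sample_near(np.array([0.0, 0.0, 1.0]), 0.1, 100),
                    sample_near(np.array([0.0, 1.0, 0.0]), 0.1, 100)])

mean_ref = spherical_mean(reference)
mu0 = np.mean([geodesic_dist(z, mean_ref) for z in reference])
print("change detected at index:", cusum_detect(stream, mean_ref, mu0))
```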
The Riemannian Geometry of Deep Generative Models
Deep generative models learn a mapping from a low dimensional latent space to
a high-dimensional data space. Under certain regularity conditions, these
models parameterize nonlinear manifolds in the data space. In this paper, we
investigate the Riemannian geometry of these generated manifolds. First, we
develop efficient algorithms for computing geodesic curves, which provide an
intrinsic notion of distance between points on the manifold. Second, we develop
an algorithm for parallel translation of a tangent vector along a path on the
manifold. We show how parallel translation can be used to generate analogies,
i.e., to transport a change in one data point into a semantically similar
change of another data point. Our experiments on real image data show that the
manifolds learned by deep generative models, while nonlinear, are surprisingly
close to zero curvature. The practical implication is that linear paths in the
latent space closely approximate geodesics on the generated manifold. However,
further investigation into this phenomenon is warranted to identify whether there
are other architectures or datasets where curvature plays a more prominent
role. We believe that exploring the Riemannian geometry of deep generative
models, using the tools developed in this paper, will be an important step in
understanding the high-dimensional, nonlinear spaces these models learn.
Comment: 9 pages
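A minimal sketch of the geodesic computation, under the usual curve-energy formulation: a latent curve with fixed endpoints is discretised and optimised so that the decoded curve has minimal energy in data space. The decoder below is an untrained stand-in MLP; the paper develops more refined algorithms, but the objective conveys the intrinsic notion of distance described above.

```python
# Sketch: approximate geodesic between two latent points on the manifold induced
# by a decoder, obtained by minimising the discrete curve energy in data space.
# The decoder is an untrained placeholder, not a model from the paper.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 784))

def geodesic_path(z_start, z_end, n_points=20, steps=500, lr=1e-2):
    # Interior points of the curve are free variables; the endpoints stay fixed.
    ts = torch.linspace(0, 1, n_points)[1:-1].unsqueeze(1)
    interior = ((1 - ts) * z_start + ts * z_end).clone().requires_grad_(True)
    opt = torch.optim.Adam([interior], lr=lr)
    for _ in range(steps):
        curve = torch.cat([z_start.unsqueeze(0), interior, z_end.unsqueeze(0)], dim=0)
        x = decoder(curve)                      # map the latent curve to data space
        energy = ((x[1:] - x[:-1]) ** 2).sum()  # discrete curve energy
        opt.zero_grad(); energy.backward(); opt.step()
    return torch.cat([z_start.unsqueeze(0), interior.detach(), z_end.unsqueeze(0)], dim=0)

path = geodesic_path(torch.tensor([-2.0, 0.0]), torch.tensor([2.0, 1.0]))
with torch.no_grad():
    decoded = decoder(path)
    length = (decoded[1:] - decoded[:-1]).norm(dim=1).sum()
print("approximate geodesic length in data space:", length.item())
```

Comparing the length of such an optimised curve with that of the decoded straight line is a simple way to probe how close to zero the curvature is, as reported above.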
Fast Approximate Geodesics for Deep Generative Models
The length of the geodesic between two data points along a Riemannian
manifold, induced by a deep generative model, yields a principled measure of
similarity. Current approaches are limited to low-dimensional latent spaces,
due to the computational complexity of solving a non-convex optimisation
problem. We propose finding shortest paths in a finite graph of samples from
the aggregate approximate posterior, which can be solved exactly, at greatly
reduced runtime, and without a notable loss in quality. Our approach is
therefore applicable to high-dimensional problems, e.g., in the
visual domain. We validate our approach empirically on a series of experiments
using variational autoencoders applied to image data, including the Chair,
FashionMNIST, and human movement data sets.
Comment: 28th International Conference on Artificial Neural Networks, 2019
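A minimal sketch of the graph-based approximation follows, assuming NumPy, SciPy, and PyTorch: latent samples are decoded, each is connected to its k nearest neighbours with edges weighted by data-space distance, and the geodesic is approximated by a shortest path. The decoder and the latent samples are placeholders for a trained VAE's decoder and draws from its aggregate approximate posterior.

```python
# Sketch: approximate geodesics as shortest paths in a k-NN graph built over
# decoded latent samples. Decoder and samples are placeholders; with a trained
# VAE, samples would come from the aggregate approximate posterior.
import numpy as np
import torch
import torch.nn as nn
from scipy.spatial.distance import cdist
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

decoder = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 784))

def build_sample_graph(z_samples, k=10):
    # Decode the samples and connect each node to its k nearest neighbours,
    # with edge weights given by Euclidean distance in data space.
    with torch.no_grad():
        x = decoder(z_samples).numpy()
    dists = cdist(x, x)
    n = len(x)
    graph = lil_matrix((n, n))
    for i in range(n):
        for j in np.argsort(dists[i])[1:k + 1]:
            graph[i, j] = dists[i, j]
    return graph.tocsr()

z = torch.randn(500, 2)          # stand-in for aggregate-posterior samples
graph = build_sample_graph(z)

# Approximate geodesic distance from sample 0 to sample 1 via Dijkstra.
dist = dijkstra(graph, directed=False, indices=0)
print("approximate geodesic length:", dist[1])
```

Because the shortest-path problem is solved exactly, the cost is dominated by building the neighbourhood graph rather than by a non-convex optimisation over the curve.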
- …