Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems in computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to survey different examples of
geometric deep learning problems and to present available solutions, key
difficulties, applications, and future research directions in this nascent
field.
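To make the idea of generalizing convolution to graphs concrete, here is a minimal sketch of one GCN-style propagation step (symmetric-normalized neighborhood averaging followed by a linear map). This is a generic, standard construction for illustration, not a specific method from the survey:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step: normalize the adjacency with self-loops,
    average each node's neighborhood, then apply a linear map and ReLU.
    A common GCN-style rule, shown only to illustrate message passing
    on non-Euclidean (graph) domains."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                   # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt    # D^{-1/2} (A + I) D^{-1/2}
    return np.maximum(norm @ features @ weights, 0.0)  # ReLU

# Toy path graph on 3 nodes, 2 input features, 2 output features
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.eye(3)[:, :2]          # trivial node features
w = np.ones((2, 2))
out = gcn_layer(adj, x, w)
print(out.shape)              # (3, 2)
```

The normalization keeps the propagation operator's spectrum bounded, which is what lets such layers be stacked like ordinary convolutions.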
Bringing robotics taxonomies to continuous domains via GPLVM on hyperbolic manifolds
Robotic taxonomies have appeared as high-level hierarchical abstractions that
classify how humans move and interact with their environment. They have proven
useful to analyse grasps, manipulation skills, and whole-body support poses.
Despite the efforts devoted to designing their hierarchy and underlying
categories, their use in application fields remains scarce. This may be
attributed to the lack of computational models that fill the gap between the
discrete hierarchical structure of the taxonomy and the high-dimensional
heterogeneous data associated with its categories. To overcome this problem, we
propose to model taxonomy data via hyperbolic embeddings that capture the
associated hierarchical structure. To do so, we formulate a Gaussian process
hyperbolic latent variable model and enforce the taxonomy structure through
graph-based priors on the latent space and distance-preserving back
constraints. We test our model on the whole-body support pose taxonomy to learn
hyperbolic embeddings that comply with the original graph structure. We show
that our model properly encodes unseen poses from existing or new taxonomy
categories, that it can be used to generate trajectories between the
embeddings, and that it outperforms its Euclidean counterparts.
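The reason hyperbolic space suits hierarchical (tree-like) taxonomy data is that distances grow exponentially toward the boundary, leaving room for exponentially many children. A minimal sketch of the standard geodesic distance in the Poincaré ball illustrates this; the paper itself builds a Gaussian process latent variable model on such a manifold, which is not reproduced here:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball:
    d(u, v) = arccosh(1 + 2*|u - v|^2 / ((1 - |u|^2) * (1 - |v|^2))).
    Standard hyperbolic-geometry formula, shown for illustration only."""
    uu = np.dot(u, u)
    vv = np.dot(v, v)
    duv = np.dot(u - v, u - v)
    arg = 1.0 + 2.0 * duv / max((1.0 - uu) * (1.0 - vv), eps)
    return np.arccosh(arg)

# Points near the boundary are far from the origin, despite small
# Euclidean norm -- this is the "room" that hierarchies exploit.
origin = np.zeros(2)
near = np.array([0.1, 0.0])
far = np.array([0.95, 0.0])
print(poincare_distance(origin, near))   # ~0.2007 (= 2*artanh(0.1))
print(poincare_distance(origin, far))    # ~3.664 (= 2*artanh(0.95))
```

In an embedding learned under such a metric, taxonomy roots naturally sit near the origin and leaves near the boundary.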
A Heat Diffusion Perspective on Geodesic Preserving Dimensionality Reduction
Diffusion-based manifold learning methods have proven useful in
representation learning and dimensionality reduction of modern high
dimensional, high-throughput, noisy datasets. Such datasets are especially
prevalent in fields like biology and physics. While it is thought that these
methods preserve the underlying manifold structure of data by learning a proxy
for geodesic distances, no specific theoretical links have been established.
Here, we establish such a link via results in Riemannian geometry explicitly
we establish such a link via results in Riemannian geometry explicitly
connecting heat diffusion to manifold distances. In this process, we also
formulate a more general heat-kernel-based manifold embedding method that we
call heat geodesic embeddings. This novel perspective makes clearer the choices
available in manifold learning and denoising. Results show that our method
outperforms the existing state of the art in preserving ground-truth manifold
distances and cluster structure in toy datasets. We also showcase
our method on single-cell RNA-sequencing datasets with both continuum and
cluster structure, where our method enables interpolation of withheld
timepoints of data. Finally, we show that parameters of our more general method
can be configured to give results similar to PHATE (a state-of-the-art
diffusion-based manifold learning method) as well as SNE (an
attraction/repulsion neighborhood-based method that forms the basis of t-SNE).
Comment: 31 pages, 13 figures, 10 tables
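The core quantity linking heat diffusion to manifold distances can be sketched with the classical heat-kernel distance on a graph, computed from the Laplacian eigendecomposition. This is a minimal illustration of the heat-diffusion idea only; the paper's heat geodesic embedding involves additional steps not shown here:

```python
import numpy as np

def heat_kernel_distances(adj, t=1.0):
    """Pairwise heat-kernel distances on a graph:
    d_t(i, j)^2 = sum_k exp(-2*t*lam_k) * (phi_k[i] - phi_k[j])^2,
    where (lam_k, phi_k) are eigenpairs of the combinatorial Laplacian.
    Classical construction, shown for illustration only."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                       # combinatorial Laplacian L = D - A
    lam, phi = np.linalg.eigh(lap)        # symmetric, so eigh applies
    weights = np.exp(-2.0 * t * lam)      # heat decay per eigenmode
    emb = phi * np.sqrt(weights)          # embed nodes; distances follow
    sq = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
    return np.sqrt(np.maximum(sq, 0.0))

# Path graph 0-1-2-3: heat-kernel distance grows with hop distance,
# acting as a smoothed proxy for the geodesic (shortest-path) distance.
adj = np.zeros((4, 4))
for i in range(3):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
d = heat_kernel_distances(adj, t=0.5)
print(d[0, 1] < d[0, 2] < d[0, 3])       # True
```

The diffusion time t acts as a scale parameter: small t emphasizes local neighborhoods, while large t smooths over larger regions of the manifold.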