Coordinate Independent Convolutional Networks -- Isometry and Gauge Equivariant Convolutions on Riemannian Manifolds
Motivated by the vast success of deep convolutional networks, there is a
great interest in generalizing convolutions to non-Euclidean manifolds. A major
complication in comparison to flat spaces is that it is unclear in which
alignment a convolution kernel should be applied on a manifold. The underlying
reason for this ambiguity is that general manifolds do not come with a
canonical choice of reference frames (gauge). Kernels and features therefore
have to be expressed relative to arbitrary coordinates. We argue that the
particular choice of coordinatization should not affect a network's inference
-- it should be coordinate independent. A simultaneous demand for coordinate
independence and weight sharing is shown to result in a requirement on the
network to be equivariant under local gauge transformations (changes of local
reference frames). The ambiguity of reference frames thereby depends on the
G-structure of the manifold, such that the necessary level of gauge
equivariance is prescribed by the corresponding structure group G. Coordinate
independent convolutions are proven to be equivariant w.r.t. those isometries
that are symmetries of the G-structure. The resulting theory is formulated in a
coordinate free fashion in terms of fiber bundles. To exemplify the design of
coordinate independent convolutions, we implement a convolutional network on
the Möbius strip. The generality of our differential geometric formulation of
convolutional networks is demonstrated by an extensive literature review which
explains a large number of Euclidean CNNs, spherical CNNs and CNNs on general
surfaces as specific instances of coordinate independent convolutions.
Comment: The implementation of orientation independent Möbius convolutions is publicly available at https://github.com/mauriceweiler/MobiusCNN
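The orientation ambiguity described in the abstract can be made concrete in one dimension: if the structure group consists of reflections (as on the non-orientable Möbius strip), a convolution becomes coordinate independent by applying the kernel in both frame orientations and pooling over them. A minimal numpy sketch, not the paper's implementation; all names are illustrative:

```python
import numpy as np

def reflect_invariant_conv(signal, kernel):
    """Convolve with the kernel in both frame orientations and pool,
    so the response does not depend on the chosen local orientation."""
    resp_a = np.convolve(signal, kernel, mode="same")
    resp_b = np.convolve(signal, kernel[::-1], mode="same")
    return np.maximum(resp_a, resp_b)  # orientation pooling

rng = np.random.default_rng(0)
x = rng.normal(size=32)
k = np.array([1.0, 0.0, -1.0])  # an odd (orientation-sensitive) kernel

y = reflect_invariant_conv(x, k)
y_flipped = reflect_invariant_conv(x[::-1], k)
# A gauge (orientation) flip of the input only flips the output field:
assert np.allclose(y[::-1], y_flipped)
```

With a plain convolution and the odd kernel above, flipping the input would also flip the sign of the response; the orientation pooling removes exactly that dependence on the arbitrary choice of frame.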
On the Sample Complexity of Predictive Sparse Coding
The goal of predictive sparse coding is to learn a representation of examples
as sparse linear combinations of elements from a dictionary, such that a
learned hypothesis linear in the new representation performs well on a
predictive task. Predictive sparse coding algorithms have recently demonstrated
impressive performance on a variety of supervised tasks, but their
generalization properties have not been studied. We establish the first
generalization error bounds for predictive sparse coding, covering two
settings: 1) the overcomplete setting, where the number of features k exceeds
the original dimensionality d; and 2) the high or infinite-dimensional setting,
where only dimension-free bounds are useful. Both learning bounds intimately
depend on stability properties of the learned sparse encoder, as measured on
the training sample. Consequently, we first present a fundamental stability
result for the LASSO, a result characterizing the stability of the sparse codes
with respect to perturbations to the dictionary. In the overcomplete setting,
we present an estimation error bound that decays as \tilde{O}(\sqrt{dk/m}) with
respect to d and k. In the high or infinite-dimensional setting, we show a
dimension-free bound that is \tilde{O}(\sqrt{k^2 s/m}) with respect to k and
s, where s is an upper bound on the number of non-zeros in the sparse code for
any training data point.
Comment: The Sparse Coding Stability Theorem from version 1 has been relaxed considerably using a new notion of coding margin. The old Sparse Coding Stability Theorem remains in the new version, now as Theorem 2. The presentation of all proofs has been simplified and improved considerably, the paper has been reorganized, and an empirical analysis shows that the new coding margin is non-trivial on a real dataset.
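The encoder whose stability these bounds depend on is the LASSO: a signal x is mapped to argmin_a 0.5*||x - Da||^2 + lambda*||a||_1 for a dictionary D. A minimal numpy sketch of computing such a sparse code with ISTA (iterative soft thresholding); the dictionary, parameters, and names are illustrative, not the paper's setup:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(D, x, lam, n_iter=500):
    """Sparse code of x under dictionary D:
    argmin_a 0.5*||x - D a||^2 + lam*||a||_1, solved by ISTA."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)         # gradient of the quadratic part
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(20, 50))            # overcomplete: k = 50 > d = 20
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
x = D[:, :3] @ np.array([1.0, -2.0, 1.5])  # signal built from 3 atoms
a = lasso_ista(D, x, lam=0.1)
print(np.count_nonzero(np.abs(a) > 1e-6))  # only a few atoms stay active
```

The soft-thresholding step zeroes small coefficients at every iteration, which is what makes the resulting code sparse; the paper's stability question is how this code moves when the dictionary D is perturbed.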
Calculating Sparse and Dense Correspondences for Near-Isometric Shapes
Comparing and analysing digital models are basic techniques of geometric shape processing. These techniques have a variety of applications, such as extracting the domain knowledge contained in the growing number of digital models to simplify shape modelling. Another example application is the analysis of real-world objects, which itself has a variety of applications, such as medical examinations, medical and agricultural research, and infrastructure maintenance. As methods to digitalize physical objects mature, any advances in the analysis of digital shapes lead to progress in the analysis of real-world objects. Global shape properties, like volume and surface area, are simple to compare but contain only very limited information. Much more information is contained in local shape differences, such as where and how a plant grew. Sadly, the computation of local shape differences is hard, as it requires knowledge of corresponding point pairs, i.e. points on both shapes that correspond to each other. The following article thesis (cumulative dissertation) discusses several recent publications on the computation of corresponding points:
- Geodesic distances between points, i.e. distances along the surface, are fundamental for several shape processing tasks as well as several shape matching techniques. Chapter 3 introduces and analyses fast and accurate bounds on geodesic distances.
- When building a shape space on a set of shapes, misaligned correspondences lead to points moving along the surfaces and finally to a larger shape space. Chapter 4 shows that this also works the other way around: good correspondences are obtained by optimizing them to generate a compact shape space.
- Representing correspondences with a "functional map" has a variety of advantages. Chapter 5 shows that representing the correspondence map as an alignment of Green's functions of the Laplace operator has similar advantages, but is much less dependent on the number of eigenvectors used for the computations.
- Quadratic assignment problems were recently shown to reliably yield sparse correspondences. Chapter 6 compares state-of-the-art convex relaxations from graphics and vision with methods from discrete optimization on typical quadratic assignment problems emerging in shape matching.
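As a concrete illustration of the kind of bound Chapter 3 concerns (though not the thesis' own method): path lengths along the edges of a triangle mesh always upper-bound the true surface geodesic distances, and Dijkstra's algorithm computes them cheaply. A self-contained stdlib sketch with illustrative names:

```python
import heapq

def edge_path_upper_bound(n_vertices, edges, source):
    """Dijkstra over mesh edges: shortest edge-path lengths from `source`
    upper-bound the true geodesic (on-surface) distances."""
    adj = {v: [] for v in range(n_vertices)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [float("inf")] * n_vertices
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# A unit square split into two triangles; the diagonal has length sqrt(2).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 2 ** 0.5)]
print(edge_path_upper_bound(4, edges, 0))  # vertex 2: sqrt(2) ≈ 1.414
```

Such edge-path distances are exact only when the geodesic happens to run along mesh edges; in general they overestimate, which is why tighter and still-fast bounds are of interest.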
Learning Neural Graph Representations in Non-Euclidean Geometries
The success of Deep Learning methods is heavily dependent on the choice of the data representation. For that reason, much of the actual effort goes into Representation Learning, which seeks to design preprocessing pipelines and data transformations that can support effective learning algorithms. The aim of Representation Learning is to facilitate the task of extracting useful information for classifiers and other predictor models. In this regard, graphs arise as a convenient data structure that serves as an intermediary representation in a wide range of problems. The predominant approach to work with graphs has been to embed them in a Euclidean space, due to the power and simplicity of this geometry. Nevertheless, data in many domains exhibit non-Euclidean features, making embeddings into Riemannian manifolds with a richer structure necessary. The choice of the metric space in which to embed the data imposes a geometric inductive bias, with a direct impact on the performance of the models.
This thesis is about learning neural graph representations in non-Euclidean geometries and showcasing their applicability in different downstream tasks. We introduce a toolkit formed by different graph metrics with the goal of characterizing the topology of the data. In that way, we can choose a suitable target embedding space aligned to the shape of the dataset. By virtue of the geometric inductive bias provided by the structure of the non-Euclidean manifolds, neural models can achieve higher performances with a reduced parameter footprint.
As a first step, we study graphs with hierarchical structures. We develop different techniques to derive hierarchical graphs from large label inventories. Noticing the capacity of hyperbolic spaces to represent tree-like arrangements, we incorporate this information into an NLP model through hyperbolic graph embeddings and showcase the higher performance that they enable.
Second, we tackle the question of how to learn hierarchical representations suited for different downstream tasks. We introduce a model that jointly learns task-specific graph embeddings from a label inventory and performs classification in hyperbolic space. The model achieves state-of-the-art results on very fine-grained labels, with a remarkable reduction of the parameter size.
Next, we move to matrix manifolds to work on graphs with diverse structures and properties. We propose a general framework to implement the mathematical tools required to learn graph embeddings on symmetric spaces. These spaces are of particular interest given that they have a compound geometry that simultaneously contains Euclidean as well as hyperbolic subspaces, allowing them to automatically adapt to dissimilar features in the graph. We demonstrate a concrete implementation of the framework on Siegel spaces, showcasing their versatility on different tasks.
Finally, we focus on multi-relational graphs. We devise the means to translate Euclidean and hyperbolic multi-relational graph embedding models into the space of symmetric positive definite (SPD) matrices. To do so we develop gyrocalculus in this geometry and integrate it with the aforementioned framework.
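The capacity of hyperbolic space for tree-like arrangements comes from its exponentially growing volume, which is already visible in the standard distance formula of the Poincaré ball model, d(u, v) = arcosh(1 + 2||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2))). A small numpy sketch of this formula (illustrative, not the thesis' implementation):

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance in the Poincaré ball model of hyperbolic space."""
    su, sv = np.dot(u, u), np.dot(v, v)
    diff = np.dot(u - v, u - v)
    return np.arccosh(1.0 + 2.0 * diff / ((1.0 - su) * (1.0 - sv)))

b = np.array([0.9, 0.0])
c = np.array([0.0, 0.9])
# Points near the boundary are far apart even when Euclidean-close:
print(poincare_distance(b, c), np.linalg.norm(b - c))
```

Because distances blow up near the boundary, a ball of fixed hyperbolic radius holds exponentially many well-separated points, matching the exponential growth of nodes in a tree; this is the geometric inductive bias the thesis exploits.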
An introduction to quantum gravity
After an overview of the physical motivations for studying quantum gravity,
we reprint THE FORMAL STRUCTURE OF QUANTUM GRAVITY, i.e. the 1978 Cargese
Lectures by Professor B.S. DeWitt, with kind permission of Springer. The reader
is therefore introduced, in a pedagogical way, to the functional integral
quantization of gravitation and Yang-Mills theory. It is hoped that such a
paper will remain useful for all lecturers or Ph.D. students who face the task
of introducing (resp. learning) some basic concepts in quantum gravity in a
relatively short time. In the second part, we outline selected topics such as
the braneworld picture with the same covariant formalism of the first part, and
spectral asymptotics of Euclidean quantum gravity with diffeomorphism-invariant
boundary conditions. The latter might have implications for singularity
avoidance in quantum cosmology.
Comment: 68 pages, LaTeX file. Sections 2 to 17 are published thanks to kind permission of Springer.
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision.
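Since the monograph centers on dictionaries that are learned and adapted to data, a compact illustration is the classical alternating scheme (here in the spirit of the method of optimal directions, MOD): a sparse-coding step alternates with a least-squares dictionary update. A numpy sketch with illustrative names, not the monograph's algorithm:

```python
import numpy as np

def sparse_codes(D, X, s):
    """Crude one-shot pursuit: for each signal, keep the s atoms with the
    largest correlation and least-squares fit the coefficients on them."""
    A = np.zeros((D.shape[1], X.shape[1]))
    for j in range(X.shape[1]):
        supp = np.argsort(-np.abs(D.T @ X[:, j]))[:s]
        A[supp, j] = np.linalg.lstsq(D[:, supp], X[:, j], rcond=None)[0]
    return A

def learn_dictionary(X, k, s, n_iter=30, seed=0):
    """Alternating minimization: sparse-code the data, then refit the
    dictionary by least squares and renormalize its atoms."""
    D = np.random.default_rng(seed).normal(size=(X.shape[0], k))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        A = sparse_codes(D, X, s)
        D = X @ np.linalg.pinv(A)                  # dictionary update
        D /= np.linalg.norm(D, axis=0) + 1e-12     # unit-norm atoms
    return D

# Synthetic 2-sparse data from a hidden orthonormal dictionary.
rng = np.random.default_rng(3)
true_D = np.linalg.qr(rng.normal(size=(8, 8)))[0]
codes = np.zeros((8, 200))
for j in range(200):
    codes[rng.choice(8, size=2, replace=False), j] = rng.normal(size=2)
X = true_D @ codes

D = learn_dictionary(X, k=8, s=2)
A = sparse_codes(D, X, s=2)
print(np.linalg.norm(X - D @ A) / np.linalg.norm(X))  # relative error
```

The greedy pursuit above stands in for a proper sparse solver (LASSO or OMP in practice); the point of the sketch is the alternation that adapts the dictionary to the data, which is what yields the compact representations the monograph surveys.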