Deep SimNets
We present a deep layered architecture that generalizes convolutional neural
networks (ConvNets). The architecture, called SimNets, is driven by two
operators: (i) a similarity function that generalizes inner-product, and (ii) a
log-mean-exp function called MEX that generalizes maximum and average. The two
operators applied in succession give rise to a standard neuron but in "feature
space". The feature spaces realized by SimNets depend on the choice of the
similarity operator. The simplest setting, which corresponds to a convolution,
realizes the feature space of the Exponential kernel, while other settings
realize feature spaces of more powerful kernels (Generalized Gaussian, which
includes as special cases RBF and Laplacian), or even dynamically learned
feature spaces (Generalized Multiple Kernel Learning). As a result, a SimNet
operates at a higher level of abstraction than a traditional ConvNet. We argue
that enhanced expressiveness is important when the networks are small due to
run-time constraints (such as those imposed by mobile applications). Empirical
evaluation validates the superior expressiveness of SimNets, showing a
significant gain in accuracy over ConvNets when computational resources at
run-time are limited. We also show that in large-scale settings, where
computational complexity is less of a concern, the additional capacity of
SimNets can be controlled with proper regularization, yielding accuracies
comparable to state-of-the-art ConvNets.
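As a quick illustration of the second operator, here is a minimal NumPy sketch of a log-mean-exp in the spirit of MEX; the single-temperature parameterization is an assumption for illustration, and the limiting behavior shows how such an operator generalizes maximum and average.

```python
import numpy as np

def mex(x, beta, axis=-1):
    """Log-mean-exp: MEX_beta(x) = (1/beta) * log(mean(exp(beta * x))).
    Interpolates between max (beta -> +inf), average (beta -> 0),
    and min (beta -> -inf). Requires beta != 0; stabilized like
    log-sum-exp to avoid overflow. Illustrative sketch, not the
    paper's exact parameterization."""
    x = np.asarray(x, dtype=float)
    n = x.shape[axis]
    m = np.max(beta * x, axis=axis, keepdims=True)
    lse = m + np.log(np.sum(np.exp(beta * x - m), axis=axis, keepdims=True))
    return np.squeeze(lse - np.log(n), axis=axis) / beta

print(mex(np.array([0.1, 0.5, 0.9]), beta=100.0))  # ~0.9, close to max
print(mex(np.array([0.1, 0.5, 0.9]), beta=1e-6))   # ~0.5, close to mean
```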
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems in computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to overview different examples of
geometric deep learning problems and present available solutions, key
difficulties, applications, and future research directions in this nascent
field.
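To make the generalization concrete, the sketch below implements one graph-convolution layer in the GCN style, one of the families such surveys cover; the self-loops and symmetric normalization are a common design choice assumed here for illustration, not a prescription of this paper.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN-style graph-convolution layer: symmetrically normalize
    the adjacency with self-loops, propagate vertex features along
    edges, then apply a shared linear map and ReLU, the graph analogue
    of convolution followed by a nonlinearity.
    A: (n, n) adjacency, X: (n, d) features, W: (d, d_out) weights."""
    A_hat = A + np.eye(A.shape[0])            # self-loops keep own features
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    P = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(P @ X @ W, 0.0)

# Toy usage: a 3-node path graph with 2-d vertex features.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.random.randn(3, 2)
W = np.random.randn(2, 4)
print(gcn_layer(A, X, W).shape)  # (3, 4)
```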
Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations
The success of deep convolutional architectures is often attributed in part
to their ability to learn multiscale and invariant representations of natural
signals. However, a precise study of these properties and how they affect
learning guarantees is still missing. In this paper, we consider deep
convolutional representations of signals; we study their invariance to
translations and to more general groups of transformations, their stability to
the action of diffeomorphisms, and their ability to preserve signal
information. This analysis is carried out by introducing a multilayer kernel based
on convolutional kernel networks and by studying the geometry induced by the
kernel mapping. We then characterize the corresponding reproducing kernel
Hilbert space (RKHS), showing that it contains a large class of convolutional
neural networks with homogeneous activation functions. This analysis allows us
to separate data representation from learning, and to provide a canonical
measure of model complexity, the RKHS norm, which controls both stability and
generalization of any learned model. In addition to models in the constructed
RKHS, our stability analysis also applies to convolutional networks with
generic activations such as rectified linear units, and we discuss its
relationship with recent generalization bounds based on spectral norms.
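As a pointer to what such a multilayer kernel is built from, here is a minimal sketch of a homogeneous dot-product kernel on patches; the profile kappa(u) = exp(u - 1) is an illustrative admissible choice, not necessarily the one analyzed in the paper. Homogeneity in the norms is what lets the resulting RKHS contain networks with homogeneous activations such as ReLU.

```python
import numpy as np

def homogeneous_kernel(x, y, kappa=lambda u: np.exp(u - 1.0)):
    """Homogeneous dot-product kernel
        K(x, y) = ||x|| * ||y|| * kappa(<x, y> / (||x|| * ||y||)),
    the patch-level building block of multilayer convolutional
    kernels. kappa is an illustrative stand-in; any profile with a
    suitable nonnegative expansion would do."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    if nx == 0.0 or ny == 0.0:
        return 0.0
    return nx * ny * kappa(float(np.dot(x, y)) / (nx * ny))

x, y = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(homogeneous_kernel(x, y))  # ||x|| * ||y|| * exp(cos_angle - 1)
```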
Geometric Wavelet Scattering Networks on Compact Riemannian Manifolds
The Euclidean scattering transform was introduced nearly a decade ago to
improve the mathematical understanding of convolutional neural networks.
Inspired by recent interest in geometric deep learning, which aims to
generalize convolutional neural networks to manifold and graph-structured
domains, we define a geometric scattering transform on manifolds. Similar to
the Euclidean scattering transform, the geometric scattering transform is based
on a cascade of wavelet filters and pointwise nonlinearities. It is invariant
to local isometries and stable to certain types of diffeomorphisms. Empirical
results demonstrate its utility on several geometric learning tasks. Our
results generalize the deformation stability and local translation invariance
of Euclidean scattering, and demonstrate the importance of linking the chosen
filter structures to the underlying geometry of the data.
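A discrete sketch of the cascade, under the assumption that the manifold is sampled so the Laplace-Beltrami operator becomes a Laplacian matrix: spectral wavelets are obtained by dilating the spectrum, and scattering coefficients follow from filtering, a pointwise modulus, and averaging. The band-pass profile h below is a hypothetical choice for illustration.

```python
import numpy as np

def spectral_wavelet(L, f, j, h=lambda lam: lam * np.exp(1.0 - lam)):
    """Filter a signal f on a discretized manifold with a spectral
    wavelet psi_j: diagonalize the Laplacian L (discrete stand-in for
    Laplace-Beltrami), dilate its spectrum by 2^{-j}, and apply the
    band-pass profile h (illustrative, not the paper's exact filter)."""
    lam, U = np.linalg.eigh(L)
    return U @ (h(2.0 ** (-j) * lam) * (U.T @ f))

def first_order_scattering(L, f, scales=(0, 1, 2, 3)):
    """Filter -> pointwise modulus -> global average: the cascade
    that yields locally isometry-invariant coefficients."""
    return np.array([np.abs(spectral_wavelet(L, f, j)).mean()
                     for j in scales])
```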
Surface Networks
We study data-driven representations for three-dimensional triangle meshes,
which are among the most common representations of 3D geometry. Recent
works have developed models that exploit the intrinsic geometry of manifolds
and graphs, namely Graph Neural Networks (GNNs) and their spectral variants,
which learn from the local metric tensor via the Laplacian operator. Despite
offering excellent sample complexity and built-in invariances, intrinsic
geometry alone is invariant to isometric deformations, making it unsuitable for
many applications. To overcome this limitation, we propose several upgrades to
GNNs to leverage extrinsic differential geometry properties of
three-dimensional surfaces, increasing their modeling power.
In particular, we propose to exploit the Dirac operator, whose spectrum
detects principal curvature directions; this is in stark contrast with the
classical Laplace operator, which directly measures mean curvature. We coin the
resulting models \emph{Surface Networks (SN)}. We prove that these models
define shape representations that are stable to deformation and to
discretization, and we demonstrate the efficiency and versatility of SNs on two
challenging tasks: temporal prediction of mesh deformations under non-linear
dynamics and generative models using a variational autoencoder framework with
encoders/decoders given by SNs.
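A minimal sketch of the propagation pattern, with a uniform graph Laplacian standing in for the cotangent Laplace or Dirac operators the paper builds on (the Dirac version is what adds extrinsic curvature information); the layer form below is an illustrative simplification, not the paper's exact architecture.

```python
import numpy as np

def mesh_laplacian(n_verts, faces):
    """Uniform graph Laplacian L = D - A of a triangle mesh, a simple
    intrinsic stand-in for the operators used by Surface Networks;
    edges are the triangle sides."""
    A = np.zeros((n_verts, n_verts))
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            A[a, b] = A[b, a] = 1.0
    return np.diag(A.sum(axis=1)) - A

def surface_layer(Op, X, W):
    """One propagation layer on per-vertex features X: concatenate the
    identity and operator paths, then a shared linear map and ReLU.
    Op is the chosen mesh operator (Laplacian here, Dirac in SNs)."""
    return np.maximum(np.concatenate([X, Op @ X], axis=1) @ W, 0.0)

# Toy usage: a single triangle with 4-d vertex features.
L = mesh_laplacian(3, [(0, 1, 2)])
X = np.random.randn(3, 4)
W = np.random.randn(8, 16)
print(surface_layer(L, X, W).shape)  # (3, 16)
```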