Exact Computation of a Manifold Metric, via Lipschitz Embeddings and Shortest Paths on a Graph
Data-sensitive metrics adapt distances locally based on the density of data
points, with the goal of aligning distances with some notion of similarity. In
this paper, we give the first exact algorithm for computing a data-sensitive
metric called the nearest neighbor metric. In fact, we prove the surprising
result that a previously published approximation algorithm is an exact algorithm.
The nearest neighbor metric can be viewed as a special case of a
density-based distance used in machine learning, or it can be seen as an
example of a manifold metric. Previous computational research on such metrics
despaired of computing exact distances on account of the apparent difficulty of
minimizing over all continuous paths between a pair of points. We leverage the
exact computation of the nearest neighbor metric to compute sparse spanners and
persistent homology. We also explore the behavior of the metric built from
point sets drawn from an underlying distribution and consider the more general
case of inputs that are finite collections of path-connected compact sets.
The main results connect several classical theories such as the conformal
change of Riemannian metrics, the theory of positive definite functions of
Schoenberg, and screw function theory of Schoenberg and Von Neumann. We develop
novel proof techniques based on the combination of screw functions and
Lipschitz extensions that may be of independent interest.
Comment: 15 pages
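Purely as intuition, and not the paper's algorithm, the shortest-paths-on-a-graph
idea can be sketched as follows: build a complete graph over the sample points,
weight each edge by the Euclidean distance raised to a power p > 1 so that hops
through sparse regions become expensive, and read distances off with a standard
shortest-path routine. The exponent p and the helper name graph_density_distance
are illustrative assumptions; NumPy and SciPy are assumed to be available.

# Sketch: a density-based distance via shortest paths on a graph over samples.
# Edge weights |x - y|^p with p > 1 penalize long hops through sparse regions,
# so shortest paths prefer to travel through dense areas of the point set.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def graph_density_distance(points, p=2.0):
    """All-pairs density-sensitive distances on a complete graph (illustrative)."""
    weights = cdist(points, points) ** p       # powered Euclidean edge weights
    return shortest_path(weights, method="D")  # Dijkstra over the dense weight matrix

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(200, 2))
    D = graph_density_distance(pts, p=2.0)
    print(D.shape, D[0, 1])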
Slimness of graphs
Slimness of a graph measures the local deviation of its metric from a tree
metric. In a graph $G=(V,E)$, a geodesic triangle $\triangle(x,y,z)$ with
$x,y,z \in V$ is the union $P(x,y) \cup P(x,z) \cup P(y,z)$ of three shortest
paths connecting these vertices. A geodesic triangle $\triangle(x,y,z)$ is
called $\delta$-slim if for any vertex $u$ on any side $P(x,y)$ the
distance from $u$ to $P(x,z) \cup P(y,z)$ is at most $\delta$, i.e. each path
is contained in the union of the $\delta$-neighborhoods of the two others. A graph
$G$ is called $\delta$-slim if all geodesic triangles in $G$ are
$\delta$-slim. The smallest value $\delta$ for which $G$ is $\delta$-slim is
called the slimness of $G$. In this paper, using the layering partition
technique, we obtain sharp bounds on the slimness of such families of graphs as (1)
graphs with cluster-diameter $\Delta(G)$ of a layering partition of $G$, (2)
graphs with tree-length $\lambda$, (3) graphs with tree-breadth $\rho$, (4)
$k$-chordal graphs, AT-free graphs and HHD-free graphs. Additionally, we show
that the slimness of every 4-chordal graph is at most 2 and characterize those
4-chordal graphs for which the slimness of each of their induced subgraphs is at
most 1.
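As a rough illustration (a sketch of ours, not the paper's method), the
definition can be turned into a brute-force estimate on a small graph: fix one
shortest path per vertex pair (here, whichever path networkx returns) and take,
over all triples, the largest distance from a vertex on one side to the union of
the other two sides. This checks $\delta$-slimness only for the chosen geodesic
triangles; networkx is an assumed dependency.

# Brute-force slimness estimate on a small unweighted graph: fix one shortest
# path per pair and, for every triple {x, y, z}, take the largest distance from
# a vertex on side P(x, y) to the union of P(x, z) and P(y, z).  This bounds
# only the chosen geodesic triangles, not all of them.
from itertools import combinations
import networkx as nx

def slimness_estimate(G):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    path = dict(nx.all_pairs_shortest_path(G))
    delta = 0
    for x, y, z in combinations(G.nodes, 3):
        sides = [path[x][y], path[x][z], path[y][z]]
        for i in range(3):
            others = set(sides[(i + 1) % 3]) | set(sides[(i + 2) % 3])
            for u in sides[i]:
                delta = max(delta, min(dist[u][v] for v in others))
    return delta

if __name__ == "__main__":
    print(slimness_estimate(nx.cycle_graph(6)))  # the 6-cycle has slimness 1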
Fast domino tileability
Domino tileability is a classical problem in Discrete Geometry, famously
solved by Thurston for simply connected regions in nearly linear time in the
area. In this paper, we improve upon Thurston's height function approach,
reducing the running time to nearly linear in the perimeter.
Comment: Appeared in Discrete Comput. Geom. 56 (2016), 377-39
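For a concrete, if slow, baseline that is unrelated to the height-function
machinery in the paper: a region is domino-tileable exactly when its black and
white cells (under the checkerboard coloring) admit a perfect matching along
shared edges, which can be tested with networkx's bipartite matching. The
function name and the (row, column) cell encoding are illustrative assumptions.

# Baseline tileability check by maximum bipartite matching: a region given as a
# set of (row, column) cells is domino-tileable iff its black and white cells
# admit a perfect matching along shared edges.  Much slower than height-function
# methods, but easy to verify.
import networkx as nx
from networkx.algorithms import bipartite

def domino_tileable(cells):
    cells = set(cells)
    if len(cells) % 2:
        return False                       # odd area can never be tiled
    black = {c for c in cells if (c[0] + c[1]) % 2 == 0}
    G = nx.Graph()
    G.add_nodes_from(cells)
    for (r, c) in black:
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in cells:
                G.add_edge((r, c), nb)
    matching = bipartite.maximum_matching(G, top_nodes=black)
    return len(matching) // 2 == len(cells) // 2

if __name__ == "__main__":
    square = [(r, c) for r in range(4) for c in range(4)]
    print(domino_tileable(square))         # True: a 4x4 square is tileable
    print(domino_tileable(square[:-1]))    # False: odd number of cells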
SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels
We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant
of deep neural networks for irregularly structured and geometric input, e.g.,
graphs or meshes. Our main contribution is a novel convolution operator based
on B-splines that makes the computation time independent of the kernel size
due to the local support property of the B-spline basis functions. As a result,
we obtain a generalization of the traditional CNN convolution operator by using
continuous kernel functions parametrized by a fixed number of trainable
weights. In contrast to related approaches that filter in the spectral domain,
the proposed method aggregates features purely in the spatial domain. In
addition, SplineCNN allows complete end-to-end training of deep architectures,
using only the geometric structure as input, instead of handcrafted feature
descriptors. For validation, we apply our method to tasks from the fields of
image graph classification, shape correspondence and graph node classification,
and show that it outperforms or is on par with state-of-the-art approaches while being
significantly faster and having favorable properties like domain-independence.
Comment: Presented at CVPR 201
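A toy sketch of the central idea, using degree-1 (linear) B-splines and a single
pseudo-coordinate per edge: only the two kernel control weights bracketing the
edge coordinate contribute to a message, so the per-edge cost does not grow with
the number of control points. Shapes, names and the NumPy formulation are
illustrative assumptions, not the authors' implementation.

# Toy continuous B-spline kernel on a graph (degree-1 splines, one pseudo-
# coordinate per edge).  For each edge only the two control weights bracketing
# the coordinate u are active, so per-edge cost is independent of the kernel
# size K.
import numpy as np

def spline_conv(x, edges, pseudo, W, W_root):
    """x: (N, F_in) node features; edges: list of (src, dst) pairs;
    pseudo: (E,) edge coordinates in [0, 1]; W: (K, F_in, F_out) control weights."""
    K = W.shape[0]
    out = x @ W_root                       # root (self) contribution
    for (src, dst), u in zip(edges, pseudo):
        t = u * (K - 1)
        k = min(int(t), K - 2)             # index of the left control point
        frac = t - k                       # degree-1 B-spline weights (1 - frac, frac)
        kernel = (1.0 - frac) * W[k] + frac * W[k + 1]
        out[dst] += x[src] @ kernel
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    pseudo = rng.uniform(size=len(edges))
    W, W_root = rng.normal(size=(5, 3, 2)), rng.normal(size=(3, 2))
    print(spline_conv(x, edges, pseudo, W, W_root).shape)  # (4, 2)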
Clustering of spectra and fractals of regular graphs
We exhibit a characteristic structure of the class of all regular graphs of
degree d that stems from the spectra of their adjacency matrices. The structure
has a fractal threadlike appearance. Points with coordinates given by the mean
and variance of the exponentials of graph eigenvalues cluster around a line
segment that we call a filar. Zooming-in reveals that this cluster splits into
smaller segments (filars) labeled by the number of triangles in graphs. Further
zooming-in shows that the smaller filars split into subfilars labeled by the
number of quadrangles in graphs, etc. We call this fractal structure,
discovered in a numerical experiment, a multifilar structure. We also provide a
mathematical explanation of this phenomenon based on the Ihara-Selberg trace
formula, and compute the coordinates and slopes of all filars in terms of
Bessel functions of the first kind.
Comment: 10 pages, 5 figures
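The experiment is easy to reproduce in spirit: sample many random d-regular
graphs, compute the adjacency spectrum, and scatter the mean of the
exponentiated eigenvalues against their variance, coloring points by triangle
count. The sketch below assumes networkx, NumPy and matplotlib, with small sizes
chosen only for speed.

# Reproducing the spirit of the experiment: for many random d-regular graphs,
# plot mean vs. variance of exp(eigenvalue) over the adjacency spectrum and
# color points by the number of triangles, which labels the filars.
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

d, n, samples = 3, 12, 300
means, variances, triangles = [], [], []
for seed in range(samples):
    G = nx.random_regular_graph(d, n, seed=seed)
    w = np.exp(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    means.append(w.mean())
    variances.append(w.var())
    triangles.append(sum(nx.triangles(G).values()) // 3)

plt.scatter(means, variances, c=triangles, s=8)
plt.xlabel("mean of exp(eigenvalues)")
plt.ylabel("variance of exp(eigenvalues)")
plt.colorbar(label="number of triangles")
plt.show()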
Multi-directional Geodesic Neural Networks via Equivariant Convolution
We propose a novel approach for performing convolution of signals on curved
surfaces and show its utility in a variety of geometric deep learning
applications. Key to our construction is the notion of directional functions
defined on the surface, which extend the classic real-valued signals and which
can be naturally convolved with real-valued template functions. As a
result, rather than trying to fix a canonical orientation or only keeping the
maximal response across all alignments of a 2D template at every point of the
surface, as done in previous works, we show how information across all
rotations can be kept across different layers of the neural network. Our
construction, which we call multi-directional geodesic convolution, or
directional convolution for short, allows us, in particular, to propagate and
relate directional information across layers and thus different regions on the
shape. We first define directional convolution in the continuous setting, prove
its key properties and then show how it can be implemented in practice, for
shapes represented as triangle meshes. We evaluate directional convolution in a
wide variety of learning scenarios ranging from classification of signals on
surfaces, to shape segmentation and shape matching, where we show a significant
improvement over several baselines.
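A flat-image analogue of this design choice (not the surface construction from
the paper): rather than keeping only the maximal response over rotated copies of
a template, keep one response channel per rotation so that later layers can
still relate directions. The sketch assumes SciPy's 2D correlation and restricts
itself to 90-degree rotations for simplicity.

# Flat-image analogue of "keep all rotations" vs. "max over rotations":
# correlate with every rotated copy of the template and stack the responses,
# so directional information survives into the next layer instead of being
# collapsed by a max.
import numpy as np
from scipy.signal import correlate2d

def directional_responses(signal, template, n_rot=4):
    """Return an (n_rot, H, W) stack of responses, one per template rotation."""
    return np.stack([
        correlate2d(signal, np.rot90(template, k), mode="same")
        for k in range(n_rot)
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(size=(32, 32))
    templ = rng.normal(size=(5, 5))
    resp = directional_responses(img, templ)   # shape (4, 32, 32)
    angular_max = resp.max(axis=0)             # what a max-pooling variant keeps
    print(resp.shape, angular_max.shape)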