Optimally fast incremental Manhattan plane embedding and planar tight span construction
We describe a data structure, a rectangular complex, that can be used to
represent hyperconvex metric spaces that have the same topology (although not
necessarily the same distance function) as subsets of the plane. We show how to
use this data structure to construct the tight span of a metric space given as
an n x n distance matrix, when the tight span is homeomorphic to a subset of
the plane, in time O(n^2), and to add a single point to a planar tight span in
time O(n). As an application of this construction, we show how to test whether
a given finite metric space embeds isometrically into the Manhattan plane in
time O(n^2), and add a single point to the space and re-test whether it has
such an embedding in time O(n).

Comment: 39 pages, 15 figures
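As a small illustration of the metric this paper works with (not of the paper's embedding algorithm), the sketch below computes pairwise Manhattan (L1) distances for points in the plane; the function name and the sample points are ours, chosen only for the example.

```python
import numpy as np

def manhattan_distance_matrix(points):
    """Pairwise L1 (Manhattan) distances for an n x 2 array of
    planar points. Isometric embedding into the Manhattan plane
    means realizing a given distance matrix exactly this way."""
    pts = np.asarray(points, dtype=float)
    # Broadcast to an n x n x 2 array of coordinate differences,
    # then sum absolute differences over the coordinate axis.
    diff = pts[:, None, :] - pts[None, :, :]
    return np.abs(diff).sum(axis=-1)

pts = [(0, 0), (2, 1), (1, 3)]
D = manhattan_distance_matrix(pts)
```

Testing whether an arbitrary n x n matrix arises this way (and maintaining that test under point insertion) is the nontrivial problem the paper solves in O(n^2) and O(n) time respectively.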
Geometric deep learning: going beyond Euclidean data
Many scientific fields study data with an underlying structure that is a
non-Euclidean space. Some examples include social networks in computational
social sciences, sensor networks in communications, functional networks in
brain imaging, regulatory networks in genetics, and meshed surfaces in computer
graphics. In many applications, such geometric data are large and complex (in
the case of social networks, on the scale of billions), and are natural targets
for machine learning techniques. In particular, we would like to use deep
neural networks, which have recently proven to be powerful tools for a broad
range of problems from computer vision, natural language processing, and audio
analysis. However, these tools have been most successful on data with an
underlying Euclidean or grid-like structure, and in cases where the invariances
of these structures are built into networks used to model them. Geometric deep
learning is an umbrella term for emerging techniques attempting to generalize
(structured) deep neural models to non-Euclidean domains such as graphs and
manifolds. The purpose of this paper is to overview different examples of
geometric deep learning problems and present available solutions, key
difficulties, applications, and future research directions in this nascent
field.
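To make the idea of generalizing convolutions to graphs concrete, here is a minimal sketch of one propagation step in the style of a graph convolutional layer (symmetric-normalized adjacency with self-loops, then a linear map and ReLU). This is a generic illustration in NumPy, not code from any of the surveyed methods; the function name and the toy graph are ours.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step on a non-Euclidean (graph) domain:
    H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W), where D is the degree
    matrix of A + I. Each node aggregates features from itself and
    its neighbors, weighted by degree normalization."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                      # add self-loops
    d = A_hat.sum(axis=1)                      # degrees incl. self-loop
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    return np.maximum(H, 0.0)                  # ReLU nonlinearity

# Toy 3-node path graph with 2-dimensional node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
W = np.eye(2)
H = gcn_layer(A, X, W)
```

The key point, echoed in the abstract, is that the layer is defined through the graph structure itself rather than a fixed Euclidean grid, so the same weights W apply to graphs of any size.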
Steinitz Theorems for Orthogonal Polyhedra
We define a simple orthogonal polyhedron to be a three-dimensional polyhedron
with the topology of a sphere in which three mutually-perpendicular edges meet
at each vertex. By analogy to Steinitz's theorem characterizing the graphs of
convex polyhedra, we find graph-theoretic characterizations of three classes of
simple orthogonal polyhedra: corner polyhedra, which can be drawn by isometric
projection in the plane with only one hidden vertex, xyz polyhedra, in which
each axis-parallel line through a vertex contains exactly one other vertex, and
arbitrary simple orthogonal polyhedra. In particular, the graphs of xyz
polyhedra are exactly the bipartite cubic polyhedral graphs, and every
bipartite cubic polyhedral graph with a 4-connected dual graph is the graph of
a corner polyhedron. Based on our characterizations we find efficient
algorithms for constructing orthogonal polyhedra from their graphs.

Comment: 48 pages, 31 figures
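Two of the graph-theoretic conditions in the characterization of xyz polyhedra, being cubic (3-regular) and bipartite, are easy to check directly. The sketch below tests just these two necessary conditions (it does not check planarity or polyhedrality); function names and the cube example are ours.

```python
from collections import deque

def is_cubic(adj):
    """Every vertex has exactly three neighbors, as required when
    three mutually perpendicular edges meet at each vertex."""
    return all(len(nbrs) == 3 for nbrs in adj.values())

def is_bipartite(adj):
    """Standard BFS 2-coloring test for bipartiteness."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

# The cube graph: vertices are 3-bit integers, edges flip one bit.
# The cube itself is the simplest xyz polyhedron.
cube = {v: [v ^ (1 << i) for i in range(3)] for v in range(8)}
```

For the full characterization one must also verify that the graph is polyhedral (planar and 3-connected), which the paper's algorithms handle; the check above only rules graphs out.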
Manifold Learning for Natural Image Sets, Doctoral Dissertation August 2006
The field of manifold learning provides powerful tools for parameterizing high-dimensional data points with a small number of parameters when the data lies on or near some manifold. Images can be thought of as points in a high-dimensional image space where each coordinate represents the intensity value of a single pixel. Manifold learning techniques have been successfully applied to simple image sets, such as handwriting data and a statue photographed in a tightly controlled environment. However, they fail on natural image sets, even those that vary due to only a single degree of freedom, such as a person walking or a heart beating. Parameterizing such data sets would allow additional constraints on traditional computer vision problems such as segmentation and tracking. This dissertation explores the reasons why classical manifold learning algorithms fail on natural image sets and proposes new algorithms for parameterizing this type of data.
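The core step shared by classical methods such as Isomap is classical multidimensional scaling: recover low-dimensional coordinates from a matrix of pairwise distances via double centering and an eigendecomposition. A minimal NumPy sketch of that step (our own illustration, not code from the dissertation):

```python
import numpy as np

def classical_mds(D, k=1):
    """Classical MDS: given an n x n matrix of pairwise distances D,
    return n points in R^k whose distances approximate D. This is
    the embedding step used by Isomap-style manifold learning."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)         # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]       # take k largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Three "images" lying on a 1-D manifold at positions 0, 1, 2:
# classical MDS recovers the single underlying parameter (up to sign).
D = np.array([[0., 1., 2.],
              [1., 0., 1.],
              [2., 1., 0.]])
Y = classical_mds(D, k=1)
```

The failures described in the abstract arise a step earlier, when noisy or high-curvature natural image data yields distance estimates that do not reflect the true manifold geometry, which is what the dissertation's proposed algorithms address.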