Convexity-Increasing Morphs of Planar Graphs
We study the problem of convexifying drawings of planar graphs. Given any
planar straight-line drawing of an internally 3-connected graph, we show how to
morph the drawing to one with strictly convex faces while maintaining planarity
at all times. Our morph is convexity-increasing, meaning that once an angle is
convex, it remains convex. We give an efficient algorithm that constructs such
a morph as a composition of a linear number of steps where each step either
moves vertices along horizontal lines or moves vertices along vertical lines.
Moreover, we show that a linear number of steps is worst-case optimal.
To obtain our result, we use a well-known technique by Hong and Nagamochi for
finding redrawings with convex faces while preserving y-coordinates. Using a
variant of Tutte's graph drawing algorithm, we obtain a new proof of Hong and
Nagamochi's result which comes with a better running time. This is of
independent interest, as Hong and Nagamochi's technique serves as a building
block in existing morphing algorithms.
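To make the step structure concrete, here is a minimal Python sketch (with illustrative names, not the paper's implementation) of a single horizontal step of such a morph: every vertex keeps its y-coordinate and moves linearly in x only, so the intermediate drawings are determined by one interpolation parameter.

# Minimal sketch of one axis-aligned morphing step: vertices move along
# horizontal lines only (y-coordinates are preserved). Drawings are dicts
# mapping vertex -> (x, y); names and data are illustrative.

def horizontal_step(start, target, t):
    """Return the drawing at time t in [0, 1] of a horizontal step."""
    frame = {}
    for v, (x0, y0) in start.items():
        x1, y1 = target[v]
        assert abs(y0 - y1) < 1e-9, "a horizontal step must preserve y-coordinates"
        frame[v] = (x0 + t * (x1 - x0), y0)
    return frame

# Example: one vertex of a triangle moves horizontally; query the midpoint of the step.
start = {"a": (0.0, 0.0), "b": (2.0, 0.0), "c": (1.0, 1.0)}
target = {"a": (0.0, 0.0), "b": (2.0, 0.0), "c": (0.5, 1.0)}
print(horizontal_step(start, target, 0.5))  # c is at (0.75, 1.0)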
A New Distributed Localization Method for Sensor Networks
This paper studies the problem of determining the sensor locations in a large
sensor network using relative distance (range) measurements only. Our work
follows from a seminal paper by Khan et al. [1] where a distributed algorithm,
known as DILOC, for sensor localization is given using the barycentric
coordinates. A main limitation of the DILOC algorithm is that all sensor nodes
must be inside the convex hull of the anchor nodes. In this paper, we consider
a general sensor network without the convex hull assumption, which incurs
challenges in determining the sign pattern of the barycentric coordinates. A
criterion is developed to address this issue based on available distance
measurements. Also, a new distributed algorithm is proposed to guarantee the
asymptotic localization of all localizable sensor nodes.
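As a rough illustration of the barycentric-coordinate mechanism underlying DILOC-style methods, the Python sketch below iterates the update in which each non-anchor node replaces its position estimate by a barycentric combination of its neighbours' estimates. The weights are assumed to have been computed beforehand from the range measurements; all names and the toy numbers are illustrative and are not taken from [1] or from this paper.

import numpy as np

# Each non-anchor node i repeatedly sets its estimate to
# sum_j a_ij * (estimate of neighbour j), where the a_ij are barycentric
# coefficients (each row sums to 1). Anchors keep their known positions.

def barycentric_iteration(weights, anchors, n_iters=200, dim=2):
    est = {i: np.zeros(dim) for i in weights}               # unknown nodes start at the origin
    est.update({i: np.asarray(p, float) for i, p in anchors.items()})
    for _ in range(n_iters):
        for i, nbrs in weights.items():
            est[i] = sum(a_ij * est[j] for j, a_ij in nbrs.items())
    return est

# Toy example: one unknown node whose neighbours are three anchors.
anchors = {0: (0.0, 0.0), 1: (4.0, 0.0), 2: (0.0, 4.0)}
weights = {3: {0: 0.25, 1: 0.25, 2: 0.5}}                   # hypothetical coefficients
print(barycentric_iteration(weights, anchors)[3])           # converges to (1.0, 2.0)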
Tropical Principal Component Analysis and its Application to Phylogenetics
Principal component analysis is a widely-used method for the dimensionality
reduction of a given data set in a high-dimensional Euclidean space. Here we
define and analyze two analogues of principal component analysis in the setting
of tropical geometry. In one approach, we study the Stiefel tropical linear
space of fixed dimension closest to the data points in the tropical projective
torus; in the other approach, we consider the tropical polytope with a fixed
number of vertices closest to the data points. We then give approximative
algorithms for both approaches and apply them to phylogenetics, testing the
methods on simulated phylogenetic data and on an empirical dataset of
Apicomplexa genomes.
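For readers unfamiliar with the tropical setting, the Python sketch below shows two standard ingredients such approaches rest on: the tropical metric on the tropical projective torus and the max-plus projection of a point onto the tropical polytope spanned by fixed vertices. The function names are illustrative; this is not the authors' implementation of either method.

import numpy as np

# Tropical (generalized Hilbert projective) distance between points of the
# tropical projective torus, represented by vectors modulo the all-ones vector.
def tropical_distance(x, y):
    d = np.asarray(x, float) - np.asarray(y, float)
    return d.max() - d.min()

# Max-plus projection of x onto the tropical polytope spanned by the rows of
# `vertices`: pi(x)_k = max_i (lambda_i + v_ik) with lambda_i = min_j (x_j - v_ij).
def project_onto_tropical_polytope(x, vertices):
    V = np.asarray(vertices, float)
    lam = (np.asarray(x, float) - V).min(axis=1)
    return (lam[:, None] + V).max(axis=0)

x = np.array([3.0, 0.0, 1.0])
vertices = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 1.0]])
p = project_onto_tropical_polytope(x, vertices)
print(p, tropical_distance(x, p))  # projection and its tropical-metric residual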
Computing a Nonnegative Matrix Factorization -- Provably
In the Nonnegative Matrix Factorization (NMF) problem we are given an $n \times m$ nonnegative matrix $M$ and an integer $r > 0$. Our goal is to express $M$ as $AW$ where $A$ and $W$ are nonnegative matrices of size $n \times r$ and $r \times m$ respectively. In some applications, it makes sense to ask instead for the product $AW$ to approximate $M$ -- i.e. (approximately) minimize $\|M - AW\|_F$ where $\|\cdot\|_F$ denotes the Frobenius norm; we
refer to this as Approximate NMF. This problem has a rich history spanning
quantum mechanics, probability theory, data analysis, polyhedral combinatorics,
communication complexity, demography, chemometrics, etc. In the past decade NMF
has become enormously popular in machine learning, where $A$ and $W$ are
computed using a variety of local search heuristics. Vavasis proved that this
problem is NP-complete. We initiate a study of when this problem is solvable in
polynomial time:
1. We give a polynomial-time algorithm for exact and approximate NMF for every constant $r$. Indeed NMF is most interesting in applications precisely when $r$ is small.
2. We complement this with a hardness result: if exact NMF can be solved in time $(nm)^{o(r)}$, then 3-SAT has a sub-exponential time algorithm. This rules
out substantial improvements to the above algorithm.
3. We give an algorithm that runs in time polynomial in $n$, $m$ and $r$ under the separability condition identified by Donoho and Stodden in 2003. The
algorithm may be practical since it is simple and noise tolerant (under benign
assumptions). Separability is believed to hold in many practical settings.
To the best of our knowledge, this last result is the first example of a
polynomial-time algorithm that provably works under a non-trivial condition on
the input and we believe that this will be an interesting and important
direction for future work.
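To make the objective concrete, the Python sketch below runs one of the local-search heuristics alluded to above (multiplicative updates that decrease $\|M - AW\|_F$ while keeping $A$ and $W$ nonnegative). It is not one of the provable algorithms of this paper; all parameter choices are illustrative.

import numpy as np

# Multiplicative-update heuristic for approximate NMF. Dimensions follow the
# abstract: M is n x m, A is n x r, W is r x m, all entrywise nonnegative.
def nmf_multiplicative(M, r, n_iters=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = M.shape
    A = rng.random((n, r))
    W = rng.random((r, m))
    for _ in range(n_iters):
        W *= (A.T @ M) / (A.T @ A @ W + eps)   # updates keep W nonnegative
        A *= (M @ W.T) / (A @ W @ W.T + eps)   # updates keep A nonnegative
    return A, W

M = np.random.default_rng(1).random((6, 5))    # a small nonnegative matrix
A, W = nmf_multiplicative(M, r=2)
print(np.linalg.norm(M - A @ W))               # Frobenius-norm residual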
A Characterization Theorem and An Algorithm for A Convex Hull Problem
Given $S = \{v_1, \dots, v_n\} \subset \mathbb{R}^m$ and $p \in \mathbb{R}^m$, testing if $p \in conv(S)$, the convex hull of $S$, is a fundamental problem in computational geometry and linear programming. First, we prove a Euclidean {\it distance duality}, distinct from classical separation theorems such as Farkas Lemma: $p$ lies in $conv(S)$ if and only if for each $p' \in \mathbb{R}^m$ there exists a {\it pivot}, $v_j \in S$ satisfying $d(p', v_j) \geq d(p, v_j)$. Equivalently, $p \not\in conv(S)$ if and only if there exists a {\it witness}, $p' \in \mathbb{R}^m$ whose Voronoi cell relative to $p$ contains $S$. A witness separates $p$ from $conv(S)$ and approximates $d(p, conv(S))$ to within a factor of two. Next, we describe the {\it Triangle Algorithm}: given $\epsilon \in (0,1)$, an {\it iterate}, $p' \in conv(S)$, and $v \in S$, if $d(p, p') < \epsilon\, d(p, v)$, it stops. Otherwise, if there exists a pivot $v_j$, it replaces $v$ with $v_j$ and $p'$ with the projection of $p$ onto the line $p' v_j$. Repeating this process, the algorithm terminates in $O(mn \min\{\epsilon^{-2}, c^{-1}\ln \epsilon^{-1}\})$ arithmetic operations, where $c$ is the {\it visibility factor}, a constant satisfying $c \geq \epsilon^2$ and $\sin(\angle p p' v_j) \leq 1/\sqrt{1+c}$, over all iterates $p'$. Additionally,
(i) we prove a {\it strict distance duality} and a related minimax theorem, resulting in more effective pivots; (ii) describe $O(mn)$-time algorithms that may compute a witness or a good approximate solution; (iii) prove a {\it generalized distance duality} and describe a corresponding generalized Triangle Algorithm; (iv) prove a {\it sensitivity theorem} to analyze the complexity of solving LP feasibility via the Triangle Algorithm. The Triangle Algorithm is practical and competitive with the simplex method, sparse greedy approximation and first-order methods.
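The iteration just described translates almost directly into code. The Python sketch below maintains an iterate $p'$, looks for a pivot, and moves $p'$ to the point of the segment joining it to the pivot that is closest to $p$; the stopping rule relative to the farthest point of $S$ and the other details are illustrative simplifications, not the authors' implementation.

import numpy as np

# Triangle-Algorithm-style membership test for p in conv(S).
def triangle_algorithm(S, p, eps=1e-3, max_iters=100000):
    S, p = np.asarray(S, float), np.asarray(p, float)
    p_prime = S[0].copy()                                    # any point of S is a valid start
    for _ in range(max_iters):
        dists_p = np.linalg.norm(S - p, axis=1)
        if np.linalg.norm(p - p_prime) < eps * dists_p.max():
            return p_prime, True                             # p is (approximately) in conv(S)
        pivots = np.where(np.linalg.norm(S - p_prime, axis=1) >= dists_p)[0]
        if len(pivots) == 0:
            return p_prime, False                            # p_prime is a witness: p is outside conv(S)
        v = S[pivots[0]]
        d = v - p_prime                                      # move p_prime to the point of [p_prime, v] closest to p
        t = np.clip(np.dot(p - p_prime, d) / np.dot(d, d), 0.0, 1.0)
        p_prime = p_prime + t * d
    return p_prime, None

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(triangle_algorithm(square, np.array([0.5, 0.5]))[1])   # True: inside
print(triangle_algorithm(square, np.array([2.0, 2.0]))[1])   # False: outside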
An Exponential Lower Bound on the Complexity of Regularization Paths
For a variety of regularized optimization problems in machine learning,
algorithms computing the entire solution path have been developed recently.
Most of these methods are quadratic programs that are parameterized by a single
parameter, as for example the Support Vector Machine (SVM). Solution path
algorithms compute not only the solution for one particular value of the
regularization parameter but the entire path of solutions, making the selection
of an optimal parameter much easier.
It has been assumed that these piecewise linear solution paths have only
linear complexity, i.e. linearly many bends. We prove that for the support
vector machine this complexity can be exponential in the number of training
points in the worst case. More strongly, we construct a single instance of $n$ input points in $d$ dimensions for an SVM such that at least $\Theta(2^{n/2}) = \Theta(2^d)$ many distinct subsets of support vectors occur as the regularization parameter changes.
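As a small empirical counterpart to this statement, the Python sketch below (assuming scikit-learn is available) scans a grid of values of the regularization parameter C for a linear SVM and counts how many distinct support-vector sets appear. A finite grid only lower-bounds the number of path segments, and the toy data is arbitrary, not the hard instance constructed in the paper.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=20) > 0, 1, -1)

support_sets = set()
for C in np.logspace(-3, 3, 200):                      # sample the regularization path
    model = SVC(kernel="linear", C=C).fit(X, y)
    support_sets.add(frozenset(model.support_.tolist()))

print(len(support_sets), "distinct support-vector sets observed on the grid")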
Optimally fast incremental Manhattan plane embedding and planar tight span construction
We describe a data structure, a rectangular complex, that can be used to
represent hyperconvex metric spaces that have the same topology (although not
necessarily the same distance function) as subsets of the plane. We show how to
use this data structure to construct the tight span of a metric space given as
an n x n distance matrix, when the tight span is homeomorphic to a subset of
the plane, in time O(n^2), and to add a single point to a planar tight span in
time O(n). As an application of this construction, we show how to test whether
a given finite metric space embeds isometrically into the Manhattan plane in
time O(n^2), and add a single point to the space and re-test whether it has
such an embedding in time O(n).
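The easy direction of the embedding question, checking that proposed plane coordinates realize a given n x n distance matrix under the L1 (Manhattan) metric, takes O(n^2) time and can be sketched in a few lines of Python; this is only a verifier with illustrative names, not the paper's construction algorithm.

import numpy as np

# Verify that candidate coordinates in the Manhattan plane realize a distance matrix D.
def is_l1_plane_embedding(D, coords, tol=1e-9):
    P = np.asarray(coords, float)                                   # shape (n, 2)
    realized = np.abs(P[:, None, :] - P[None, :, :]).sum(axis=2)    # pairwise L1 distances
    return np.allclose(realized, np.asarray(D, float), atol=tol)

# Example: three points on an L-shaped path in the Manhattan plane.
coords = [(0, 0), (2, 0), (2, 3)]
D = [[0, 2, 5],
     [2, 0, 3],
     [5, 3, 0]]
print(is_l1_plane_embedding(D, coords))                             # True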