Optimal Area-Sensitive Bounds for Polytope Approximation
Approximating convex bodies is a fundamental question in geometry and has a
wide variety of applications. Given a convex body $K$ of diameter $\Delta$ in
$\mathbb{R}^d$ for fixed $d$, the objective is to minimize the number of
vertices (alternatively, the number of facets) of an approximating polytope for
a given Hausdorff error $\varepsilon$. The best known uniform bound, due to
Dudley (1974), shows that $O((\Delta/\varepsilon)^{(d-1)/2})$ facets suffice.
While this bound is optimal in the case of a Euclidean ball, it is far from
optimal for ``skinny'' convex bodies.
A natural way to characterize a convex object's skinniness is in terms of its
relationship to the Euclidean ball. Given a convex body $K$, define its surface
diameter $\Delta_s(K)$ to be the diameter of a Euclidean ball of the same
surface area as $K$. It follows from generalizations of the isoperimetric
inequality that $\Delta_s(K) \le \Delta$.
We show that, under the assumption that the width of the body in any
direction is at least $\varepsilon$, it is possible to approximate a convex
body using $O((\Delta_s/\varepsilon)^{(d-1)/2})$ facets. This bound is
never worse than the previous bound and may be significantly better for skinny
bodies. The bound is tight, in the sense that for any value of $\Delta_s$,
there exist convex bodies that, up to constant factors, require this many
facets.
The improvement arises from a novel approach to sampling points on the
boundary of a convex body. We employ a classical concept from convexity, called
Macbeath regions. We demonstrate that Macbeath regions in $K$ and in $K$'s
polar behave much like polar pairs. We then apply known results on the Mahler
volume to bound their number.
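Since Macbeath regions drive the improvement, it may help to recall their standard definition from the convexity literature (this definition is background knowledge, not restated in the abstract):

```latex
% Macbeath region of K at an interior point x: the largest
% centrally symmetric subset of K with center x.
M_K(x) = K \cap (2x - K)
% Shrunken variant, scaled about x by a factor \lambda \in (0,1],
% commonly used in cap-covering and approximation arguments:
M_K^{\lambda}(x) = x + \lambda \bigl( (K \cap (2x - K)) - x \bigr)
```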
Practical and Optimal LSH for Angular Distance
We show the existence of a Locality-Sensitive Hashing (LSH) family for the
angular distance that yields an approximate Near Neighbor Search algorithm with
the asymptotically optimal running time exponent. Unlike earlier algorithms
with this property (e.g., Spherical LSH [Andoni, Indyk, Nguyen, Razenshteyn
2014], [Andoni, Razenshteyn 2015]), our algorithm is also practical, improving
upon the well-studied hyperplane LSH [Charikar, 2002] in practice. We also
introduce a multiprobe version of this algorithm, and conduct experimental
evaluation on real and synthetic data sets.
We complement the above positive results with a fine-grained lower bound for
the quality of any LSH family for angular distance. Our lower bound implies
that the above LSH family exhibits a trade-off between evaluation time and
quality that is close to optimal for a natural class of LSH functions.

Comment: 22 pages; an extended abstract is to appear in the proceedings of the
29th Annual Conference on Neural Information Processing Systems (NIPS 2015).
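The hyperplane LSH baseline cited above (Charikar, 2002) is easy to sketch. The following is a minimal pure-Python illustration of that baseline, not the new family described in the abstract; names such as `hyperplane_lsh` are our own:

```python
import math
import random

def hyperplane_lsh(dim, num_bits, seed=0):
    """Build a hyperplane-LSH hash function (Charikar 2002): each bit is
    the sign of the dot product with a random Gaussian direction."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)]
              for _ in range(num_bits)]
    def h(x):
        bits = 0
        for r in planes:
            dot = sum(ri * xi for ri, xi in zip(r, x))
            bits = (bits << 1) | (1 if dot >= 0 else 0)
        return bits
    return h

def collision_probability(x, y):
    """For a single random hyperplane, Pr[bit_x == bit_y] = 1 - theta/pi,
    where theta is the angle between x and y."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    theta = math.acos(max(-1.0, min(1.0, dot / (nx * ny))))
    return 1.0 - theta / math.pi
```

The per-bit collision probability $1 - \theta/\pi$ depends only on the angle between the vectors, which is what makes this family locality-sensitive for angular distance.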
On the Combinatorial Complexity of Approximating Polytopes
Approximating convex bodies succinctly by convex polytopes is a fundamental
problem in discrete geometry. A convex body $K$ of diameter $\mathrm{diam}(K)$
is given in Euclidean $d$-dimensional space, where $d$ is a constant. Given an
error parameter $\varepsilon > 0$, the objective is to determine a polytope of
minimum combinatorial complexity whose Hausdorff distance from $K$ is at most
$\varepsilon \cdot \mathrm{diam}(K)$. By combinatorial complexity we mean the
total number of faces of all dimensions of the polytope. A well-known result by
Dudley implies that $O(1/\varepsilon^{(d-1)/2})$ facets suffice, and a dual
result by Bronshteyn and Ivanov similarly bounds the number of vertices, but
neither result bounds the total combinatorial complexity. We show that there
exists an approximating polytope whose total combinatorial complexity is
$\tilde{O}(1/\varepsilon^{(d-1)/2})$, where $\tilde{O}$ conceals a
polylogarithmic factor in $1/\varepsilon$. This is a significant improvement
upon the best known bound, which is roughly $O(1/\varepsilon^{d-2})$.
Our result is based on a novel combination of both old and new ideas. First,
we employ Macbeath regions, a classical structure from the theory of convexity.
The construction of our approximating polytope employs a new stratified
placement of these regions. Second, in order to analyze the combinatorial
complexity of the approximating polytope, we present a tight analysis of a
width-based variant of Bárány and Larman's economical cap covering.
Finally, we use a deterministic adaptation of the witness-collector technique
(developed recently by Devillers et al.) in the context of our stratified
construction.

Comment: In Proceedings of the 32nd International Symposium on Computational
Geometry (SoCG 2016); accepted to the SoCG 2016 special issue of Discrete and
Computational Geometry.
Three Puzzles on Mathematics, Computation, and Games
In this lecture I will talk about three mathematical puzzles involving
mathematics and computation that have preoccupied me over the years. The first
puzzle is to understand the amazing success of the simplex algorithm for linear
programming. The second puzzle is about errors made when votes are counted
during elections. The third puzzle is: are quantum computers possible?

Comment: ICM 2018 plenary lecture, Rio de Janeiro; 36 pages, 7 figures.
Minimum Convex Partitions and Maximum Empty Polytopes
Let $S$ be a set of $n$ points in $\mathbb{R}^d$. A Steiner convex partition
is a tiling of $\mathrm{conv}(S)$ with empty convex bodies. For every integer
$d$, we show that $S$ admits a Steiner convex partition with at most
$\lceil (n-1)/d \rceil$ tiles. This bound is the best possible for points in general
position in the plane, and it is best possible apart from constant factors in
every fixed dimension $d \ge 3$. We also give the first constant-factor
approximation algorithm for computing a minimum Steiner convex partition of a
planar point set in general position. Establishing a tight lower bound for the
maximum volume of a tile in a Steiner convex partition of any $n$ points in the
unit cube is equivalent to a famous problem of Danzer and Rogers. It is
conjectured that the volume of the largest tile is $\omega(1/n)$.
Here we give a $(1-\varepsilon)$-approximation algorithm for computing the
maximum volume of an empty convex body amidst $n$ given points in the
$d$-dimensional unit box $[0,1]^d$.

Comment: 16 pages, 4 figures; revised write-up with some running times
improved.
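As a quick numeric sanity check of the tile bound (assuming the bound has the form $\lceil (n-1)/d \rceil$, as in the published statement of this result):

```latex
% n = 9 points in the plane (d = 2):
\left\lceil \tfrac{n-1}{d} \right\rceil
  = \left\lceil \tfrac{9-1}{2} \right\rceil
  = 4 \text{ tiles.}
```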
A geometric approach to archetypal analysis and non-negative matrix factorization
Archetypal analysis and non-negative matrix factorization (NMF) are staples
in a statistician's toolbox for dimension reduction and exploratory data
analysis. We describe a geometric approach to both NMF and archetypal analysis
by interpreting both problems as finding extreme points of the data cloud. We
also develop and analyze an efficient approach to finding extreme points in
high dimensions. For modern massive datasets that are too large to fit on a
single machine and must be stored in a distributed setting, our approach makes
only a small number of passes over the data. In fact, it is possible to obtain
the NMF or perform archetypal analysis with just two passes over the data.

Comment: 36 pages, 13 figures.
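The "extreme points of the data cloud" viewpoint can be made concrete. The sketch below is an illustrative toy, not the authors' algorithm: every extreme point of a finite point set maximizes some linear functional, so a single pass that records the maximizer of many random directions yields candidate extreme points (function and parameter names are hypothetical):

```python
import random

def extreme_point_candidates(points, num_directions=64, seed=0):
    """Illustrative sketch: draw random Gaussian directions and, in one
    pass over the data, record which point maximizes each direction.
    The union of these maximizers is a set of candidate extreme points,
    e.g. usable as anchors for NMF or archetypal analysis."""
    rng = random.Random(seed)
    dim = len(points[0])
    dirs = [[rng.gauss(0.0, 1.0) for _ in range(dim)]
            for _ in range(num_directions)]
    best = {}  # direction index -> (best score, index of best point)
    for idx, p in enumerate(points):  # single pass over the data
        for j, d in enumerate(dirs):
            score = sum(di * pi for di, pi in zip(d, p))
            if j not in best or score > best[j][0]:
                best[j] = (score, idx)
    return sorted({idx for _, idx in best.values()})
```

Because only argmax state is kept per direction, the memory footprint is independent of the number of points, which is the property that matters in the distributed, few-passes setting the abstract describes.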