Faster Deterministic Volume Estimation in the Oracle Model via Thin Lattice Coverings
We give a 2^{O(n)}(1+1/ε)^n time and poly(n)-space deterministic algorithm for computing a (1+ε)^n approximation to the volume of a general convex body K, which comes close to matching the (1+c/ε)^{n/2} lower bound for volume estimation in the oracle model by Bárány and Füredi (STOC 1986, Proc. Amer. Math. Soc. 1988). This improves on the previous results of Dadush and Vempala (Proc. Nat'l Acad. Sci. 2013), which gave the above result only for symmetric bodies and achieved a dependence of 2^{O(n)}(1 + log^{5/2}(1/ε)/ε^3)^n.
For our methods, we reduce the problem of volume estimation in K to counting lattice points in K ⊆ R^n (via enumeration) for a specially constructed lattice L: a so-called thin covering of space with respect to K (more precisely, one for which L + K = R^n and vol_n(K)/det(L) = 2^{O(n)}). The trade-off between time and approximation ratio is achieved by scaling down the lattice.
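To make the counting reduction concrete, here is a toy sketch (our own illustration, not the paper's construction: the scaled integer lattice s·Z^n stands in for the thin covering lattice L, and K is given by a membership oracle):

    import itertools

    def estimate_volume(in_K, n, radius, s):
        """Estimate vol(K) by counting points of the scaled lattice (s*Z)^n
        inside K and weighting each by the cell volume det(s*Z^n) = s^n.
        A thin covering lattice makes such a count accurate up to 2^{O(n)};
        shrinking s trades running time for accuracy."""
        ticks = range(-int(radius // s), int(radius // s) + 1)
        count = sum(1 for p in itertools.product(ticks, repeat=n)
                    if in_K([s * c for c in p]))
        return count * s ** n

    # Example: K = Euclidean unit disk in R^2 (true area: pi).
    in_disk = lambda x: sum(c * c for c in x) <= 1.0
    print(estimate_volume(in_disk, n=2, radius=1.0, s=0.05))  # ~3.13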
As our main technical contribution, we give the first deterministic 2^{O(n)}-time and poly(n)-space construction of thin covering lattices for general convex bodies. This improves on a recent construction of Alon et al. (STOC 2013), which requires exponential space and only works for symmetric bodies. For our construction, we combine the use of the M-ellipsoid from convex geometry (Milman, C. R. Math. Acad. Sci. Paris 1986) together with lattice sparsification and densification techniques (Dadush and Kun, SODA 2013; Rogers, J. London Math. Soc. 1950).
On the Shadow Simplex Method for Curved Polyhedra
We study the simplex method over polyhedra satisfying certain "discrete curvature" lower bounds, which enforce that the boundary always meets vertices at sharp angles. Motivated by linear programs with totally unimodular constraint matrices, recent results of Bonifas et al. (SOCG 2012), Brunsch and Röglin (ICALP 2013), and Eisenbrand and Vempala (2014) have improved our understanding of such polyhedra.
We develop a new type of dual analysis of the shadow simplex method which provides a clean
and powerful tool for improving all previously mentioned results. Our methods are inspired by
the recent work of Bonifas and the first named author [4], who analyzed a remarkably similar
process as part of an algorithm for the Closest Vector Problem with Preprocessing.
For our first result, we obtain a constructive diameter bound of O((n^2/δ) ln(n/δ)) for n-dimensional polyhedra with curvature parameter δ ∈ [0, 1]. For the class of polyhedra arising from totally unimodular constraint matrices, this implies a bound of O(n^3 ln n). For linear optimization, given an initial feasible vertex, we show that an optimal vertex can be found using an expected O((n^3/δ) ln(n/δ)) simplex pivots, each requiring O(mn) time to compute. An initial feasible solution can be found using O((mn^3/δ) ln(n/δ)) pivot steps.
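For context on the totally unimodular case (our own one-line check, using the value δ = 1/n that the works cited above establish for such polyhedra): substituting δ = 1/n into the diameter bound gives O((n^2 · n) ln(n · n)) = O(n^3 ln n).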
Short Paths on the Voronoi Graph and Closest Vector Problem with Preprocessing
Improving on the Voronoi cell based techniques of [28, 24], we give a Las Vegas Õ(2^n) expected time and space algorithm for CVPP (the preprocessing version of the Closest Vector Problem, CVP). This improves on the Õ(4^n) deterministic runtime of the Micciancio-Voulgaris algorithm [24] (henceforth MV) for CVPP at the cost of a polynomial amount of randomness (which only affects runtime, not correctness).
As in MV, our algorithm proceeds by computing a short path on the Voronoi graph of the lattice, where lattice points are adjacent if their Voronoi cells share a common facet, from the origin to a closest lattice vector. Our main technical contribution is a randomized procedure that, given the Voronoi relevant vectors of a lattice (the lattice vectors inducing facets of the Voronoi cell) as preprocessing, and any "close enough" lattice point to the target, computes a path to a closest lattice vector of expected polynomial size. This improves on the Õ(2^n) path length given by the MV algorithm. Furthermore, as in MV, each edge of the path can be computed using a single iteration over the Voronoi relevant vectors.
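For intuition, a minimal sketch of one such walk (this is the simple greedy MV-style iteration, not the paper's randomized path procedure; each update crosses one Voronoi facet, i.e., traverses one edge of the Voronoi graph):

    def voronoi_walk(relevant, x, t):
        """Walk on the Voronoi graph from lattice point x toward target t.
        `relevant` holds the Voronoi relevant vectors; t lies in the Voronoi
        cell of x exactly when <t-x, v> <= <v,v>/2 for every relevant v.
        Each step strictly decreases ||t - x||, so the walk terminates."""
        dot = lambda u, w: sum(a * b for a, b in zip(u, w))
        while True:
            d = [ti - xi for ti, xi in zip(t, x)]
            v = max(relevant, key=lambda v: dot(d, v) / dot(v, v))
            if dot(d, v) <= dot(v, v) / 2:  # t is inside the cell of x
                return x                    # x is a closest lattice vector
            x = [xi + vi for xi, vi in zip(x, v)]  # cross the facet induced by v

    # Example on Z^2, whose Voronoi relevant vectors are (+-1,0) and (0,+-1):
    print(voronoi_walk([(1,0), (-1,0), (0,1), (0,-1)], [0,0], (3.2, -1.7)))  # [3, -2]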
As a byproduct of our work, we also give an optimal
relationship between geometric and path distance on the
Voronoi graph, which we believe to be of independent
interest.
Lattice sparsification and the Approximate Closest Vector Problem
We give a deterministic algorithm for solving the (1+ε)-approximate Closest Vector Problem (CVP) on any n-dimensional lattice and in any near-symmetric norm in 2^{O(n)}(1+1/ε)^n time and 2^n·poly(n) space. Our algorithm builds on the lattice point enumeration techniques of Micciancio and Voulgaris (STOC 2010, SICOMP 2013) and Dadush, Peikert and Vempala (FOCS 2011), and gives an elegant, deterministic alternative to the "AKS Sieve"-based algorithms for (1+ε)-CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). Furthermore, assuming the existence of a poly(n)-space and 2^{O(n)}-time algorithm for exact CVP in the ℓ_2 norm, the space complexity of our algorithm can be reduced to polynomial.
Our main technical contribution is a method for "sparsifying" any input lattice while approximately maintaining its metric structure. To this end, we employ the idea of random sublattice restrictions, which was first used by Khot (FOCS 2003, J. Comp. Syst. Sci. 2006) for the purpose of proving hardness for the Shortest Vector Problem (SVP) under ℓ_p norms.
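A minimal sketch of the random sublattice restriction idea (our own toy rendering; the prime p and the coefficient-vector form below are illustrative choices, not the paper's exact parameterization):

    import random

    def sparsify(coeff_vectors, n, p):
        """Random sublattice restriction: given lattice vectors by their
        integer coefficient vectors c (w.r.t. a fixed basis), keep those with
        <c, z> = 0 (mod p) for a uniformly random pattern z. The survivors
        form a sublattice of index p (for z nonzero mod p), and each fixed
        vector with c nonzero mod p survives with probability 1/p."""
        z = [random.randrange(p) for _ in range(n)]
        in_sublattice = lambda c: sum(ci * zi for ci, zi in zip(c, z)) % p == 0
        return [c for c in coeff_vectors if in_sublattice(c)]

    # Example: restrict the small vectors of Z^2 modulo p = 3.
    vecs = [(a, b) for a in range(-2, 3) for b in range(-2, 3)]
    print(sparsify(vecs, n=2, p=3))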
A preliminary version of this paper appeared in the Proc. 24th Annual
ACM-SIAM Symp. on Discrete Algorithms (SODA'13)
(http://dx.doi.org/10.1137/1.9781611973105.78).
AWGN-Goodness is Enough: Capacity-Achieving Lattice Codes based on Dithered Probabilistic Shaping
On the existence of 0/1 polytopes with high semidefinite extension complexity
In Rothvoß (Math. Program. 142(1-2):255-268, 2013) it was shown that there exists a 0/1 polytope (a polytope whose vertices are in {0, 1}^n) such that any higher-dimensional polytope projecting to it must have 2^{Ω(n)} facets, i.e., its linear extension complexity is exponential. The question whether there exists a 0/1 polytope with high positive semidefinite extension complexity was left open. We answer this question in the affirmative by showing that there is a 0/1 polytope such that any spectrahedron projecting to it must be the intersection of a semidefinite cone of dimension 2^{Ω(n)} and an affine space. Our proof relies on a new technique to rescale semidefinite factorizations.
Smoothed analysis of the simplex method
In this chapter, we give a technical overview of smoothed analyses of the shadow vertex simplex method for linear programming (LP). We first review the properties of the shadow vertex simplex method and its associated geometry. We begin the smoothed analysis discussion with the successive shortest path algorithm for the minimum-cost maximum-flow problem under objective perturbations, a classical instantiation of the shadow vertex simplex method. Then we move to general linear programming and analyze a shadow vertex based algorithm for LP under Gaussian constraint perturbations.
On the complexity of branching proofs
We consider the task of proving integer infeasibility of a bounded convex set K in R^n using a general branching proof system. In a general branching proof, one constructs a branching tree by adding an integer disjunction ax ≤ b or ax ≥ b + 1, a ∈ Z^n, b ∈ Z, at each node, such that the leaves of the tree correspond to empty sets (i.e., K together with the inequalities picked up from the root to the leaf is empty).

Recently, Beame et al. (ITCS 2018) asked whether the bit size of the coefficients in a branching proof, which they named stabbing planes (SP) refutations, for the case of polytopes derived from SAT formulas, can be assumed to be polynomial in n. We resolve this question in the affirmative, by showing that any branching proof can be recompiled so that the normals of the disjunctions have coefficients of size at most (nR)^{O(n^2)}, where R ∈ N is the radius of an ℓ_1 ball containing K, while increasing the number of nodes in the branching tree by at most a factor O(n). Our recompilation technique works by first replacing each disjunction using an iterated Diophantine approximation, introduced by Frank and Tardos (Combinatorica 1986), and proceeds by "fixing up" the leaves of the tree using judiciously added Chvátal-Gomory (CG) cuts.

As our second contribution, we show that Tseitin formulas, an important class of infeasible SAT instances, have quasi-polynomial sized cutting plane (CP) refutations. This disproves a conjecture that Tseitin formulas are (exponentially) hard for CP. Our upper bound follows by recompiling the quasi-polynomial sized SP refutations for Tseitin formulas due to Beame et al., which have a special enumerative form, into a CP proof of the same length using a serialization technique of Cook et al. (Discrete Appl. Math. 1987).

As our final contribution, we give a simple family of polytopes in [0, 1]^n requiring exponential sized branching proofs.
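As a small illustration of the CG cuts used in the fix-up step (a standard textbook fact, not code from the paper):

    import math

    def cg_cut(a, b):
        """Chvatal-Gomory cut: if ax <= b is valid for K and the normal a is
        integral, then ax <= floor(b) is valid for all integer points of K,
        since ax takes only integer values on Z^n."""
        assert all(isinstance(ai, int) for ai in a)
        return a, math.floor(b)

    # Example: x1 + x2 <= 1.5 valid for K  ==>  x1 + x2 <= 1 valid for K ∩ Z^2.
    print(cg_cut([1, 1], 1.5))  # ([1, 1], 1)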
The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues
A classic result of Banaszczyk (Random Str. & Algor. 1997) states that given any n vectors in R^m with ℓ_2-norm at most 1 and any convex body K in R^m of Gaussian measure at least half, there exists a ±1 combination of these vectors that lies in 5K. Banaszczyk's proof of this result was non-constructive, and it was open how to find such a ±1 combination in polynomial time. In this paper, we give an efficient randomized algorithm to find a ±1 combination of the vectors which lies in cK for some fixed constant c > 0. This leads to new efficient algorithms for several problems in discrepancy theory.
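To make the setting concrete, here is a brute-force toy (our own illustration of the problem, not the paper's Gram-Schmidt walk, which avoids enumeration; an ℓ_∞ ball plays the role of K for simplicity):

    import itertools

    def best_signing(vectors):
        """Enumerate all +-1 sign patterns and return the signed sum with the
        smallest l_infinity norm, i.e., the combination landing in the
        smallest scaled cube. Exponential in the number of vectors; only
        meant to exhibit the object the theorem guarantees."""
        m = len(vectors[0])
        combo = lambda signs: [sum(s * v[i] for s, v in zip(signs, vectors))
                               for i in range(m)]
        return min((combo(s) for s in itertools.product((1, -1), repeat=len(vectors))),
                   key=lambda x: max(abs(c) for c in x))

    # Three unit vectors in R^2; some signing has l_infinity norm <= 0.4.
    print(best_signing([[0.6, 0.8], [0.8, -0.6], [1.0, 0.0]]))  # ~[0.4, 0.2]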
Lattice-based locality sensitive hashing is optimal
Locality sensitive hashing (LSH) was introduced by Indyk and Motwani (STOC '98) to give the first sublinear time algorithm for the c-approximate nearest neighbor (ANN) problem using only polynomial space. At a high level, an LSH family hashes "nearby" points to the same bucket and "far away" points to different buckets. The quality measure of an LSH family is its LSH exponent, which helps determine both query time and space usage.
In a seminal work, Andoni and Indyk (FOCS '06) constructed an LSH family based on random ball partitionings of space that achieves an LSH exponent of 1/c^2 for the ℓ_2 norm, which was later shown to be optimal by Motwani, Naor and Panigrahy (SIDMA '07) and O'Donnell, Wu and Zhou (TOCT '14). Although optimal in the LSH exponent, the ball partitioning approach is computationally expensive. So, in the same work, Andoni and Indyk proposed a simpler and more practical hashing scheme based on Euclidean lattices and provided computational results using the 24-dimensional Leech lattice. However, no theoretical analysis of the scheme was given, thus leaving open the question of finding the exponent of lattice based LSH.
In this work, we resolve this question by showing the existence of lattices achieving the optimal LSH exponent of 1/c^2 using techniques from the geometry of numbers. At a more conceptual level, our results show that optimal LSH space partitions can have periodic structure. Understanding the extent to which additional structure can be imposed on these partitions, e.g. to yield low space and query complexity, remains an important open problem.
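A minimal sketch of the lattice hashing template (our own toy: the integer lattice with a random dither stands in for the lattices analyzed in the paper; Andoni and Indyk used the 24-dimensional Leech lattice):

    import random

    def make_lattice_hash(n, scale):
        """Return a hash function mapping x to the nearest point of the
        lattice scale*Z^n after a random shift. Nearby points (relative to
        `scale`) usually round to the same lattice point, i.e., the same
        bucket; far points usually do not."""
        shift = [random.uniform(0, scale) for _ in range(n)]
        def h(x):
            return tuple(round((xi + si) / scale) for xi, si in zip(x, shift))
        return h

    # Close pair vs. a far point in R^2:
    h = make_lattice_hash(n=2, scale=1.0)
    print(h([0.10, 0.20]), h([0.12, 0.21]), h([5.0, -3.0]))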