Minimizing the number of lattice points in a translated polygon
The parametric lattice-point counting problem is as follows: Given an integer
matrix A \in Z^{m \times n}, compute an explicit formula parameterized by b
that determines the number of integer points in the polyhedron
P_b = {x \in R^n : Ax <= b}. In the last decade, this counting problem has received
considerable attention in the literature. Several variants of Barvinok's
algorithm have been shown to solve this problem in polynomial time if the
number of columns of A is fixed.
Central to our investigation is the following question: Can one also
efficiently determine a parameter b such that the number of integer points in
P_b is minimized? Here, the parameter b can be chosen
from a given polyhedron Q.
Our main result is a proof that finding such a minimizing parameter is
NP-hard, even in dimension 2 and even if the parametrization reflects a
translation of a 2-dimensional convex polygon. This result is established via a
relationship of this problem to arithmetic progressions and simultaneous
Diophantine approximation.
On the positive side, we show that in dimension 2 there exists a polynomial-time
algorithm for each fixed epsilon > 0 that either determines a minimizing
translation or asserts that any translation contains at most 1 + epsilon times
the minimal number of lattice points.
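The object being minimized can be made concrete with a toy brute-force search. The sketch below counts integer points in a translated triangle and exhaustively scans a small grid of fractional translations; the triangle, grid, and bounding box are illustrative assumptions, and this is emphatically not the paper's polynomial-time algorithm (which this abstract does not describe in detail).

```python
from fractions import Fraction
from itertools import product

# Illustrative triangle {x in R^2 : A x <= b} with vertices (0,0), (3,0), (0,3).
A = [(-1, 0), (0, -1), (1, 1)]
b = [0, 0, 3]

def lattice_points(shift, box=range(-2, 7)):
    """Count integer points in the translated polygon P + shift,
    i.e. points x with A(x - shift) <= b."""
    count = 0
    for x, y in product(box, repeat=2):
        if all(ax * (x - shift[0]) + ay * (y - shift[1]) <= bi
               for (ax, ay), bi in zip(A, b)):
            count += 1
    return count

# Exhaustive search over a small grid of fractional translations.
grid = [Fraction(k, 4) for k in range(4)]
best = min(product(grid, repeat=2), key=lattice_points)
print(best, lattice_points(best))  # translating by (1/4, 1/4) leaves 3 points
```

The untranslated triangle contains 10 lattice points; a generic small shift already drops this to 3, which is what makes the minimization question nontrivial.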
Obstructions to weak decomposability for simplicial polytopes
Provan and Billera introduced notions of (weak) decomposability of simplicial
complexes as a means of attempting to prove polynomial upper bounds on the
diameter of the facet-ridge graph of a simplicial polytope. Recently, De Loera
and Klee provided the first examples of simplicial polytopes that are not
weakly vertex-decomposable. These polytopes are polar to certain simple
transportation polytopes. In this paper, we refine their analysis to prove that
these d-dimensional polytopes are not even weakly O(\sqrt{d})-decomposable.
As a consequence, (weak) decomposability cannot be used to prove a polynomial
version of the Hirsch conjecture.
On sub-determinants and the diameter of polyhedra
We derive a new upper bound on the diameter of a polyhedron P = {x \in R^n :
Ax <= b}, where A \in Z^{m \times n}. The bound is polynomial in n and the
largest absolute value of a sub-determinant of A, denoted by \Delta. More
precisely, we show that the diameter of P is bounded by O(\Delta^2 n^4
log(n\Delta)). If P is bounded, then we show that the diameter of P is at most
O(\Delta^2 n^{3.5} log(n\Delta)).
For the special case in which A is a totally unimodular matrix, the bounds
are O(n^4 log n) and O(n^{3.5} log n), respectively. This improves on the
previous best bound of O(m^{16} n^3 (log mn)^3) due to Dyer and Frieze.
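The quantity \Delta can be illustrated by naively enumerating all square submatrices of a small matrix (this is exponential and purely for illustration; the incidence-matrix example below is our own, chosen because such matrices are totally unimodular and therefore have \Delta = 1):

```python
from itertools import combinations

def det(M):
    """Determinant of a small integer matrix via Laplace expansion
    along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def max_subdet(A):
    """Delta: the largest absolute value of the determinant of any
    square submatrix of A (brute force, exponential time)."""
    m, n = len(A), len(A[0])
    best = 0
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                best = max(best, abs(det(sub)))
    return best

# Node-edge incidence matrix of a directed triangle: totally unimodular,
# so every sub-determinant lies in {-1, 0, 1}.
A = [[1, 0, -1],
     [-1, 1, 0],
     [0, -1, 1]]
print(max_subdet(A))  # 1
```

For a matrix that is not totally unimodular, e.g. [[1, 1], [-1, 1]], the same routine returns \Delta = 2, and the diameter bound above degrades quadratically in \Delta.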
Diameter of Polyhedra: Limits of Abstraction
We investigate the diameter of a natural abstraction of the 1-skeleton of polyhedra. Although this abstraction is simpler than other abstractions that were previously studied in the literature, the best upper bounds on the diameter of polyhedra continue to hold here. On the other hand, we show that this abstraction has its limits by providing a superlinear lower bound.
Covering Cubes and the Closest Vector Problem
We provide the currently fastest randomized (1+epsilon)-approximation algorithm for the closest vector problem in the infinity-norm. The running time of our method depends on the dimension n and the approximation guarantee epsilon by 2^(O(n))(log(1/epsilon))^(O(n)), which improves upon the (2+1/epsilon)^(O(n)) running time of the previously best algorithm by Blömer and Naewe. Our algorithm is based on a solution of the following geometric covering problem that is of interest in its own right: Given epsilon > 0, how many ellipsoids are necessary to cover the scaled unit cube [-1+epsilon, 1-epsilon]^n such that all ellipsoids are contained in the standard unit cube [-1,1]^n? We provide an almost optimal bound for the case where the ellipsoids are restricted to be axis-parallel. We then apply our covering scheme to a variation of this covering problem where one wants to cover the scaled cube with boxes that, if scaled by two, are still contained in the unit cube. Thereby, we obtain a method to boost any 2-approximation algorithm for the closest vector problem in the infinity-norm to a (1+epsilon)-approximation algorithm that has the desired running time.
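To fix what the approximation algorithm is approximating: given a lattice basis B and a target t, one seeks the lattice vector minimizing the infinity-norm distance to t. The toy solver below finds it exactly by enumerating small coefficient vectors; the basis, target, and enumeration radius R are assumptions for illustration, and this brute force (exponential in the dimension) has nothing to do with the paper's covering-based method.

```python
from itertools import product

def closest_vector_inf(B, t, R=3):
    """Exact closest vector to t, in the infinity-norm, in the lattice
    spanned by the columns of B, found by enumerating all integer
    coefficient vectors with entries in [-R, R]. Tiny instances only."""
    m, n = len(B), len(B[0])
    best, best_dist = None, float('inf')
    for coeffs in product(range(-R, R + 1), repeat=n):
        v = [sum(B[i][j] * coeffs[j] for j in range(n)) for i in range(m)]
        dist = max(abs(v[i] - t[i]) for i in range(m))
        if dist < best_dist:
            best, best_dist = v, dist
    return best, best_dist

B = [[2, 1],
     [0, 2]]          # columns are the basis vectors (2,0) and (1,2)
t = [1.4, 0.9]
print(closest_vector_inf(B, t))  # ([2, 0], 0.9)
```

A (1+epsilon)-approximation algorithm is only required to return a lattice vector whose distance is within a factor 1+epsilon of this optimum, which is what the boosting step in the abstract produces from any 2-approximation.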
Testing additive integrality gaps
We consider the problem of testing whether the maximum additive integrality gap of a family of integer programs in standard form is bounded by a given constant. This can be viewed as a generalization of the integer rounding property, which can be tested in polynomial time if the number of constraints is fixed. It turns out that this generalization is NP-hard even if the number of constraints is fixed. However, if, in addition, the objective is the all-one vector, then one can test in polynomial time whether the additive gap is bounded by a constant.
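For one natural reading of the tractable special case (a single constraint, all-one objective, minimization), the gap can be computed directly on a toy family: the LP minimum of 1·x subject to a·x = b, x >= 0 has a closed form, and the IP minimum is a coin-change dynamic program. The constraint vector a and the tested range of right-hand sides B are assumptions for illustration, not an instance from the paper.

```python
from fractions import Fraction

a = [3, 5]            # single constraint: 3*x1 + 5*x2 = b, x >= 0
B = 60                # test right-hand sides b = 0..B

def lp_opt(b):
    """LP minimum of 1·x subject to a·x = b, x >= 0: put all the
    weight on the variable with the largest coefficient."""
    return Fraction(b, max(a))

def ip_opt(b):
    """IP minimum via dynamic programming (fewest 'coins' with values
    in a summing exactly to b), or None if b is infeasible."""
    INF = float('inf')
    dp = [0] + [INF] * b
    for v in range(1, b + 1):
        for ai in a:
            if ai <= v and dp[v - ai] + 1 < dp[v]:
                dp[v] = dp[v - ai] + 1
    return None if dp[b] == INF else dp[b]

# Maximum additive gap IP(b) - LP(b) over all feasible b in the range.
gap = max(ip_opt(b) - lp_opt(b) for b in range(B + 1)
          if ip_opt(b) is not None)
print(gap)  # 8/5
```

Here the gap stabilizes at 8/5 (attained at b = 12, where IP(b) = 4 but LP(b) = 12/5), illustrating that the additive gap of such a family can be bounded by a constant even though the multiplicative gap statistics vary with b.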