
    From approximate to exact integer programming

    Approximate integer programming is the following: for a convex body $K \subseteq \mathbb{R}^n$, either determine whether $K \cap \mathbb{Z}^n$ is empty, or find an integer point in the convex body scaled by $2$ from its center of gravity $c$. Approximate integer programming can be solved in time $2^{O(n)}$, while the fastest known methods for exact integer programming run in time $2^{O(n)} \cdot n^n$. So far, no efficient methods for integer programming are known that are based on approximate integer programming. Our main contributions are two such methods, each yielding novel complexity results. First, we show that an integer point $x^* \in (K \cap \mathbb{Z}^n)$ can be found in time $2^{O(n)}$, provided that the remainder $x_i^* \bmod \ell$ of each component of $x^*$ is given, for some arbitrarily fixed $\ell \geq 5(n+1)$. The algorithm is based on a cutting-plane technique that iteratively halves the volume of the feasible set; the cutting planes are determined via approximate integer programming. Enumeration of the possible remainders gives a $2^{O(n)} n^n$ algorithm for general integer programming. This matches the current best bound of an algorithm by Dadush (2012) that is considerably more involved. Our algorithm also relies on a new asymmetric approximate Carathéodory theorem that might be of interest on its own. Our second method concerns integer programming problems in equation-standard form $Ax = b$, $0 \leq x \leq u$, $x \in \mathbb{Z}^n$. Such a problem can be reduced to the solution of $\prod_i O(\log u_i + 1)$ approximate integer programming problems. This implies, for example, that knapsack or subset-sum problems with polynomial variable range $0 \leq x_i \leq p(n)$ can be solved in time $(\log n)^{O(n)}$. For these problems, the best running time so far was $n^n \cdot 2^{O(n)}$.
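
    As a back-of-the-envelope check of how these bounds combine (assuming $\ell$ is fixed at its smallest admissible value $\ell = 5(n+1)$): there are at most $\ell^n = (5(n+1))^n = 2^{O(n)} \cdot n^n$ candidate remainder vectors, and running the $2^{O(n)}$-time algorithm once per guess gives the stated $2^{O(n)} n^n$ bound for general integer programming. Similarly, if every variable range satisfies $u_i \leq p(n)$ for a polynomial $p$, the reduction produces $\prod_i O(\log u_i + 1) \leq O(\log n)^n = (\log n)^{O(n)}$ approximate integer programming problems, each solvable in time $2^{O(n)}$, which accounts for the claimed $(\log n)^{O(n)}$ running time for knapsack and subset-sum with polynomial variable range.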

    Lattice sparsification and the Approximate Closest Vector Problem

    We give a deterministic algorithm for solving the $(1+\epsilon)$-approximate Closest Vector Problem (CVP) on any $n$-dimensional lattice and in any near-symmetric norm in $2^{O(n)}(1+1/\epsilon)^n$ time and $2^n \cdot \mathrm{poly}(n)$ space. Our algorithm builds on the lattice point enumeration techniques of Micciancio and Voulgaris (STOC 2010, SICOMP 2013) and Dadush, Peikert and Vempala (FOCS 2011), and gives an elegant, deterministic alternative to the "AKS Sieve"-based algorithms for $(1+\epsilon)$-CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). Furthermore, assuming the existence of a $\mathrm{poly}(n)$-space and $2^{O(n)}$-time algorithm for exact CVP in the $\ell_2$ norm, the space complexity of our algorithm can be reduced to polynomial. Our main technical contribution is a method for "sparsifying" any input lattice while approximately maintaining its metric structure. To this end, we employ the idea of random sublattice restrictions, which was first used by Khot (FOCS 2003, J. Comp. Syst. Sci. 2006) to prove hardness of the Shortest Vector Problem (SVP) under $\ell_p$ norms. A preliminary version of this paper appeared in the Proc. 24th Annual ACM-SIAM Symp. on Discrete Algorithms (SODA'13) (http://dx.doi.org/10.1137/1.9781611973105.78).
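
    The abstract only names the idea of a random sublattice restriction; as a minimal illustration of what such a restriction can look like (the specific congruence form, the prime modulus $p$, the basis-as-rows convention, and the function name below are illustrative assumptions, not the paper's construction), the Python sketch below builds a basis of a random index-$p$ sublattice by imposing a single random linear congruence modulo a prime $p$ on the coordinate vector of each lattice point. A fixed lattice point whose coordinate vector is nonzero mod $p$ then survives the restriction with probability roughly $1/p$, which is the kind of controlled thinning that sparsification needs.

        import random

        def random_sublattice_restriction(basis, p):
            """Illustrative sketch only (assumed form, not the paper's construction).

            Given a lattice basis (rows of integer vectors) and a prime p, return a
            basis of the random index-p sublattice of points whose coordinate vector
            c satisfies <z, c> = 0 (mod p) for a uniformly random nonzero z in Z_p^n.
            """
            n = len(basis)
            # draw a random coefficient vector z over Z_p that is nonzero mod p
            z = [0] * n
            while all(v % p == 0 for v in z):
                z = [random.randrange(p) for _ in range(n)]
            # any nonzero entry of z is invertible mod the prime p
            j = next(i for i, v in enumerate(z) if v % p != 0)
            inv = pow(z[j], -1, p)  # modular inverse (Python >= 3.8)
            # coordinate-space generators: p*e_j, and e_i - (z_i * inv mod p)*e_j for i != j
            coord_gens = []
            for i in range(n):
                c = [0] * n
                if i == j:
                    c[j] = p
                else:
                    c[i] = 1
                    c[j] = -((z[i] * inv) % p)
                coord_gens.append(c)
            # map coordinate vectors back to lattice vectors: sum_k c_k * basis[k]
            dim = len(basis[0])
            return [[sum(c[k] * basis[k][d] for k in range(n)) for d in range(dim)]
                    for c in coord_gens]

        # example: a basis of a random index-5 sublattice of Z^3
        print(random_sublattice_restriction([[1, 0, 0], [0, 1, 0], [0, 0, 1]], 5))

    The parameter $p$ controls the thinning rate in this sketch: a larger prime keeps a smaller fraction of lattice points at the cost of a larger sublattice index.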