
    A new Lenstra-type Algorithm for Quasiconvex Polynomial Integer Minimization with Complexity 2^O(n log n)

    We study the integer minimization of a quasiconvex polynomial with quasiconvex polynomial constraints. We propose a new algorithm that improves upon the best previously known algorithm due to Heinz (Journal of Complexity, 2005). The improvement is achieved by applying a new modern Lenstra-type algorithm, finding optimal ellipsoid roundings, and considering sparse encodings of polynomials. For the bounded case, our algorithm attains a time complexity of s (r l M d)^{O(1)} 2^{2n \log_2(n) + O(n)}, where M is a bound on the number of monomials in each polynomial and r is the binary encoding length of a bound on the feasible region. In the general case, the running time is s l^{O(1)} d^{O(n)} 2^{2n \log_2(n) + O(n)}. In each case we assume d >= 2 is a bound on the total degree of the polynomials and l bounds the maximum binary encoding size of the input.
    Comment: 28 pages, 10 figures
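    As a rough illustration of the problem these complexity bounds refer to (not of the Lenstra-type algorithm itself), the following sketch minimizes a quasiconvex polynomial over the integer points of a small box, subject to one quasiconvex polynomial constraint, by brute force; the particular polynomials and the box radius are hypothetical choices made only for this example.

```python
from itertools import product

# Hypothetical instance: minimize a quasiconvex polynomial over integer points
# satisfying a quasiconvex polynomial constraint. Exhaustive search over a small
# box only illustrates the problem statement; the paper's Lenstra-type algorithm
# avoids this kind of enumeration.

def objective(x, y):
    # (x^2 + y^2)^2 is convex, hence quasiconvex.
    return (x**2 + y**2) ** 2

def feasible(x, y):
    # One quasiconvex constraint: (x - 3)^2 + (y - 2)^2 - 10 <= 0.
    return (x - 3) ** 2 + (y - 2) ** 2 - 10 <= 0

def brute_force_minimize(radius=10):
    best = None
    for x, y in product(range(-radius, radius + 1), repeat=2):
        if feasible(x, y):
            val = objective(x, y)
            if best is None or val < best[0]:
                best = (val, (x, y))
    return best  # (optimal value, minimizer), or None if infeasible

if __name__ == "__main__":
    print(brute_force_minimize())
```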

    On the Lattice Distortion Problem

    We introduce and study the \emph{Lattice Distortion Problem} (LDP). LDP asks how "similar" two lattices are. I.e., what is the minimal distortion of a linear bijection between the two lattices? LDP generalizes the Lattice Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply asks whether the minimal distortion is one. As our first contribution, we show that the distortion between any two lattices is approximated up to a n^{O(\log n)} factor by a simple function of their successive minima. Our methods are constructive, allowing us to compute low-distortion mappings that are within a 2^{O(n \log \log n / \log n)} factor of optimal in polynomial time and within a n^{O(\log n)} factor of optimal in singly exponential time. Our algorithms rely on a notion of basis reduction introduced by Seysen (Combinatorica 1993), which we show is intimately related to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions), by a reduction from the Shortest Vector Problem.
    Comment: This is the full version of a paper that appeared in ESA 201
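    A minimal sketch (assuming numpy) illustrating only the distortion of one particular linear bijection between two lattices, not the minimization over all bijections that LDP asks for: for bases B1 and B2 whose columns generate the lattices, the map T = B2 B1^{-1} sends one lattice onto the other, and its distortion is ||T|| * ||T^{-1}|| in the operator norm. The example bases are hypothetical.

```python
import numpy as np

# Distortion of one specific linear bijection between two lattices given by
# bases B1 and B2 (columns generate the lattices). LDP asks for the minimum of
# this quantity over all bijections, i.e. over all unimodular changes of basis,
# which this sketch does not attempt.

def distortion_of_map(B1, B2):
    T = B2 @ np.linalg.inv(B1)  # maps L(B1) onto L(B2)
    return np.linalg.norm(T, 2) * np.linalg.norm(np.linalg.inv(T), 2)

if __name__ == "__main__":
    B1 = np.array([[1.0, 0.0], [0.0, 1.0]])   # the integer lattice Z^2
    B2 = np.array([[1.0, 0.5], [0.0, 2.0]])   # a skewed example lattice
    print(distortion_of_map(B1, B2))          # >= 1, with 1 meaning an isometry
```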

    Lattice sparsification and the Approximate Closest Vector Problem

    We give a deterministic algorithm for solving the (1+\eps)-approximate Closest Vector Problem (CVP) on any n-dimensional lattice and in any near-symmetric norm in 2^{O(n)}(1+1/\eps)^n time and 2^n \poly(n) space. Our algorithm builds on the lattice point enumeration techniques of Micciancio and Voulgaris (STOC 2010, SICOMP 2013) and Dadush, Peikert and Vempala (FOCS 2011), and gives an elegant, deterministic alternative to the "AKS Sieve"-based algorithms for (1+\eps)-CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). Furthermore, assuming the existence of a \poly(n)-space and 2^{O(n)}-time algorithm for exact CVP in the \ell_2 norm, the space complexity of our algorithm can be reduced to polynomial. Our main technical contribution is a method for "sparsifying" any input lattice while approximately maintaining its metric structure. To this end, we employ the idea of random sublattice restrictions, which was first used by Khot (FOCS 2003, J. Comp. Syst. Sci. 2006) for the purpose of proving hardness for the Shortest Vector Problem (SVP) under \ell_p norms.
    A preliminary version of this paper appeared in the Proc. 24th Annual ACM-SIAM Symp. on Discrete Algorithms (SODA'13) (http://dx.doi.org/10.1137/1.9781611973105.78)
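    A toy sketch of the random-sublattice-restriction idea mentioned above (an illustration of the concept, not the paper's CVP algorithm): fix a prime p and a random vector z mod p, and keep only the lattice points whose coordinate vector c satisfies <c, z> = 0 (mod p); roughly a 1/p fraction of points survives. The parameters below are arbitrary choices for the demonstration.

```python
import random
from itertools import product

# Sparsification by random sublattice restriction: keep the lattice points
# whose coordinate vector c satisfies <c, z> = 0 (mod p). The surviving points
# form a sublattice containing roughly a 1/p fraction of the original points.

def sparsify(coords, z, p):
    return [c for c in coords if sum(ci * zi for ci, zi in zip(c, z)) % p == 0]

if __name__ == "__main__":
    n, p, radius = 3, 5, 4
    random.seed(0)
    z = [0] * n
    while all(zi == 0 for zi in z):          # avoid the trivial restriction z = 0
        z = [random.randrange(p) for _ in range(n)]
    # Coordinate vectors of Z^3 points in a small box stand in for lattice points.
    coords = list(product(range(-radius, radius + 1), repeat=n))
    kept = sparsify(coords, z, p)
    print(f"kept {len(kept)} of {len(coords)} points (expected about 1/{p})")
```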