    Lattice sparsification and the Approximate Closest Vector Problem

    We give a deterministic algorithm for solving the (1+\eps)-approximate Closest Vector Problem (CVP) on any n-dimensional lattice and in any near-symmetric norm in 2^{O(n)}(1+1/\eps)^n time and 2^n \poly(n) space. Our algorithm builds on the lattice point enumeration techniques of Micciancio and Voulgaris (STOC 2010, SICOMP 2013) and Dadush, Peikert and Vempala (FOCS 2011), and gives an elegant, deterministic alternative to the "AKS Sieve"-based algorithms for (1+\eps)-CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). Furthermore, assuming the existence of a \poly(n)-space and 2^{O(n)}-time algorithm for exact CVP in the \ell_2 norm, the space complexity of our algorithm can be reduced to polynomial. Our main technical contribution is a method for "sparsifying" any input lattice while approximately maintaining its metric structure. To this end, we employ the idea of random sublattice restrictions, which was first used by Khot (FOCS 2003, J. Comp. Syst. Sci. 2006) for the purpose of proving hardness for the Shortest Vector Problem (SVP) under \ell_p norms. A preliminary version of this paper appeared in the Proc. 24th Annual ACM-SIAM Symp. on Discrete Algorithms (SODA'13) (http://dx.doi.org/10.1137/1.9781611973105.78).
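
    The "sparsifying" step is only named above; the following is a minimal Python sketch of the random-sublattice-restriction idea it refers to, assuming the lattice is given by an integer basis whose rows are the basis vectors. The function name sparsify_basis and its interface are illustrative assumptions, not the paper's actual procedure.

        import random

        def sparsify_basis(B, p):
            # Restrict the lattice generated by the rows of B to the random
            # index-p sublattice {sum_i x_i b_i : <x, z> = 0 (mod p)} for a
            # uniformly random nonzero z in Z_p^n (p should be prime).
            n = len(B)
            z = [0] * n
            while not any(z):
                z = [random.randrange(p) for _ in range(n)]
            i = next(j for j, zj in enumerate(z) if zj != 0)
            zi_inv = pow(z[i], -1, p)               # modular inverse, Python >= 3.8
            coeff_rows = []
            for j in range(n):
                row = [0] * n
                if j == i:
                    row[i] = p                      # p * e_i
                else:
                    row[j] = 1
                    row[i] = -(z[j] * zi_inv) % p   # e_j - (z_j / z_i) e_i  (mod p)
                coeff_rows.append(row)
            # map the coefficient vectors through B to get a basis of the sublattice of L
            return [[sum(c[k] * B[k][d] for k in range(n)) for d in range(len(B[0]))]
                    for c in coeff_rows]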

    Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater Than One

    We show the first dimension-preserving search-to-decision reductions for approximate SVP and CVP. In particular, for any \gamma \leq 1 + O(\log n/n), we obtain an efficient dimension-preserving reduction from \gamma^{O(n/\log n)}-SVP to \gamma-GapSVP and an efficient dimension-preserving reduction from \gamma^{O(n)}-CVP to \gamma-GapCVP. These results generalize the known equivalences of the search and decision versions of these problems in the exact case when \gamma = 1. For SVP, we actually obtain something slightly stronger than a search-to-decision reduction---we reduce \gamma^{O(n/\log n)}-SVP to \gamma-unique SVP, a potentially easier problem than \gamma-GapSVP. Comment: Updated to acknowledge additional prior work.
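
    As context for the exact case (\gamma = 1) that these reductions generalize, here is a hedged Python sketch of the parity-recovery step in the folklore search-to-decision idea for CVP: doubling a basis vector and comparing exact distances reveals the parity of the corresponding coefficient of some closest vector. The exact-distance oracle dist is an assumed black box (exact, e.g. rational, arithmetic is assumed), and this is only one step, not a full reduction and not the reduction from the paper.

        def coefficient_parities(basis, target, dist):
            # basis: list of n basis vectors (rows); dist(B, t) is an assumed oracle
            # returning the exact distance from t to the lattice generated by B.
            # Uses L = L_i  union  (L_i + b_i), where L_i doubles b_i, so the distance
            # is unchanged exactly when some closest vector uses b_i an even number
            # of times.
            B = [row[:] for row in basis]
            t = list(target)
            d = dist(B, t)
            parity = []
            for i in range(len(B)):
                doubled = [row[:] for row in B]
                doubled[i] = [2 * x for x in doubled[i]]
                if dist(doubled, t) == d:
                    parity.append(0)        # an even coefficient of b_i works
                else:
                    parity.append(1)        # must be odd: shift the target by b_i
                    t = [tj - bij for tj, bij in zip(t, B[i])]
                B = doubled                 # keep the doubled generator and continue
            return parity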

    On the Closest Vector Problem with a Distance Guarantee

    We present a substantially more efficient variant, both in terms of running time and size of preprocessing advice, of the algorithm by Liu, Lyubashevsky, and Micciancio for solving CVPP (the preprocessing version of the Closest Vector Problem, CVP) with a distance guarantee. For instance, for any \alpha < 1/2, our algorithm finds the (unique) closest lattice point for any target point whose distance from the lattice is at most \alpha times the length of the shortest nonzero lattice vector, requires as preprocessing advice only N \approx \widetilde{O}(n \exp(\alpha^2 n /(1-2\alpha)^2)) vectors, and runs in time \widetilde{O}(nN). As our second main contribution, we present reductions showing that it suffices to solve CVP, both in its plain and preprocessing versions, when the input target point is within some bounded distance of the lattice. The reductions are based on ideas due to Kannan and a recent sparsification technique due to Dadush and Kun. Combining our reductions with the LLM algorithm gives an approximation factor of O(n/\sqrt{\log n}) for search CVPP, improving on the previous best of O(n^{1.5}) due to Lagarias, Lenstra, and Schnorr. When combined with our improved algorithm we obtain, somewhat surprisingly, that only O(n) vectors of preprocessing advice are sufficient to solve CVPP with (the only slightly worse) approximation factor of O(n). Comment: An early version of the paper was titled "On Bounded Distance Decoding and the Closest Vector Problem with Preprocessing". Conference on Computational Complexity (2014).
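
    The abstract concerns decoding target points that are promised to be close to the lattice. As a point of reference only (this is not the LLM-based algorithm or the reductions described above), here is Babai's classical nearest-plane decoder in Python/NumPy, which returns the closest lattice vector whenever the target is close enough to the lattice relative to its Gram-Schmidt vectors.

        import numpy as np

        def babai_nearest_plane(B, t):
            # B: rows are lattice basis vectors; t: target point.
            # Returns a lattice vector; it is the closest one whenever t lies within
            # half the shortest Gram-Schmidt length of the basis.
            B = np.array(B, dtype=float)
            t = np.array(t, dtype=float)
            n = B.shape[0]
            Bstar = np.zeros_like(B)            # Gram-Schmidt orthogonalization
            for i in range(n):
                Bstar[i] = B[i]
                for j in range(i):
                    Bstar[i] -= (B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]
            v = np.zeros_like(t)
            r = t.copy()
            for i in reversed(range(n)):        # peel off one hyperplane at a time
                c = round(float(r @ Bstar[i]) / float(Bstar[i] @ Bstar[i]))
                r -= c * B[i]
                v += c * B[i]
            return v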

    On the Quantitative Hardness of CVP

    For odd integers p \geq 1 (and p = \infty), we show that the Closest Vector Problem in the \ell_p norm (\mathrm{CVP}_p) over rank-n lattices cannot be solved in 2^{(1-\varepsilon)n} time for any constant \varepsilon > 0 unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to "almost all" values of p \geq 1, not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of \mathrm{CVP}_2 (i.e., \mathrm{CVP} in the Euclidean norm), for which a 2^{n+o(n)}-time algorithm is known. In particular, our result applies for any p = p(n) \neq 2 that approaches 2 as n \to \infty. We also show a similar SETH-hardness result for \mathrm{SVP}_\infty; hardness of approximating \mathrm{CVP}_p to within some constant factor under the so-called Gap-ETH assumption; and other quantitative hardness results for \mathrm{CVP}_p and \mathrm{CVPP}_p for any 1 \leq p < \infty under different assumptions.
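
    To make the problem being lower-bounded concrete, here is a tiny brute-force \ell_p CVP solver in Python/NumPy. It enumerates coefficient vectors in a user-supplied box, so it runs in time exponential in the dimension, far above the 2^{(1-\varepsilon)n} barrier discussed above; the coefficient bound coeff_bound is an assumption of this sketch and is not derived from the lattice.

        from itertools import product
        import numpy as np

        def cvp_bruteforce(B, t, p, coeff_bound):
            # Exhaustively search integer coefficient vectors a with |a_i| <= coeff_bound
            # and return the lattice vector a @ B (rows of B are basis vectors) closest
            # to t in the l_p norm. Only finds the true closest vector if it lies in
            # this coefficient box.
            B = np.array(B, dtype=float)
            t = np.array(t, dtype=float)
            best, best_dist = None, np.inf
            for a in product(range(-coeff_bound, coeff_bound + 1), repeat=B.shape[0]):
                v = np.array(a, dtype=float) @ B
                d = np.linalg.norm(v - t, ord=p)   # use ord=np.inf for the l_infinity norm
                if d < best_dist:
                    best, best_dist = v, d
            return best, best_dist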

    Improved Reduction from the Bounded Distance Decoding Problem to the Unique Shortest Vector Problem in Lattices

    We present a probabilistic polynomial-time reduction from the lattice Bounded Distance Decoding (BDD) problem with parameter 1/(sqrt(2) * gamma) to the unique Shortest Vector Problem (uSVP) with parameter gamma for any gamma > 1 that is polynomial in the lattice dimension n. It improves the BDD to uSVP reductions of [Lyubashevsky and Micciancio, CRYPTO, 2009] and [Liu, Wang, Xu and Zheng, Inf. Process. Lett., 2014], which rely on Kannan's embedding technique. The main ingredient of the improvement is the use of Khot's lattice sparsification [Khot, FOCS, 2003] before resorting to Kannan's embedding, in order to boost the uSVP parameter.
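
    The reduction builds on Kannan's embedding, which the abstract only names. Below is a minimal Python/NumPy sketch of the embedding step alone, assuming the BDD instance is given by a basis matrix B (rows are basis vectors) and a target t; the embedding factor M is a tunable parameter, and the sparsification preprocessing that the paper adds before this step is not shown.

        import numpy as np

        def kannan_embedding(B, t, M):
            # Build the (n+1)-dimensional embedded lattice whose generating rows are
            #   (b_1, 0), ..., (b_n, 0), (t, M).
            # If v is the lattice vector closest to t, then (t - v, M) lies in this
            # lattice and is unusually short, so a uSVP solver can recover it.
            B = np.array(B, dtype=float)
            t = np.array(t, dtype=float).reshape(1, -1)
            n = B.shape[0]
            top = np.hstack([B, np.zeros((n, 1))])
            bottom = np.hstack([t, np.array([[float(M)]])])
            return np.vstack([top, bottom])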

    Improved Hardness of BDD and SVP Under Gap-(S)ETH

    We show improved fine-grained hardness of two key lattice problems in the \ell_p norm: Bounded Distance Decoding to within an \alpha factor of the minimum distance (\mathrm{BDD}_{p,\alpha}) and the (decisional) \gamma-approximate Shortest Vector Problem (\mathrm{SVP}_{p,\gamma}), assuming variants of the Gap (Strong) Exponential Time Hypothesis (Gap-(S)ETH). Specifically, we show:
    1. For all p \in [1, \infty), there is no 2^{o(n)}-time algorithm for \mathrm{BDD}_{p,\alpha} for any constant \alpha > \alpha_\mathsf{kn}, where \alpha_\mathsf{kn} = 2^{-c_\mathsf{kn}} < 0.98491 and c_\mathsf{kn} is the \ell_2 kissing-number constant, unless non-uniform Gap-ETH is false.
    2. For all p \in [1, \infty), there is no 2^{o(n)}-time algorithm for \mathrm{BDD}_{p,\alpha} for any constant \alpha > \alpha^\ddagger_p, where \alpha^\ddagger_p is explicit and satisfies \alpha^\ddagger_p = 1 for 1 \leq p \leq 2, \alpha^\ddagger_p < 1 for p > 2, and \alpha^\ddagger_p \to 1/2 as p \to \infty, unless randomized Gap-ETH is false.
    3. For all p \in [1, \infty) \setminus 2\mathbb{Z} and all C > 1, there is no 2^{n/C}-time algorithm for \mathrm{BDD}_{p,\alpha} for any constant \alpha > \alpha^\dagger_{p,C}, where \alpha^\dagger_{p,C} is explicit and satisfies \alpha^\dagger_{p,C} \to 1 as C \to \infty for any fixed p \in [1, \infty), unless non-uniform Gap-SETH is false.
    4. For all p > p_0 \approx 2.1397 with p \notin 2\mathbb{Z}, and all C > C_p, there is no 2^{n/C}-time algorithm for \mathrm{SVP}_{p,\gamma} for some constant \gamma > 1, where C_p > 1 is explicit and satisfies C_p \to 1 as p \to \infty, unless randomized Gap-SETH is false.
    Comment: ITCS 2022.
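
    For reference, the two problems above, stated in the standard way (a paraphrase, not text from the paper):

        \mathrm{BDD}_{p,\alpha}: given a basis B of a lattice L and a target t promised to satisfy \mathrm{dist}_p(t, L) \leq \alpha \cdot \lambda_1^{(p)}(L), find v \in L minimizing \|v - t\|_p.
        \mathrm{SVP}_{p,\gamma} (decisional): given a basis B and r > 0, decide whether \lambda_1^{(p)}(L(B)) \leq r or \lambda_1^{(p)}(L(B)) > \gamma r (one of the two is promised).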

    A Size-Free CLT for Poisson Multinomials and its Applications

    An (n,k)-Poisson Multinomial Distribution (PMD) is the distribution of the sum of n independent random vectors supported on the set {\cal B}_k = \{e_1, \ldots, e_k\} of standard basis vectors in \mathbb{R}^k. We show that any (n,k)-PMD is {\rm poly}(k/\sigma)-close in total variation distance to the (appropriately discretized) multi-dimensional Gaussian with the same first two moments, removing the dependence on n from the Central Limit Theorem of Valiant and Valiant. Interestingly, our CLT is obtained by bootstrapping the Valiant-Valiant CLT itself through the structural characterization of PMDs shown in recent work by Daskalakis, Kamath, and Tzamos. In turn, our stronger CLT can be leveraged to obtain an efficient PTAS for approximate Nash equilibria in anonymous games, significantly improving the state of the art, and matching qualitatively the running-time dependence on n and 1/\varepsilon of the best known algorithm for two-strategy anonymous games. Our new CLT also enables the construction of covers for the set of (n,k)-PMDs, which are proper and whose size is shown to be essentially optimal. Our cover construction combines our CLT with the Shapley-Folkman theorem and recent sparsification results for Laplacian matrices by Batson, Spielman, and Srivastava. Our cover size lower bound is based on an algebraic geometric construction. Finally, leveraging the structural properties of the Fourier spectrum of PMDs, we show that these distributions can be learned from O_k(1/\varepsilon^2) samples in {\rm poly}_k(1/\varepsilon) time, removing the quasi-polynomial dependence of the running time on 1/\varepsilon from the algorithm of Daskalakis, Kamath, and Tzamos. Comment: To appear in STOC 2016.
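
    To make the object of study concrete, here is a short Python/NumPy sketch that samples from an (n,k)-PMD as defined in the first sentence above. The per-vector probability rows are an input assumption of the sketch (the abstract only fixes the support, not how the component distributions are chosen); none of the paper's CLT, cover, or learning results are implemented here.

        import numpy as np

        def sample_pmd(prob_rows, num_samples=1):
            # prob_rows: an n x k array; row i is the distribution of the i-th
            # independent random vector over the standard basis e_1, ..., e_k.
            # Each sample is the coordinate-wise sum of the n chosen basis vectors.
            prob_rows = np.asarray(prob_rows, dtype=float)
            n, k = prob_rows.shape
            out = np.zeros((num_samples, k), dtype=int)
            for s in range(num_samples):
                for i in range(n):
                    j = np.random.choice(k, p=prob_rows[i])
                    out[s, j] += 1
            return out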