
    On the Quantitative Hardness of CVP

    For odd integers $p \geq 1$ (and $p = \infty$), we show that the Closest Vector Problem in the $\ell_p$ norm ($\mathrm{CVP}_p$) over rank-$n$ lattices cannot be solved in $2^{(1-\varepsilon) n}$ time for any constant $\varepsilon > 0$ unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to "almost all" values of $p \geq 1$, not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of $\mathrm{CVP}_2$ (i.e., CVP in the Euclidean norm), for which a $2^{n + o(n)}$-time algorithm is known. In particular, our result applies for any $p = p(n) \neq 2$ that approaches $2$ as $n \to \infty$. We also show a similar SETH-hardness result for $\mathrm{SVP}_\infty$; hardness of approximating $\mathrm{CVP}_p$ to within some constant factor under the so-called Gap-ETH assumption; and other quantitative hardness results for $\mathrm{CVP}_p$ and $\mathrm{CVPP}_p$ for any $1 \leq p < \infty$ under different assumptions.
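
    To make the problem being discussed concrete (a purely illustrative sketch, not an algorithm from the paper), the code below solves $\mathrm{CVP}_p$ by brute force: it enumerates integer coefficient vectors in a box $[-R, R]^n$ and keeps the lattice point closest to the target in the $\ell_p$ norm. The basis, target, coefficient bound $R$, and function name are assumptions made for the example; its $(2R+1)^n$ running time is the kind of exponential cost the hardness result above says cannot be brought below $2^{(1-\varepsilon)n}$ in general.

    # Minimal brute-force CVP_p sketch (illustration only): enumerate integer
    # coefficient vectors in [-R, R]^n and keep the lattice point closest to
    # the target t in the l_p norm. Exponential in the rank n, as expected.
    from itertools import product


    def cvp_bruteforce(basis, t, p=2.0, R=2):
        """basis: list of n basis vectors, t: target, p: norm, R: coefficient bound (assumed)."""
        n, dim = len(basis), len(t)
        best_vec, best_dist = None, float("inf")
        for coeffs in product(range(-R, R + 1), repeat=n):
            v = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(dim)]
            if p == float("inf"):
                dist = max(abs(v[j] - t[j]) for j in range(dim))
            else:
                dist = sum(abs(v[j] - t[j]) ** p for j in range(dim)) ** (1.0 / p)
            if dist < best_dist:
                best_vec, best_dist = v, dist
        return best_vec, best_dist


    # Toy usage: the lattice 3Z x 5Z and a nearby target, in the l_1 norm.
    print(cvp_bruteforce([[3.0, 0.0], [0.0, 5.0]], [4.0, 4.0], p=1))  # ([3.0, 5.0], 2.0)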

    Approximate Voronoi cells for lattices, revisited

    We revisit the approximate Voronoi cells approach for solving the closest vector problem with preprocessing (CVPP) on high-dimensional lattices, and settle the open problem of Doulgerakis-Laarhoven-De Weger [PQCrypto, 2019] of determining exact asymptotics on the volume of these Voronoi cells under the Gaussian heuristic. As a result, we obtain improved upper bounds on the time complexity of the randomized iterative slicer when using less than $2^{0.076d + o(d)}$ memory, and we show how to obtain time-memory trade-offs even when using less than $2^{0.048d + o(d)}$ memory. We also settle the open problem of obtaining a continuous trade-off between the size of the advice and the query time complexity, as the time complexity with subexponential advice in our approach scales as $d^{d/2 + o(d)}$, matching worst-case enumeration bounds, and achieving the same asymptotic scaling as average-case enumeration algorithms for the closest vector problem. Comment: 18 pages, 1 figure.
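
    For readers unfamiliar with the slicer, here is a bare-bones sketch of the (non-randomized) iterative slicer underlying the results above: given preprocessed advice in the form of a list $L$ of short lattice vectors, the target is repeatedly shortened by subtracting list vectors, and the accumulated difference is returned as a nearby lattice vector. The list construction, rerandomization, and the parameter regimes that give the quoted exponents are not reproduced; the function and variable names are illustrative.

    # Bare-bones iterative slicer sketch (CVPP): reduce the target t against a
    # preprocessed list L of short lattice vectors until no list vector shortens
    # the residual, then return t minus the residual as a close lattice vector.
    import numpy as np


    def iterative_slicer(L, t):
        """L: array of short lattice vectors (advice), t: target vector."""
        residual = np.array(t, dtype=float)
        progress = True
        while progress:
            progress = False
            for v in L:
                if np.linalg.norm(residual - v) < np.linalg.norm(residual):
                    residual = residual - v  # slicing step: shorter coset representative
                    progress = True
        return np.array(t, dtype=float) - residual  # a lattice vector close to t


    # Toy usage on Z^2 with advice vectors (+-1, 0) and (0, +-1).
    L = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
    print(iterative_slicer(L, [2.3, -1.7]))  # -> approximately [2., -2.]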

    Solving the Shortest Vector Problem in Lattices Faster Using Quantum Search

    By applying Grover's quantum search algorithm to the lattice algorithms of Micciancio and Voulgaris, Nguyen and Vidick, Wang et al., and Pujol and Stehlé, we obtain improved asymptotic quantum results for solving the shortest vector problem. With quantum computers we can provably find a shortest vector in time $2^{1.799n + o(n)}$, improving upon the classical time complexity of $2^{2.465n + o(n)}$ of Pujol and Stehlé and the $2^{2n + o(n)}$ of Micciancio and Voulgaris, while heuristically we expect to find a shortest vector in time $2^{0.312n + o(n)}$, improving upon the classical time complexity of $2^{0.384n + o(n)}$ of Wang et al. These quantum complexities will be an important guide for the selection of parameters for post-quantum cryptosystems based on the hardness of the shortest vector problem. Comment: 19 pages.
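
    As a rough indication of where a heuristic exponent near $0.31n$ can come from (a back-of-the-envelope sketch, not the paper's derivation), consider a Nguyen-Vidick-style sieve with the usual heuristic list size $N$, and replace the inner search over the list by Grover search:

    \[
      N = \left(\tfrac{4}{3}\right)^{n/2 + o(n)} \approx 2^{0.2075 n}, \qquad
      T_{\text{classical}} \approx N^{2} \approx 2^{0.415 n}, \qquad
      T_{\text{quantum}} \approx N \cdot \sqrt{N} = N^{3/2} \approx 2^{0.311 n},
    \]
    which is consistent with the heuristic $2^{0.312n + o(n)}$ figure quoted above.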

    Solving the Closest Vector Problem in $2^n$ Time -- The Discrete Gaussian Strikes Again!

    We give a $2^{n+o(n)}$-time and space randomized algorithm for solving the exact Closest Vector Problem (CVP) on $n$-dimensional Euclidean lattices. This improves on the previous fastest algorithm, the deterministic $\widetilde{O}(4^{n})$-time and $\widetilde{O}(2^{n})$-space algorithm of Micciancio and Voulgaris. We achieve our main result in three steps. First, we show how to modify the sampling algorithm from [ADRS15] to solve the problem of discrete Gaussian sampling over lattice shifts, $L - t$, with very low parameters. While the actual algorithm is a natural generalization of [ADRS15], the analysis uses substantial new ideas. This yields a $2^{n+o(n)}$-time algorithm for approximate CVP for any approximation factor $\gamma = 1 + 2^{-o(n/\log n)}$. Second, we show that the approximate closest vectors to a target vector $t$ can be grouped into "lower-dimensional clusters," and we use this to obtain a recursive reduction from exact CVP to a variant of approximate CVP that "behaves well with these clusters." Third, we show that our discrete Gaussian sampling algorithm can be used to solve this variant of approximate CVP. The analysis depends crucially on some new properties of the discrete Gaussian distribution and approximate closest vectors, which might be of independent interest.
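
    As a self-contained illustration of the basic primitive used above (discrete Gaussian sampling over a lattice shift, here just the one-dimensional lattice $\mathbb{Z}$ shifted by a real $c$), the sketch below uses plain rejection sampling. It says nothing about the clustering and recursion in the paper; the tail-cut width and parameter names are assumptions made for the example.

    # Rejection-sampling sketch of a 1-D discrete Gaussian over the shifted
    # lattice Z - c with parameter s: pick a point of Z - c uniformly inside a
    # tail cut of +-10s, accept with probability exp(-pi x^2 / s^2).
    import math
    import random


    def discrete_gaussian_shift(c, s, tail=10):
        """Sample x from D_{Z - c, s}: x = k - c, k integer, Pr[x] ~ exp(-pi x^2 / s^2)."""
        lo, hi = math.ceil(c - tail * s), math.floor(c + tail * s)
        while True:
            k = random.randint(lo, hi)  # uniform integer in the tail-cut window
            x = k - c                   # a point of the shifted lattice Z - c
            if random.random() < math.exp(-math.pi * x * x / (s * s)):
                return x


    # Toy usage: samples concentrate on the points of Z - 0.3 closest to 0.
    print([discrete_gaussian_shift(c=0.3, s=1.5) for _ in range(5)])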

    Sieve algorithms for the shortest vector problem are practical

    The most famous lattice problem is the Shortest Vector Problem (SVP), which has many applications in cryptology. The best approximation algorithms known for SVP in high dimension rely on a subroutine for exact SVP in low dimension. In this paper, we assess the practicality of the best (theoretical) algorithm known for exact SVP in low dimension: the sieve algorithm proposed by Ajtai, Kumar and Sivakumar (AKS) in 2001. AKS is a randomized algorithm of time and space complexity $2^{O(n)}$, which is theoretically much lower than the super-exponential complexity of all alternative SVP algorithms. Surprisingly, no implementation and no practical analysis of AKS has ever been reported. It was in fact widely believed that AKS was impractical: for instance, Schnorr claimed in 2003 that the constant hidden in the $2^{O(n)}$ complexity was at least 30. In this paper, we show that AKS can actually be made practical: we present a heuristic variant of AKS whose running time is $(4/3+\epsilon)^{n}$ polynomial-time operations, and whose space requirement is $(4/3+\epsilon)^{n/2}$ polynomially many bits. Our implementation can experimentally find shortest lattice vectors up to dimension 50, but is slower than classical alternative SVP algorithms in these dimensions.
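
    To make concrete what "sieving" refers to here (a toy sketch, not the authors' heuristic variant of AKS), the code below implements the pairwise-reduction loop at the heart of sieve algorithms: vectors in a pool are repeatedly reduced against each other until no pair yields a shorter vector. The pool size needed for such reductions to keep succeeding is what drives bounds like $(4/3+\epsilon)^{n/2}$; the random pool and basis below are placeholders.

    # Sketch of the pairwise-reduction step at the heart of heuristic sieving:
    # whenever v - w or v + w is a shorter nonzero lattice vector than v,
    # replace v by it; iterate until no pair reduces any further.
    import numpy as np


    def sieve_reduce(pool):
        """pool: iterable of lattice vectors; returns the pool after pairwise reduction."""
        pool = [np.array(v, dtype=float) for v in pool]
        changed = True
        while changed:
            changed = False
            for i in range(len(pool)):
                for j in range(len(pool)):
                    if i == j:
                        continue
                    for cand in (pool[i] - pool[j], pool[i] + pool[j]):
                        if 0 < np.linalg.norm(cand) < np.linalg.norm(pool[i]):
                            pool[i], changed = cand, True
        return pool


    # Toy usage: random integer combinations of a rank-2 basis; the survivors
    # should be far shorter than the inputs.
    rng = np.random.default_rng(0)
    basis = np.array([[7, 3], [4, 2]])
    pool = rng.integers(-5, 6, size=(30, 2)) @ basis
    print(sorted(sieve_reduce(pool), key=np.linalg.norm)[:5])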

    Sieve, Enumerate, Slice, and Lift: Hybrid Lattice Algorithms for SVP via CVPP

    Motivated by recent results on solving large batches of closest vector problem (CVP) instances, we study how these techniques can be combined with lattice enumeration to obtain faster methods for solving the shortest vector problem (SVP) on high-dimensional lattices. Theoretically, under common heuristic assumptions we show how to solve SVP in dimension $d$ with a cost proportional to running a sieve in dimension $d - \Theta(d / \log d)$, resulting in a $2^{\Theta(d / \log d)}$ speedup and memory reduction compared to running a full sieve. Combined with techniques from [Ducas, Eurocrypt 2018] we can asymptotically get a total of $[\log(13/9) + o(1)] \cdot d / \log d$ dimensions for free for solving SVP. Practically, the main obstacles to observing a speedup in moderate dimensions appear to be that the leading constant in the $\Theta(d / \log d)$ term is rather small; that the overhead of the (batched) slicer may be large; and that competitive enumeration algorithms rely heavily on aggressive pruning techniques, which appear to be incompatible with our algorithms. These obstacles prevented this asymptotic speedup (compared to full sieving) from being observed in our experiments, but it could be expected to become visible once optimized CVPP techniques are used in higher-dimensional experiments.
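
    The size of the claimed gain follows from a one-line calculation under the heuristic cost model used above, in which sieving in dimension $d$ costs $2^{c d + o(d)}$ for some constant $c$ (a sketch of the accounting, not of the algorithm):

    \[
      \frac{2^{c d + o(d)}}{2^{c (d - k) + o(d)}} = 2^{c k + o(d)} = 2^{\Theta(d / \log d)}
      \qquad \text{for } k = \Theta(d / \log d),
    \]
    so running the sieve in dimension $d - \Theta(d / \log d)$, and recovering the remaining coordinates by enumeration, slicing, and lifting, saves a $2^{\Theta(d / \log d)}$ factor in both time and memory.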