    Solving the Shortest Vector Problem in Lattices Faster Using Quantum Search

    By applying Grover's quantum search algorithm to the lattice algorithms of Micciancio and Voulgaris, Nguyen and Vidick, Wang et al., and Pujol and Stehlé, we obtain improved asymptotic quantum results for solving the shortest vector problem. With quantum computers we can provably find a shortest vector in time $2^{1.799n + o(n)}$, improving upon the classical time complexity of $2^{2.465n + o(n)}$ of Pujol and Stehlé and the $2^{2n + o(n)}$ of Micciancio and Voulgaris, while heuristically we expect to find a shortest vector in time $2^{0.312n + o(n)}$, improving upon the classical time complexity of $2^{0.384n + o(n)}$ of Wang et al. These quantum complexities will be an important guide for the selection of parameters for post-quantum cryptosystems based on the hardness of the shortest vector problem. (Comment: 19 pages.)
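
    The quoted exponents can be situated with a back-of-the-envelope sketch of where a Grover-type speedup enters a sieve (an illustration, not the paper's analysis; the list-size exponent c ≈ 0.2075 is the standard heuristic estimate for Nguyen-Vidick-style sieving and is assumed here only for the example):

```latex
% Illustrative sketch, not the paper's analysis. Assume a heuristic sieve
% keeps a list of N = 2^{cn + o(n)} lattice vectors and, for each vector,
% searches the list for a reduction partner.
\[
  T_{\mathrm{classical}} \approx N \cdot N        = 2^{2cn + o(n)}, \qquad
  T_{\mathrm{quantum}}   \approx N \cdot \sqrt{N} = 2^{1.5cn + o(n)}
  \quad \text{(Grover search over the list)}.
\]
% With the standard heuristic list size c ~ 0.2075, the quantum exponent
% 1.5c ~ 0.31 matches the 2^{0.312n + o(n)} quoted above; the classical
% baseline 2^{0.384n + o(n)} of Wang et al. is already below 2c ~ 0.415
% because their two-level sieve improves on the plain quadratic scan.
```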

    Faster tuple lattice sieving using spherical locality-sensitive filters

    To overcome the large memory requirement of classical lattice sieving algorithms for solving hard lattice problems, Bai-Laarhoven-Stehlé [ANTS 2016] studied tuple lattice sieving, where tuples instead of pairs of lattice vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017] recently improved upon their results for arbitrary tuple sizes, for example showing that a triple sieve can solve the shortest vector problem (SVP) in dimension $d$ in time $2^{0.3717d + o(d)}$, using a technique similar to locality-sensitive hashing for finding nearest neighbors. In this work, we generalize the spherical locality-sensitive filters of Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near neighbor searching on dense data sets, and we apply these techniques to tuple lattice sieving to obtain even better time complexities. For instance, our triple sieve heuristically solves SVP in time $2^{0.3588d + o(d)}$. For practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this shows that a triple sieve uses less space and less time than the current best near-linear space double sieve. (Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology ePrint Archive 2017/228, available at https://ia.cr/2017/122.)
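
    A minimal sketch of the filtering idea behind such results (not the paper's tuple construction): bucket normalized vectors by which random "filter" directions they are close to, and only attempt reductions between vectors sharing a bucket. The number of filters m and the threshold alpha below are illustrative choices, not values from the paper.

```python
import numpy as np

def build_filters(m, d, seed=0):
    """Sample m random unit vectors ("filters") on the sphere in R^d."""
    rng = np.random.default_rng(seed)
    f = rng.standard_normal((m, d))
    return f / np.linalg.norm(f, axis=1, keepdims=True)

def bucket_ids(v, filters, alpha):
    """Indices of filters close in angle to v: <v/|v|, f_i> >= alpha."""
    u = v / np.linalg.norm(v)
    return np.nonzero(filters @ u >= alpha)[0]

def filtered_pairs(vectors, filters, alpha):
    """Yield only pairs of vectors that share at least one filter bucket,
    instead of testing all ~N^2 pairs as a plain pairwise sieve would."""
    buckets = {}
    for i, v in enumerate(vectors):
        for b in bucket_ids(v, filters, alpha):
            buckets.setdefault(b, []).append(i)
    seen = set()
    for members in buckets.values():
        for a in range(len(members)):
            for b in range(a + 1, len(members)):
                if (members[a], members[b]) not in seen:
                    seen.add((members[a], members[b]))
                    yield members[a], members[b]

# Toy usage: look for pairs whose difference is shorter than both inputs.
vecs = np.random.default_rng(1).integers(-5, 6, size=(200, 25)).astype(float)
vecs = vecs[np.linalg.norm(vecs, axis=1) > 0]
filters = build_filters(64, 25)
short_pairs = [(i, j) for i, j in filtered_pairs(vecs, filters, alpha=0.3)
               if np.linalg.norm(vecs[i] - vecs[j])
               < min(np.linalg.norm(vecs[i]), np.linalg.norm(vecs[j]))]
print(len(short_pairs), "reducible pairs found among the filtered candidates")
```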

    Sieve algorithms for the shortest vector problem are practical

    The most famous lattice problem is the Shortest Vector Problem (SVP), which has many applications in cryptology. The best approximation algorithms known for SVP in high dimension rely on a subroutine for exact SVP in low dimension. In this paper, we assess the practicality of the best (theoretical) algorithm known for exact SVP in low dimension: the sieve algorithm proposed by Ajtai, Kumar and Sivakumar (AKS) in 2001. AKS is a randomized algorithm of time and space complexity $2^{O(n)}$, which is theoretically much lower than the super-exponential complexity of all alternative SVP algorithms. Surprisingly, no implementation and no practical analysis of AKS has ever been reported. It was in fact widely believed that AKS was impractical: for instance, Schnorr claimed in 2003 that the constant hidden in the $2^{O(n)}$ complexity was at least 30. In this paper, we show that AKS can actually be made practical: we present a heuristic variant of AKS whose running time is $(4/3+\epsilon)^n$ polynomial-time operations, and whose space requirement is $(4/3+\epsilon)^{n/2}$ polynomially many bits. Our implementation can experimentally find shortest lattice vectors up to dimension 50, but is slower than classical alternative SVP algorithms in these dimensions.
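
    A short sketch of where exponents of this shape come from in heuristic sieving (standard reasoning under the usual randomness heuristic, not the paper's exact analysis):

```latex
% Heuristic sketch, assuming list vectors behave like uniformly random
% directions of comparable norm. Two such vectors v, w reduce each other
% (i.e. |v - w| < max(|v|, |w|)) roughly when their angle is below 60 degrees,
% which for random directions happens with probability about
\[
  p \;\approx\; (\sin 60^{\circ})^{\,n + o(n)} \;=\; (3/4)^{\,n/2 + o(n)},
\]
% so the list must contain about
\[
  N \;\approx\; 1/p \;=\; (4/3)^{\,n/2 + o(n)}
\]
% vectors before reductions keep occurring, and comparing all pairs costs
% about N^2 = (4/3)^{n + o(n)} operations, matching the space and time
% exponents quoted above.
```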

    Improved Algorithms for the Shortest Vector Problem and the Closest Vector Problem in the Infinity Norm

    Blömer and Naewe [BN09] modified the randomized sieving algorithm of Ajtai, Kumar and Sivakumar [AKS01] to solve the shortest vector problem (SVP). The algorithm starts with $N = 2^{O(n)}$ randomly chosen vectors in the lattice and employs a sieving procedure to iteratively obtain shorter vectors in the lattice. The running time of the sieving procedure is quadratic in $N$. We study this problem for the special but important case of the $\ell_\infty$ norm. We give a new sieving procedure that runs in time linear in $N$, thereby significantly improving the running time of the algorithm for SVP in the $\ell_\infty$ norm. As in [AKS02, BN09], we also extend this algorithm to obtain significantly faster algorithms for approximate versions of the shortest vector problem and the closest vector problem (CVP) in the $\ell_\infty$ norm. We also show that the heuristic sieving algorithms of Nguyen and Vidick [NV08] and Wang et al. [WLTB11] can also be analyzed in the $\ell_\infty$ norm. The main technical contribution in this part is to calculate the expected volume of intersection of a unit ball centred at the origin and another ball of a different radius centred at a uniformly random point on the boundary of the unit ball. This might be of independent interest.
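
    The intersection-volume quantity mentioned at the end can be illustrated with a quick Monte Carlo estimate (illustrative only; the paper computes this expectation analytically, Euclidean balls are used here purely for the example, and the radius gamma is just a sample parameter):

```python
import numpy as np

def random_on_sphere(d, rng):
    """Uniformly random point on the boundary of the unit ball in R^d."""
    x = rng.standard_normal(d)
    return x / np.linalg.norm(x)

def random_in_ball(d, rng):
    """Uniformly random point inside the unit ball in R^d."""
    return random_on_sphere(d, rng) * rng.random() ** (1.0 / d)

def intersection_fraction(d, gamma, samples=200_000, seed=0):
    """Estimate vol(B(0,1) ∩ B(c,gamma)) / vol(B(0,1)) for a uniformly random
    center c on the boundary of B(0,1); by rotational symmetry the expectation
    equals the value for any single such c (for Euclidean balls)."""
    rng = np.random.default_rng(seed)
    c = random_on_sphere(d, rng)
    pts = np.array([random_in_ball(d, rng) for _ in range(samples)])
    return float(np.mean(np.linalg.norm(pts - c, axis=1) <= gamma))

# Example: fraction of the unit ball within distance 1.2 of a boundary point,
# in dimension 10.
print(intersection_fraction(d=10, gamma=1.2))
```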

    Faster Sieving for Shortest Lattice Vectors Using Spherical Locality-Sensitive Hashing

    Recently, it was shown that angular locality-sensitive hashing (LSH) can be used to significantly speed up lattice sieving, leading to heuristic time and space complexities for solving the shortest vector problem (SVP) of $2^{0.3366n + o(n)}$. We study the possibility of applying other LSH methods to sieving, and show that with the recent spherical LSH method of Andoni et al. we can heuristically solve SVP in time and space $2^{0.2972n + o(n)}$. We further show that a practical variant of the resulting SphereSieve is very similar to Wang et al.'s two-level sieve, with the key difference that we impose an order on the outer list of centers. Keywords: lattices, shortest vector problem, sieving algorithms, (approximate) nearest neighbor problem, locality-sensitive hashing.
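
    A minimal sketch of the angular (hyperplane) LSH mentioned at the start of this abstract, as it would be used to shortlist reduction partners in a sieve (this illustrates hyperplane hashing, not Andoni et al.'s spherical LSH; the parameters k and the number of tables are illustrative):

```python
import numpy as np

def hyperplane_hash(v, planes):
    """Angular (hyperplane) LSH: hash v to the sign pattern of its inner
    products with k random hyperplane normals; nearly-parallel vectors are
    likely to agree on most signs."""
    return tuple((planes @ v > 0).astype(int))

def build_tables(vectors, d, k=8, n_tables=4, seed=0):
    """Build several independent hash tables over the current sieve list."""
    rng = np.random.default_rng(seed)
    tables = []
    for _ in range(n_tables):
        planes = rng.standard_normal((k, d))
        buckets = {}
        for i, v in enumerate(vectors):
            buckets.setdefault(hyperplane_hash(v, planes), []).append(i)
        tables.append((planes, buckets))
    return tables

def candidates(v, tables):
    """Indices of stored vectors colliding with v in at least one table;
    a sieve would search these instead of the whole list."""
    out = set()
    for planes, buckets in tables:
        out.update(buckets.get(hyperplane_hash(v, planes), []))
    return out

# Toy usage: shortlist likely reduction partners for one list vector.
vecs = np.random.default_rng(1).integers(-5, 6, size=(300, 30)).astype(float)
tables = build_tables(vecs, d=30)
print(len(candidates(vecs[0], tables)), "candidates out of", len(vecs))
```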

    Quantum Lattice Sieving

    Lattices are very important objects in the effort to construct cryptographic primitives that are secure against quantum attacks. A central problem in the study of lattices is that of finding the shortest non-zero vector in the lattice. Asymptotically, sieving is the best known technique for solving the shortest vector problem; however, sieving requires memory exponential in the dimension of the lattice. As a consequence, enumeration algorithms are often used in place of sieving due to their linear memory complexity, despite their super-exponential runtime. In this work, we present a heuristic quantum sieving algorithm whose memory complexity is polynomial in the size of the lengths of the vectors sampled at the initial step of the sieve. In other words, unlike most sieving algorithms, the memory complexity of our algorithm does not depend on the number of sampled vectors at the initial step of the sieve. (Comment: A reviewer pointed out an error in the amplitude amplification step in the analysis of Theorem 6. While we believe this error can be resolved, we are not sure how to do it at the moment and are taking down this submission.)

    Progressive lattice sieving

    Most algorithms for hard lattice problems are based on the principle of rank reduction: to solve a problem in a $d$-dimensional lattice, one first solves one or more problem instances in a sublattice of rank $d - 1$, and then uses this information to find a solution to the original problem. Existing lattice sieving methods, however, tackle lattice problems such as the shortest vector problem (SVP) directly, and work with the full-rank lattice from the start. Lattice sieving further seems to benefit less from starting with reduced bases than other methods, and finding an approximate solution takes almost as long as finding an exact solution. These properties currently set sieving apart from other methods. In this work we consider a progressive approach to lattice sieving, where we gradually introduce new basis vectors only when the sieve has stabilized on the previous basis vectors. This leads to improved (heuristic) guarantees on finding approximate shortest vectors, a bigger practical impact of the quality of the basis on the run-time, better memory management, a smoother and more predictable behavior of the algorithm, and significantly faster convergence - compared to traditional approaches, we save a factor of 20 to 40 in the time complexity for SVP.
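
    A minimal sketch of the progressive loop structure described above (the sampling, the stabilization test, and the pairwise reduction routine are illustrative stand-ins, not the paper's implementation):

```python
import numpy as np

def reduce_pass(db):
    """One pass of pairwise reduction: replace v by v - w whenever that is
    strictly shorter. Returns True if anything improved."""
    changed = False
    for i in range(len(db)):
        for j in range(len(db)):
            if i == j:
                continue
            cand = db[i] - db[j]
            norm = np.linalg.norm(cand)
            if 0 < norm < np.linalg.norm(db[i]):
                db[i] = cand
                changed = True
    return changed

def sieve_until_stable(db, max_passes=20):
    """Keep reducing until the database stops improving ("stabilizes")."""
    for _ in range(max_passes):
        if not reduce_pass(db):
            break
    return db

def progressive_sieve(basis, samples_per_rank=30, start_rank=2, seed=0):
    """Progressive sieving: sieve in the sublattice spanned by the first k
    basis vectors, and only once it has stabilized introduce basis vector
    k+1, reusing the short vectors already found."""
    rng = np.random.default_rng(seed)
    d = basis.shape[0]
    db = []
    for k in range(start_rank, d + 1):
        coeffs = rng.integers(-3, 4, size=(samples_per_rank, k))
        fresh = [c @ basis[:k] for c in coeffs if np.any(c)]
        db = sieve_until_stable(db + fresh)   # previously found vectors are kept
    return min(db, key=np.linalg.norm)

# Toy usage on a small random integer basis (illustrative only).
B = np.random.default_rng(2).integers(-9, 10, size=(8, 8)).astype(float)
print("shortest vector found:", progressive_sieve(B))
```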