
    Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater Than One

    We show the first dimension-preserving search-to-decision reductions for approximate SVP and CVP. In particular, for any $\gamma \leq 1 + O(\log n/n)$, we obtain an efficient dimension-preserving reduction from $\gamma^{O(n/\log n)}$-SVP to $\gamma$-GapSVP and an efficient dimension-preserving reduction from $\gamma^{O(n)}$-CVP to $\gamma$-GapCVP. These results generalize the known equivalences of the search and decision versions of these problems in the exact case, when $\gamma = 1$. For SVP, we actually obtain something slightly stronger than a search-to-decision reduction---we reduce $\gamma^{O(n/\log n)}$-SVP to $\gamma$-unique SVP, a potentially easier problem than $\gamma$-GapSVP. Comment: Updated to acknowledge additional prior work.
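
    As a point of reference, a minimal sketch of the standard warm-up step behind such reductions: a $\gamma$-GapSVP decision oracle, used as a black box, lets one binary-search for an estimate of $\lambda_1(L)$ up to a factor of roughly $\gamma$. The oracle `gap_svp_oracle` is a hypothetical placeholder; this is not the paper's reduction, which additionally recovers a short vector.

```python
# Hypothetical sketch: estimating lambda_1(L) with a gamma-GapSVP decision oracle.
# gap_svp_oracle(B, r) is a placeholder that must return True if lambda_1(L(B)) <= r
# and False if lambda_1(L(B)) > gamma * r; its answer may be arbitrary in between.
# This binary search is the standard warm-up step of search-to-decision reductions,
# not the dimension-preserving reduction described in the abstract above.

def estimate_lambda1(B, gap_svp_oracle, lo, hi, tol=1e-6):
    """Binary-search a radius around which the oracle's answers flip."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if gap_svp_oracle(B, mid):   # "yes": guarantees lambda_1 <= gamma * mid
            hi = mid
        else:                        # "no": guarantees lambda_1 > mid
            lo = mid
    # lambda_1 lies in (lo, gamma * hi], so hi approximates it within ~gamma.
    return hi
```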

    New Shortest Lattice Vector Problems of Polynomial Complexity

    The Shortest Lattice Vector (SLV) problem is in general hard to solve, except in special cases (such as root lattices and lattices for which an obtuse superbase is known). In this paper, we present a new class of SLV problems that can be solved efficiently. Specifically, if for an $n$-dimensional lattice a Gram matrix is known that can be written as the difference of a diagonal matrix and a positive semidefinite matrix of rank $k$ (for some constant $k$), we show that the SLV problem can be reduced to a $k$-dimensional optimization problem with countably many candidate points. Moreover, we show that the number of candidate points is bounded by a polynomial function of the ratio of the smallest diagonal element to the smallest eigenvalue of the Gram matrix. Hence, as long as this ratio is upper bounded by a polynomial function of $n$, the corresponding SLV problem can be solved in polynomial complexity. Our investigations are motivated by the emergence of such lattices in the field of Network Information Theory. Further applications may exist in other areas. Comment: 13 pages.
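
    A minimal sketch, under the structural assumption above, of how one might test whether a given Gram matrix $G$ admits a decomposition $G = D - P$ with $D$ a proposed diagonal matrix and $P$ positive semidefinite of low rank, and compute the diagonal-to-eigenvalue ratio that the abstract says bounds the number of candidate points. The function name and tolerance are illustrative, not from the paper, and the paper's actual algorithm (solving the resulting $k$-dimensional optimization problem) is not attempted here.

```python
import numpy as np

def check_decomposition(G, d, tol=1e-9):
    """For a Gram matrix G and a proposed diagonal d (so D = diag(d)), check
    whether P = D - G is positive semidefinite, report its numerical rank k,
    and compute the ratio (smallest diagonal entry of G) / (smallest eigenvalue
    of G), which per the abstract polynomially bounds the candidate points.
    Illustrative sanity check only; not the paper's SLV algorithm."""
    D = np.diag(d)
    P = D - G
    eigs_P = np.linalg.eigvalsh(P)
    is_psd = eigs_P.min() >= -tol
    rank_k = int(np.sum(eigs_P > tol))
    ratio = float(np.diag(G).min() / np.linalg.eigvalsh(G).min())
    return is_psd, rank_k, ratio
```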

    Solving the Closest Vector Problem in $2^n$ Time --- The Discrete Gaussian Strikes Again!

    We give a $2^{n+o(n)}$-time and space randomized algorithm for solving the exact Closest Vector Problem (CVP) on $n$-dimensional Euclidean lattices. This improves on the previous fastest algorithm, the deterministic $\widetilde{O}(4^{n})$-time and $\widetilde{O}(2^{n})$-space algorithm of Micciancio and Voulgaris. We achieve our main result in three steps. First, we show how to modify the sampling algorithm from [ADRS15] to solve the problem of discrete Gaussian sampling over lattice shifts, $L - t$, with very low parameters. While the actual algorithm is a natural generalization of [ADRS15], the analysis uses substantial new ideas. This yields a $2^{n+o(n)}$-time algorithm for approximate CVP for any approximation factor $\gamma = 1 + 2^{-o(n/\log n)}$. Second, we show that the approximate closest vectors to a target vector $t$ can be grouped into "lower-dimensional clusters," and we use this to obtain a recursive reduction from exact CVP to a variant of approximate CVP that "behaves well with these clusters." Third, we show that our discrete Gaussian sampling algorithm can be used to solve this variant of approximate CVP. The analysis depends crucially on some new properties of the discrete Gaussian distribution and approximate closest vectors, which might be of independent interest.
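
    For intuition about the central primitive, here is a toy rejection sampler for the one-dimensional discrete Gaussian over the shifted lattice $\mathbb{Z} - t$, where the probability of $x$ is proportional to $\exp(-\pi x^2/s^2)$. This only illustrates the distribution being discussed; it is not the paper's $2^{n+o(n)}$-time algorithm, whose difficulty lies in sampling with very small parameter $s$ in high dimension.

```python
import math
import random

def sample_dgauss_shifted_1d(t, s):
    """Rejection-sample x from the discrete Gaussian on Z - t with parameter s:
    Pr[x] proportional to exp(-pi * x^2 / s^2) for x in {z - t : z an integer}.
    Toy illustration only; can be slow when s is very small."""
    width = int(math.ceil(10 * s + 1))   # generous tail cut-off for the proposal
    center = int(round(t))
    while True:
        z = random.randint(center - width, center + width)  # uniform proposal
        x = z - t
        if random.random() < math.exp(-math.pi * x * x / (s * s)):
            return x
```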

    On the Lattice Isomorphism Problem

    We study the Lattice Isomorphism Problem (LIP), in which, given two lattices $L_1$ and $L_2$, the goal is to decide whether there exists an orthogonal linear transformation mapping $L_1$ to $L_2$. Our main result is an algorithm for this problem running in time $n^{O(n)}$ times a polynomial in the input size, where $n$ is the rank of the input lattices. A crucial component is a new generalized isolation lemma, which can isolate $n$ linearly independent vectors in a given subset of $\mathbb{Z}^n$ and might be useful elsewhere. We also prove that LIP lies in the complexity class SZK. Comment: 23 pages, SODA 2014.
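
    A small sanity-check sketch of the object LIP asks about: given column bases $B_1$, $B_2$ and a candidate witness consisting of an orthogonal matrix $O$ and a unimodular matrix $U$, verify that $O$ maps $\mathcal{L}(B_1)$ onto $\mathcal{L}(B_2)$, i.e. $O B_1 U = B_2$. Finding such a witness is the hard part and is what the $n^{O(n)}$-time algorithm addresses; this snippet only checks a given one, and the function name and tolerance are illustrative.

```python
import numpy as np

def verify_lip_witness(B1, B2, O, U, tol=1e-8):
    """Check that O is orthogonal, U is unimodular (integer entries, det = +-1),
    and that O maps the lattice of B1 onto the lattice of B2, using the
    column-basis convention L(B) = {B z : z an integer vector}.
    Verifies a proposed witness only; it does not search for one."""
    orthogonal = np.allclose(O.T @ O, np.eye(O.shape[0]), atol=tol)
    unimodular = (np.allclose(U, np.round(U), atol=tol)
                  and np.isclose(abs(np.linalg.det(U)), 1.0, atol=tol))
    maps_basis = np.allclose(O @ B1 @ U, B2, atol=tol)
    return orthogonal and unimodular and maps_basis
```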

    Solving the Shortest Vector Problem in Lattices Faster Using Quantum Search

    By applying Grover's quantum search algorithm to the lattice algorithms of Micciancio and Voulgaris, Nguyen and Vidick, Wang et al., and Pujol and Stehlé, we obtain improved asymptotic quantum results for solving the shortest vector problem. With quantum computers we can provably find a shortest vector in time $2^{1.799n + o(n)}$, improving upon the classical time complexities of $2^{2.465n + o(n)}$ of Pujol and Stehlé and $2^{2n + o(n)}$ of Micciancio and Voulgaris, while heuristically we expect to find a shortest vector in time $2^{0.312n + o(n)}$, improving upon the classical time complexity of $2^{0.384n + o(n)}$ of Wang et al. These quantum complexities will be an important guide for the selection of parameters for post-quantum cryptosystems based on the hardness of the shortest vector problem. Comment: 19 pages.
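
    A toy back-of-the-envelope calculation of where such speedups come from: Grover search over an unstructured list of $N$ items uses roughly $\sqrt{N}$ queries instead of roughly $N$, so replacing the inner search of a sieve whose lists have size $2^{cn}$ roughly halves that component's exponent. The constants 1.799 and 0.312 in the abstract come from a more careful analysis of the specific algorithms, not from this sketch; the exponent $c$ below is illustrative, not taken from the paper.

```python
import math

def search_exponents(c, n):
    """Return (classical, quantum) base-2 query exponents for an unstructured
    search over a list of size 2^(c*n): ~N classically vs ~sqrt(N) with Grover.
    Toy comparison only, ignoring constant factors and per-query costs."""
    classical = c * n
    quantum = c * n / 2.0          # Grover's quadratic speedup
    return classical, quantum

# Example: an inner search over a list of size 2^(0.21 * 100).
print(search_exponents(c=0.21, n=100))   # (21.0, 10.5)
```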