30 research outputs found

    On the Closest Vector Problem with a Distance Guarantee

    Get PDF
    We present a substantially more efficient variant, both in terms of running time and size of preprocessing advice, of the algorithm by Liu, Lyubashevsky, and Micciancio for solving CVPP (the preprocessing version of the Closest Vector Problem, CVP) with a distance guarantee. For instance, for any α < 1/2, our algorithm finds the (unique) closest lattice point for any target point whose distance from the lattice is at most α times the length of the shortest nonzero lattice vector, requires as preprocessing advice only N ≈ Õ(n · exp(α²n/(1−2α)²)) vectors, and runs in time Õ(nN). As our second main contribution, we present reductions showing that it suffices to solve CVP, both in its plain and preprocessing versions, when the input target point is within some bounded distance of the lattice. The reductions are based on ideas due to Kannan and a recent sparsification technique due to Dadush and Kun. Combining our reductions with the LLM algorithm gives an approximation factor of O(n/√(log n)) for search CVPP, improving on the previous best of O(n^1.5) due to Lagarias, Lenstra, and Schnorr. When combined with our improved algorithm we obtain, somewhat surprisingly, that only O(n) vectors of preprocessing advice are sufficient to solve CVPP with (the only slightly worse) approximation factor of O(n).
    Comment: An early version of the paper was titled "On Bounded Distance Decoding and the Closest Vector Problem with Preprocessing". Conference on Computational Complexity (2014)
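The bounded-distance setting described above can be illustrated with Babai's classic nearest-plane algorithm, which is far weaker than the CVPP algorithms in the paper but conveys the idea: given a basis and a target close enough to the lattice, it recovers a nearby (and, in the bounded-distance regime, the closest) lattice point. A minimal sketch in exact rational arithmetic; this is standard textbook material, not the paper's algorithm:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Exact Gram-Schmidt orthogonalization over the rationals."""
    B = [[Fraction(x) for x in v] for v in basis]
    gs = []
    for i in range(len(B)):
        v = B[i][:]
        for g in gs:
            mu = dot(B[i], g) / dot(g, g)
            v = [vi - mu * gi for vi, gi in zip(v, g)]
        gs.append(v)
    return gs

def babai_nearest_plane(basis, target):
    """Babai's nearest-plane algorithm: returns a lattice point close to
    `target`; it is the closest point only when `target` is sufficiently
    near the lattice (the bounded-distance regime)."""
    gs = gram_schmidt(basis)
    b = [Fraction(x) for x in target]
    for i in reversed(range(len(basis))):
        # pick the nearest translate of the hyperplane spanned by b_0..b_{i-1}
        c = round(dot(b, gs[i]) / dot(gs[i], gs[i]))
        b = [bj - c * Fraction(vj) for bj, vj in zip(b, basis[i])]
    # target minus the final residual is the lattice point found
    return [int(t - r) for t, r in zip(target, b)]
```

For example, with the basis {(2, 0), (1, 2)} and a target already on the lattice, the algorithm returns the target itself.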

    Reduction algorithms for the cryptanalysis of lattice based asymmetrical cryptosystems

    Get PDF
    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2008. Includes bibliographical references (leaves: 79-91). Text in English; abstract in Turkish and English. xi, 119 leaves.
    The theory of lattices has attracted a great deal of attention in cryptology in recent years. Several cryptosystems are constructed based on the hardness of lattice problems such as the shortest vector problem and the closest vector problem. The aim of this thesis is to study the most commonly used lattice basis reduction algorithms, namely the Lenstra-Lenstra-Lovász (LLL) and Block Korkine-Zolotarev (BKZ) algorithms, which are used to approximately solve the aforementioned lattice problems. Furthermore, the most popular practical variants of these algorithms are evaluated experimentally by varying the common reduction parameter delta, in order to offer practical assessments of the effect of this parameter on the basis reduction process. Such practical assessments are believed to have a non-negligible impact on the theory of lattice reduction, and hence on the cryptanalysis of lattice cryptosystems, since the reduction process in practice is largely controlled by heuristics.
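The role of the reduction parameter delta studied in the thesis is visible in a textbook LLL implementation, where delta appears in the Lovász condition that decides whether adjacent basis vectors are swapped. A minimal, unoptimized sketch (Gram-Schmidt data is recomputed from scratch at each step for clarity; the practical implementations benchmarked in the thesis update it incrementally):

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction; `delta` is the Lovász parameter in (1/4, 1]."""
    b = [[Fraction(x) for x in v] for v in basis]
    n, delta = len(b), Fraction(delta)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gso():
        gs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], gs[j]) / dot(gs[j], gs[j])
                v = [vi - mu[i][j] * g for vi, g in zip(v, gs[j])]
            gs.append(v)
        return gs, mu

    k = 1
    while k < n:
        for j in reversed(range(k)):          # size-reduce b_k
            gs, mu = gso()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        gs, mu = gso()
        if dot(gs[k], gs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(gs[k - 1], gs[k - 1]):
            k += 1                            # Lovász condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]   # swap and step back
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

Raising delta toward 1 tightens the Lovász condition, giving shorter output vectors at the cost of more swaps, which is exactly the trade-off the thesis evaluates experimentally.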

    Fast Lattice Point Enumeration with Minimal Overhead

    Get PDF
    Enumeration algorithms are the best currently known methods to solve lattice problems, both in theory (within the class of polynomial-space algorithms) and in practice (where they are routinely used to evaluate the concrete security of lattice cryptography). However, there is an uncomfortable gap between our theoretical understanding and the practical performance of lattice point enumeration algorithms. The algorithms typically used in practice have worst-case asymptotic running time 2^{O(n^2)}, but perform extremely well in practice, at least for all values of the lattice dimension for which experimentation is feasible. At the same time, theoretical algorithms (Kannan, Mathematics of Operations Research 12(3):415-440, 1987) are asymptotically superior (achieving 2^{O(n log n)} running time), but they are never used in practice because they incur a substantial overhead that makes them uncompetitive for all reasonable values of the lattice dimension n. This gap is especially troublesome when algorithms are run in practice to evaluate the concrete security of a cryptosystem, and the experimental results are then extrapolated to much larger dimensions where solving lattice problems is computationally infeasible. We introduce a new class of (polynomial-space) lattice enumeration algorithms that simultaneously achieve asymptotic efficiency (meeting the theoretical n^{O(n)} = 2^{O(n log n)} time bound) and practicality, matching or surpassing the performance of practical algorithms already in moderately low dimension.
    Key technical contributions that allow us to achieve this result are: a new analysis technique that greatly reduces the number of recursive calls performed during preprocessing (from super-exponential in n to single exponential, or even polynomial in n); a new enumeration technique that can be applied directly to projected lattice (basis) vectors, without the need to remove linear dependencies; and a modified block basis reduction method with fast (logarithmic) convergence. The last technique is used to obtain a new SVP enumeration procedure with Õ(n^{n/2e}) running time, matching (even in the constant in the exponent) the optimal worst-case analysis (Hanrot and Stehlé, CRYPTO 2007) of Kannan's theoretical algorithm, but with far superior performance in practice. We complement our theoretical analysis with a comprehensive set of experiments that not only support our practicality claims, but also allow us to estimate the cross-over points between different versions of enumeration algorithms, as well as asymptotically faster (but not quite practical) algorithms running in single exponential 2^{O(n)} time and space.
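The basic procedure that all these enumeration variants build on is a depth-first search over integer coefficient vectors, pruned by the partial squared norm of the projections onto the Gram-Schmidt directions. The sketch below is the standard textbook tree search (in the style of Schnorr-Euchner), not the paper's improved algorithm:

```python
import math

def gso(B):
    """Squared Gram-Schmidt norms and mu coefficients (floating point)."""
    n, gs = len(B), []
    mu = [[0.0] * n for _ in range(n)]
    for i in range(n):
        v = [float(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = sum(a * b for a, b in zip(B[i], gs[j])) / sum(g * g for g in gs[j])
            v = [vi - mu[i][j] * g for vi, g in zip(v, gs[j])]
        gs.append(v)
    return [sum(g * g for g in v) for v in gs], mu

def shortest_vector(B):
    """Depth-first enumeration of a shortest nonzero lattice vector."""
    n = len(B)
    gnorm, mu = gso(B)
    best_norm = float(sum(x * x for x in B[0]))  # initial bound: |b_0|^2
    best_x = [1] + [0] * (n - 1)
    x = [0] * n

    def enum(i, partial):
        nonlocal best_norm, best_x
        if i < 0:                       # full coefficient vector chosen
            if any(x):                  # exclude the zero vector
                best_norm, best_x = partial, x[:]
            return
        # the already-fixed coefficients determine the search interval at level i
        center = -sum(mu[l][i] * x[l] for l in range(i + 1, n))
        radius = math.sqrt(max(best_norm - partial, 0.0) / gnorm[i])
        for c in range(math.ceil(center - radius), math.floor(center + radius) + 1):
            d = partial + (c - center) ** 2 * gnorm[i]
            if d <= best_norm + 1e-9:   # prune subtrees that cannot improve
                x[i] = c
                enum(i - 1, d)
        x[i] = 0

    enum(n - 1, 0.0)
    v = [sum(best_x[i] * B[i][j] for i in range(n)) for j in range(len(B[0]))]
    return v, best_norm
```

The quality of the input basis (hence the preprocessing discussed above) governs the Gram-Schmidt norms and therefore the size of the search tree.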

    Lattice-based cryptography

    Get PDF

    A Lattice Basis Reduction Approach for the Design of Finite Wordlength FIR Filters

    Get PDF
    Many applications of finite impulse response (FIR) digital filters impose strict format constraints on the filter coefficients. Such requirements increase the complexity of determining optimal designs for the problem at hand. We introduce a fast and efficient method based on the computation of good nodes for polynomial interpolation and on Euclidean lattice basis reduction. Experiments show that it returns quasi-optimal finite wordlength FIR filters; compared to previous approaches it also scales remarkably well (length-125 filters are treated in under 9 s). It also proves useful for accelerating the determination of optimal finite wordlength FIR filters.
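To see why finite wordlength constraints are nontrivial, it helps to look at the naive baseline that such methods improve on: rounding each ideal coefficient independently to the nearest representable fixed-point value and measuring the resulting frequency-response error. The sketch below uses an illustrative truncated ideal lowpass (sinc) filter, not an example from the paper:

```python
import cmath
import math

def freq_response(h, w):
    """H(e^{jw}) of an FIR filter with taps h."""
    return sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

def quantize(h, bits):
    """Naive rounding of each tap to the nearest multiple of 2^-bits."""
    scale = 2 ** bits
    return [round(hk * scale) / scale for hk in h]

# Illustrative 11-tap truncated ideal lowpass, cutoff 0.25*pi
N, wc = 11, 0.25 * math.pi
mid = (N - 1) / 2
h = [wc / math.pi if k == mid else math.sin(wc * (k - mid)) / (math.pi * (k - mid))
     for k in range(N)]

hq = quantize(h, 8)  # 8 fractional bits
grid = [math.pi * i / 256 for i in range(257)]
err = max(abs(freq_response(h, w) - freq_response(hq, w)) for w in grid)
```

Per-tap rounding bounds the response error only by the sum of the individual errors (here at most N·2^-9); treating all taps jointly as a closest-vector problem in a suitable lattice, as the paper does, is what allows quasi-optimal designs.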

    On finding dense sub-lattices as low energy states of a quantum Hamiltonian

    Full text link
    Lattice-based cryptography has emerged as one of the most prominent candidates for post-quantum cryptography, projected to be secure against the imminent threat of large-scale fault-tolerant quantum computers. The Shortest Vector Problem (SVP) is to find the shortest non-zero vector in a given lattice. It is fundamental to lattice-based cryptography and believed to be hard even for quantum computers. We study a natural generalization of the SVP known as the K-Densest Sub-lattice Problem (K-DSP): to find the densest K-dimensional sub-lattice of a given lattice. We formulate K-DSP as finding the first excited state of a Z-basis Hamiltonian, making K-DSP amenable to investigation via an array of quantum algorithms, including Grover search, quantum Gibbs sampling, adiabatic, and variational quantum algorithms. The complexity of the algorithms depends on the basis through which the input lattice is presented. We present a classical polynomial-time algorithm that takes an arbitrary input basis and preprocesses it into inputs suited to quantum algorithms. With preprocessing, we prove that O(KN²) qubits suffice for solving K-DSP for N-dimensional input lattices. We empirically demonstrate the performance of a Quantum Approximate Optimization Algorithm K-DSP solver for low dimensions, highlighting the influence of a good preprocessed input basis. We then discuss the hardness of K-DSP in relation to the SVP, to see if there is reason to build post-quantum cryptography on K-DSP. We devise a quantum algorithm that solves K-DSP with run-time exponent (5KN log N)/2. Therefore, for fixed K, K-DSP is no more than polynomially harder than the SVP.
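The quantity maximized in K-DSP is the density of a K-dimensional sub-lattice, which is inversely proportional to its K-dimensional volume, i.e. to the square root of the determinant of the Gram matrix B·Bᵀ of a sub-lattice basis B. A small exact computation of this Gram determinant (the example lattice and sub-lattices below are illustrative, not taken from the paper):

```python
from fractions import Fraction

def gram_det(vectors):
    """Determinant of the Gram matrix B B^T, i.e. the squared K-dimensional
    volume of the parallelepiped spanned by `vectors`, computed exactly by
    fraction-based Gaussian elimination."""
    K = len(vectors)
    G = [[Fraction(sum(a * b for a, b in zip(u, v))) for v in vectors]
         for u in vectors]
    det = Fraction(1)
    for i in range(K):
        p = next((r for r in range(i, K) if G[r][i] != 0), None)
        if p is None:
            return Fraction(0)          # linearly dependent vectors
        if p != i:
            G[i], G[p] = G[p], G[i]
            det = -det
        det *= G[i][i]
        for r in range(i + 1, K):
            f = G[r][i] / G[i][i]
            G[r] = [x - f * y for x, y in zip(G[r], G[i])]
    return det

# Two rank-2 sub-lattices of Z^3: the first has squared volume 1,
# the second 4, so the first is the denser sub-lattice.
d1 = gram_det([[1, 0, 0], [0, 1, 0]])
d2 = gram_det([[1, 1, 0], [1, -1, 0]])
```

K-DSP asks for the integer sub-lattice basis minimizing this determinant; for K = 1 it reduces to the SVP, since the Gram determinant of a single vector is its squared length.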

    Topics in Lattice Sieving

    Get PDF

    On the concrete hardness of Learning with Errors

    Get PDF
    The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since several methods of solving LWE require a lattice reduction step, we also review lattice reduction algorithms and propose a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates, and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.
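A concrete LWE instance of the kind whose hardness such estimates address can be generated in a few lines: samples (A, b = A·s + e mod q) with a uniform secret and small rounded-Gaussian error. The parameters below (n = 8, q = 97, σ = 1) are toy values for illustration, far below cryptographic size:

```python
import random

def lwe_sample(n, m, q, s, sigma=1):
    """Generate m LWE samples: uniform A in Z_q^{m x n} and
    b = A s + e mod q with rounded-Gaussian error e."""
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [round(random.gauss(0, sigma)) for _ in range(m)]
    b = [(sum(a * si for a, si in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return A, b

random.seed(0)                            # reproducible toy instance
n, m, q = 8, 16, 97
s = [random.randrange(q) for _ in range(n)]
A, b = lwe_sample(n, m, q, s)
```

Recovering s from (A, b) alone is the search-LWE problem; the attacks surveyed in the paper (and its estimator) bound the cost of doing so as a function of n, q, and the error width.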