10 research outputs found

    Second order statistical behavior of LLL and BKZ

    The LLL algorithm (from Lenstra, Lenstra and Lovász) and its generalization BKZ (from Schnorr and Euchner) are widely used in cryptanalysis, especially for lattice-based cryptography. Precisely understanding their behavior is crucial for deriving appropriate key sizes for cryptographic schemes subject to lattice-reduction attacks. Current models, e.g. the Geometric Series Assumption and Chen-Nguyen's BKZ simulator, provide a decent first-order analysis of the behavior of LLL and BKZ, but they focus only on average behavior and are not perfectly accurate. In this work, we initiate a second-order analysis of this behavior. We confirm and quantify discrepancies between models and experiments, in particular in the head and tail regions, and study their consequences. We also provide statistics on variations around the mean and on correlations, and study their impact. While mostly based on experiments, our study points at and quantifies unaccounted-for phenomena, setting the ground for a theoretical and predictive understanding of the performance of LLL and BKZ at the second order.
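
    To make the Geometric Series Assumption mentioned above concrete, here is a minimal Python sketch of the first-order prediction it gives for the Gram-Schmidt log-profile, which second-order studies such as this one compare against experiments. The dimension, volume and root-Hermite-factor value below are illustrative assumptions, not taken from the paper.

```python
import math

def gsa_log_profile(n, log_vol, delta):
    """First-order GSA prediction of the Gram-Schmidt log-norms
    log||b_i*|| of a reduced basis of an n-dimensional lattice:
    the profile is a straight line of slope -2*log(delta) whose
    entries sum to log(vol(L))."""
    slope = -2.0 * math.log(delta)
    # Choose the intercept so that sum_i log||b_i*|| == log_vol.
    intercept = log_vol / n - slope * (n - 1) / 2.0
    return [intercept + slope * i for i in range(n)]

# Illustrative parameters: dimension 100, unit-volume lattice, and
# delta = 1.0219, a commonly quoted root Hermite factor for LLL.
profile = gsa_log_profile(100, 0.0, 1.0219)
print(profile[0])    # log||b_1|| = (n-1)*log(delta) here
print(sum(profile))  # ~ 0.0: the total log-volume is preserved
```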

    On Dual Lattice Attacks Against Small-Secret LWE and Parameter Choices in HElib and SEAL

    We present novel variants of the dual-lattice attack against LWE in the presence of an unusually short secret. These variants are informed by recent progress in BKW-style algorithms for solving LWE. Applying them to parameter sets suggested by the homomorphic encryption libraries HElib and SEAL v2.0 yields revised security estimates. Our techniques scale the exponent of the dual-lattice attack by a factor of $2L/(2L+1)$ when $\log q = \Theta(L \log n)$, when the secret has constant Hamming weight $h$ and where $L$ is the maximum depth of supported circuits. They also allow halving the dimension of the lattice under consideration at a multiplicative cost of $2^h$ operations. Moreover, our techniques yield revised concrete security estimates. For example, both libraries promise 80 bits of security for LWE instances with $n = 1024$ and $\log_2 q \approx 47$, while the techniques described in this work lead to estimated costs of 68 bits (SEAL v2.0) and 62 bits (HElib).
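
    As a quick worked illustration of the scaling claims (a sketch with made-up numbers, not the paper's analysis): if a dual-lattice attack costs $2^c$ operations, scaling the exponent by $2L/(2L+1)$ and trading dimension for $2^h$ secret guesses act on $c$ as follows.

```python
def scaled_exponent(c, L):
    """Scale a dual-attack cost exponent c by 2L/(2L+1), the factor the
    paper derives for depth-L homomorphic parameter sets with
    log q = Theta(L log n)."""
    return c * (2 * L) / (2 * L + 1)

def halved_dimension_exponent(c_half, h):
    """Halving the lattice dimension costs a multiplicative 2^h, i.e.
    adds h to the cost exponent, where h is the secret's Hamming
    weight and c_half is the exponent of the attack in the halved
    dimension (both values here are hypothetical)."""
    return c_half + h

print(scaled_exponent(80, 5))             # 80 * 10/11 ~ 72.7 bits
print(halved_dimension_exponent(55, 10))  # 65 bits after 2^10 guesses
```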

    The Convergence of Slide-type Reductions

    In this work we apply the dynamical systems analysis of Hanrot et al. (CRYPTO '11) to a class of lattice block reduction algorithms that includes (natural variants of) slide reduction and block-Rankin reduction. This implies sharper bounds on the polynomial running times (in the query model) for these algorithms and opens the door to faster practical variants of slide reduction. We give heuristic arguments showing that such variants can indeed speed up slide reduction significantly in practice. This is confirmed by experimental evidence, which also shows that our variants are competitive with state-of-the-art reduction algorithms.
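
    The dynamical-systems viewpoint can be illustrated with a toy iteration (entirely hypothetical; the update rule below is a crude stand-in for block reduction, not the analysis of Hanrot et al.): each tour re-shapes every local block of the log-profile while preserving its volume, and one studies the fixed point the iteration converges to.

```python
def tour(profile, beta, local_slope):
    """One 'tour' of a toy block-reduction dynamical system: every
    window of beta consecutive log-norms is replaced by a straight
    line of the given slope through the window's mean, so each
    window's sum (its log-volume) is preserved."""
    x = list(profile)
    for i in range(len(x) - beta + 1):
        mean = sum(x[i:i + beta]) / beta
        x[i:i + beta] = [mean + local_slope * (j - (beta - 1) / 2.0)
                         for j in range(beta)]
    return x

# Hypothetical run: a flat 60-dim profile, block size 10, slope -0.05.
x = [0.0] * 60
for _ in range(50):
    x = tour(x, 10, -0.05)
print(round(x[0], 3), round(sum(x), 3))  # head grows; total stays ~0
```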

    On the concrete hardness of Learning with Errors

    The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and propose a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.
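
    As an example of the kind of refined lattice-reduction model such estimates rely on, here is a small Python sketch using Chen's widely used heuristic for the root Hermite factor achieved by BKZ with block size beta; the sieving cost exponent 0.292 is an assumption borrowed from later work, included only to show how a cost model plugs in.

```python
import math

def delta_of_beta(beta):
    """Chen's heuristic for the root Hermite factor achieved by BKZ
    with block size beta (meaningful for beta of roughly 50 and up)."""
    return ((beta / (2 * math.pi * math.e))
            * (math.pi * beta) ** (1.0 / beta)) ** (1.0 / (2.0 * (beta - 1)))

def beta_for_delta(target):
    """Smallest block size whose predicted root Hermite factor
    reaches the target."""
    beta = 50
    while delta_of_beta(beta) > target:
        beta += 1
    return beta

# Which block size reaches delta = 1.007, and what would a sieving
# cost model of 2^(0.292*beta) (an assumption) charge per SVP call?
beta = beta_for_delta(1.007)
print(beta, 0.292 * beta)
```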

    Improving Convergence and Practicality of Slide-type Reductions

    The best lattice reduction algorithm known in theory for approximating the Shortest Vector Problem (SVP) over lattices is the slide reduction algorithm (STOC '08 & CRYPTO '20). In this paper, we first improve the running time analysis of computing slide-reduced bases based on potential functions. This analysis applies to a generic slide reduction algorithm that includes (natural variants of) slide reduction and block-Rankin reduction (ANTS '14). We then present a rigorous dynamic analysis of generic slide reduction using techniques originally applied to a variant of BKZ (CRYPTO '11). This provides guarantees on the quality of the current lattice basis during execution. This dynamic analysis not only implies sharper convergence for these algorithms to find a short nonzero vector (rather than a fully reduced basis), but also allows us to heuristically model and trace the practical behaviour of slide reduction. Interestingly, this dynamic analysis inspires us to introduce a new slide reduction variant with better time/quality trade-offs. This is confirmed by both our experiments and simulation, which also show that our variant is competitive with state-of-the-art reduction algorithms. To the best of our knowledge, this work is the first attempt at improving the practical performance of slide reduction beyond speeding up the SVP oracle.
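
    A minimal sketch of the potential-function technique (the potential below is the classic LLL-style one, used here only to illustrate the idea, not the paper's exact potential): reduction steps never increase the potential, and each substantial step decreases it, which bounds the running time.

```python
def log_potential(log_norms):
    """Log of the classic basis potential prod_i vol(L_i), where L_i
    is spanned by the first i basis vectors: log||b_j*|| is counted
    once for every prefix that contains it."""
    n = len(log_norms)
    return sum((n - j) * x for j, x in enumerate(log_norms))

# A step that moves weight from an early Gram-Schmidt norm to a later
# one (as reduction steps do) keeps the total log-volume but strictly
# lowers the potential.
before = [2.0, 1.0, 0.5]
after = [1.5, 1.5, 0.5]
print(log_potential(before), log_potential(after))  # 8.5 vs 8.0
```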

    Practical, Predictable Lattice Basis Reduction

    Lattice reduction algorithms are notoriously hard to predict, both in terms of running time and output quality, which poses a major problem for cryptanalysis. While easy-to-analyze algorithms with good worst-case behavior exist, previous experimental evidence suggests that they are outperformed in practice by algorithms whose behavior is still not well understood, despite more than 30 years of intensive research. This has led to a situation where a rather complex simulation procedure seems to be the most common way to predict the result of their application to an instance. In this work we present new algorithmic ideas towards bridging this gap between theory and practice. We report on an extensive experimental study of several lattice reduction algorithms, both novel and from the literature, that shows that theoretical algorithms are in fact surprisingly practical and competitive. In light of our results we come to the conclusion that in order to predict lattice reduction, simulation is superfluous and can be replaced by a closed formula using weaker assumptions. One key technique to achieving this goal is a novel algorithm to solve the Shortest Vector Problem (SVP) in the dual without computing the dual basis. Our algorithm enjoys the same practical efficiency as the corresponding primal algorithm and can easily be added to an existing implementation of it.

    On the concrete hardness of Learning with Errors

    The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and use a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.

    Lattice Point Enumeration on Block Reduced Bases

    When analyzing lattice-based cryptosystems, we often need to solve the Shortest Vector Problem (SVP) in some lattice associated to the system under scrutiny. The go-to algorithms in practice to solve SVP are enumeration algorithms, which usually consist of a preprocessing step followed by an exhaustive search. Obviously, the two steps offer a trade-off and should be balanced in their running time in order to minimize the overall complexity. In practice, the most common approach to control this trade-off is to use block reduction algorithms during the preprocessing. Despite the popularity of this approach, it lacks any well-founded analysis, and all practical approaches seem to use ad hoc parameters. This weakens our confidence in the cryptanalysis of the systems. In this work, we aim to shed light on at least one side of this trade-off and analyze the effect of block reduction on the exhaustive search. For this, we give asymptotic worst-case bounds and present results from both experiments and simulation that show its average-case behavior in practice.
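
    The preprocessing/enumeration trade-off can be made concrete with a toy cost balance; all cost functions below are hypothetical stand-ins chosen only to show the shape of the optimization, not the models analyzed in the paper.

```python
import math

def toy_preprocess_cost(beta):
    """Stand-in cost, in bits, of block-beta preprocessing
    (0.292*beta mimics a sieving-style SVP cost; purely illustrative)."""
    return 0.292 * beta + 16

def toy_enum_cost(n, beta):
    """Stand-in cost, in bits, of enumeration after BKZ-beta
    preprocessing: a larger beta flattens the basis profile and
    shrinks the search tree. Constants are illustrative only."""
    return n * n / (50.0 * beta) * math.log2(beta)

def best_blocksize(n):
    """Pick the block size balancing the two phases, i.e. minimizing
    the larger of the two bit-costs."""
    costs = {b: max(toy_preprocess_cost(b), toy_enum_cost(n, b))
             for b in range(60, n)}
    return min(costs, key=costs.get)

print(best_blocksize(180))  # interior optimum: neither phase dominates
```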

    Parameter selection in lattice-based cryptography
