
    The nearest-colattice algorithm

    In this work, we exhibit a hierarchy of polynomial-time algorithms solving approximate variants of the Closest Vector Problem (CVP). Our first contribution is a heuristic algorithm achieving the same distance tradeoff as HSVP algorithms, namely $\approx \beta^{\frac{n}{2\beta}}\,\mathrm{covol}(\Lambda)^{\frac{1}{n}}$ for a random lattice $\Lambda$ of rank $n$. Compared to the so-called Kannan's embedding technique, our algorithm allows using precomputations and can be used for efficient batch CVP instances. This implies that some attacks on lattice-based signatures lead to very cheap forgeries, after a precomputation. Our second contribution is a proven reduction from approximating the closest vector to within a factor $\approx n^{\frac{3}{2}}\beta^{\frac{3n}{2\beta}}$ to the Shortest Vector Problem (SVP) in dimension $\beta$. Comment: 19 pages, presented at the Algorithmic Number Theory Symposium (ANTS 2020).
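
    As a quick numerical illustration of the quoted tradeoff, the minimal Python sketch below evaluates the heuristic distance bound $\beta^{\frac{n}{2\beta}}\,\mathrm{covol}(\Lambda)^{\frac{1}{n}}$ for chosen parameters. The function name and the example values (rank 512, block size 60, unit covolume) are illustrative assumptions, not taken from the paper.

        def hsvp_cvp_distance(n, beta, covol):
            """Heuristic distance beta^(n/(2*beta)) * covol^(1/n) reached by the
            HSVP-style tradeoff quoted above, for a random rank-n lattice."""
            return beta ** (n / (2 * beta)) * covol ** (1.0 / n)

        # illustrative parameters, not from the paper
        print(hsvp_cvp_distance(512, 60, 1.0))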

    Decoding by Embedding: Correct Decoding Radius and DMT Optimality

    The closest vector problem (CVP) and the shortest (nonzero) vector problem (SVP) are the core algorithmic problems on Euclidean lattices. They are central to the applications of lattices in many problems of communications and cryptography. Kannan's \emph{embedding technique} is a powerful technique for solving the approximate CVP, yet its remarkable practical performance is not well understood. In this paper, the embedding technique is analyzed from a \emph{bounded distance decoding} (BDD) viewpoint. We present two complementary analyses of the embedding technique: we establish a reduction from BDD to Hermite SVP (via unique SVP), which can be used along with any Hermite SVP solver (including, among others, the Lenstra-Lenstra-Lovász (LLL) algorithm), and we show that, in the special case of LLL, it performs at least as well as Babai's nearest plane algorithm (LLL-aided SIC). The former analysis helps to explain the folklore practical observation that unique SVP is easier than standard approximate SVP. It is proven that when the LLL algorithm is employed, the embedding technique can solve the CVP provided that the noise norm is smaller than a decoding radius $\lambda_1/(2\gamma)$, where $\lambda_1$ is the minimum distance of the lattice and $\gamma \approx O(2^{n/4})$. This substantially improves the previously best known correct decoding bound $\gamma \approx O(2^{n})$. Focusing on the applications of BDD to decoding of multiple-input multiple-output (MIMO) systems, we also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff (DMT), and we propose practical variants of embedding decoding which require no knowledge of the minimum distance of the lattice and/or further improve the error performance. Comment: To appear in IEEE Transactions on Information Theory.
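
    The embedding technique analyzed here is simple to sketch: append the target vector to the basis with one extra embedding coordinate and reduce the resulting $(n+1)$-dimensional lattice. The Python sketch below assumes the fpylll package (its IntegerMatrix and LLL.reduction interface), an integer basis and target, and the illustrative choice M = 1; the function name and the scan for a reduced row ending in ±M are heuristic illustrations, not the paper's exact procedure.

        from fpylll import IntegerMatrix, LLL

        def embedding_cvp(basis_rows, target, M=1):
            """Kannan-style embedding sketch: basis_rows is a list of n integer
            basis vectors of length n, target an integer vector of length n.
            Builds the (n+1)-dimensional lattice with rows (b_i, 0) and (t, M),
            LLL-reduces it, and reads off an approximate closest vector."""
            n = len(basis_rows)
            rows = [list(b) + [0] for b in basis_rows]   # original basis, padded with 0
            rows.append(list(target) + [M])              # embedding row (t, M)
            A = IntegerMatrix.from_matrix(rows)
            LLL.reduction(A)
            # Heuristically, some reduced row equals +/-(t - v, M) for a nearby lattice vector v.
            for i in range(n + 1):
                last = A[i, n]
                if abs(last) == M:
                    sign = 1 if last == M else -1
                    e = [sign * A[i, j] for j in range(n)]        # candidate error vector t - v
                    return [tj - ej for tj, ej in zip(target, e)]
            return None  # no suitable row; can happen when the noise is too large

    In the BDD regime analyzed in the paper, this kind of recovery is guaranteed once the noise norm is below $\lambda_1/(2\gamma)$.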

    A Fast Phase-Based Enumeration Algorithm for SVP Challenge through y-Sparse Representations of Short Lattice Vectors

    In this paper, we propose a new phase-based enumeration algorithm based on two interesting and useful observations on y-sparse representations of short lattice vectors in lattices from the SVP challenge benchmarks. Experimental results show that the phase-based algorithm greatly outperforms other well-known enumeration algorithms, such as the Kannan-Helfrich enumeration algorithm, in running time, and that it reaches higher dimensions. The phase-based algorithm is therefore a practically excellent solver for the shortest vector problem (SVP).

    Second order statistical behavior of LLL and BKZ

    The LLL algorithm (from Lenstra, Lenstra and Lovász) and its generalization BKZ (from Schnorr and Euchner) are widely used in cryptanalysis, especially against lattice-based cryptography. Precisely understanding their behavior is crucial for deriving appropriate key sizes for cryptographic schemes subject to lattice-reduction attacks. Current models, e.g. the Geometric Series Assumption and Chen-Nguyen's BKZ simulator, have provided a decent first-order analysis of the behavior of LLL and BKZ. However, they only focus on the average behavior and are not perfectly accurate. In this work, we initiate a second-order analysis of this behavior. We confirm and quantify discrepancies between models and experiments, in particular in the head and tail regions, and study their consequences. We also provide statistics on the variations around the mean and on correlations, and study their impact. While mostly based on experiments, by pointing at and quantifying unaccounted-for phenomena, our study sets the ground for a theoretical and predictive understanding of the performance of LLL and BKZ at the second order.
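
    Since the second-order study measures deviations from first-order models, a convenient baseline is the profile predicted by the Geometric Series Assumption. The numpy sketch below computes that predicted log Gram-Schmidt profile; the function name and the example root Hermite factor of 1.02 (roughly LLL quality) are illustrative assumptions, and experimental profiles would be compared against it, e.g. in the head and tail regions.

        import numpy as np

        def gsa_profile(n, log_volume, delta0):
            """Predicted log ||b_i*|| under the Geometric Series Assumption for a
            rank-n basis of a lattice with the given log-volume and root Hermite
            factor delta0: a straight line whose entries sum to log_volume."""
            i = np.arange(n)
            return (n - 1 - 2 * i) * np.log(delta0) + log_volume / n

        # illustrative: rank 100, unit-volume lattice, LLL-like root Hermite factor
        profile = gsa_profile(100, 0.0, 1.02)
        print(profile[:5])   # predicted log-norms at the head of the basis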

    New Public-Key Crypto-System EHT

    In this note, an LWE problem with a hidden trapdoor is introduced. It is used to construct an efficient public-key crypto-system, EHT. The new system is significantly different from LWE-based NIST candidates such as FrodoKEM. The performance of EHT compares favorably with that of FrodoKEM.
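
    For context, a generic LWE instance (with no trapdoor of any kind) can be generated as follows. This numpy sketch is not the EHT construction, whose hidden trapdoor is not described in the abstract; the function name and all parameters are illustrative.

        import numpy as np

        def lwe_instance(n=64, m=128, q=3329, sigma=3.2, seed=None):
            """Generic LWE instance (A, b = A s + e mod q); illustrative parameters
            only, not the EHT trapdoor construction."""
            rng = np.random.default_rng(seed)
            A = rng.integers(0, q, size=(m, n))                     # uniform public matrix
            s = rng.integers(0, q, size=n)                          # secret vector
            e = np.rint(rng.normal(0, sigma, size=m)).astype(int)   # small Gaussian noise
            b = (A @ s + e) % q
            return A, b, s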

    PotLLL: a polynomial time version of LLL with deep insertions

    Lattice reduction algorithms have numerous applications in number theory and algebra, as well as in cryptanalysis. The most famous algorithm for lattice reduction is the LLL algorithm, which computes, in polynomial time, a reduced basis with provable output quality. One early improvement of the LLL algorithm was LLL with deep insertions (DeepLLL). The output of this version of LLL has higher quality in practice, but the running time seems to explode. Weaker variants of DeepLLL, in which the insertions are restricted to blocks, behave nicely in practice with respect to the running time; however, no proof of polynomial running time is known. In this paper, PotLLL, a new variant of DeepLLL with provably polynomial running time, is presented. We compare the practical behavior of the new algorithm to classical LLL, BKZ, and blockwise variants of DeepLLL, with respect to both output quality and running time.
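
    The deep insertion step that distinguishes DeepLLL from plain LLL can be sketched as follows. This is the classical Schnorr-Euchner insertion test, not PotLLL's potential-based criterion (which the paper introduces), and the Gram-Schmidt data is naively recomputed from scratch purely for readability.

        import numpy as np

        def deep_insertion_index(B, k, delta=0.99):
            """Classical DeepLLL insertion test (illustrative; not PotLLL's rule).
            B: matrix whose rows are basis vectors; k: index of the candidate vector.
            Returns the smallest i < k with delta * ||b_i*||^2 > ||pi_i(B[k])||^2,
            i.e. the position where B[k] would be deep-inserted, or k if none."""
            # Naive Gram-Schmidt of the first k rows, recomputed for clarity.
            Bstar = []
            for i in range(k):
                b = B[i].astype(float).copy()
                for bs in Bstar:
                    b -= (np.dot(B[i], bs) / np.dot(bs, bs)) * bs
                Bstar.append(b)
            c = float(np.dot(B[k], B[k]))   # squared norm of pi_0(B[k]) = B[k]
            for i in range(k):
                if delta * np.dot(Bstar[i], Bstar[i]) > c:
                    return i                # inserting B[k] here would shorten b_i*
                c -= np.dot(B[k], Bstar[i]) ** 2 / np.dot(Bstar[i], Bstar[i])
            return k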

    A sieve algorithm based on overlattices

    In this paper, we present a heuristic algorithm for solving exact, as well as approximate, shortest vector and closest vector problems on lattices. The algorithm can be seen as a modified sieving algorithm in which the vectors of the intermediate sets lie in overlattices or translated cosets of overlattices. The key idea is hence no longer to work with a single lattice but to move the problems around in a tower of related lattices. We initiate the algorithm by sampling very short vectors in an overlattice of the original lattice that admits a quasi-orthonormal basis and hence an efficient enumeration of vectors of bounded norm. Taking sums of vectors in the sample, we construct short vectors in the next lattice. Finally, we obtain solution vector(s) in the initial lattice as a sum of vectors of an overlattice. The complexity analysis relies on the Gaussian heuristic. This heuristic is backed by experiments in low and high dimensions that closely reflect these estimates when solving hard lattice problems in the average case. This new approach allows us to solve not only shortest vector problems, but also closest vector problems, in lattices of dimension $n$ in time $2^{0.3774n}$ using memory $2^{0.2925n}$. Moreover, the algorithm is straightforward to parallelize on most computer architectures.
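
    The combination step at the heart of the tower can be illustrated generically: vectors of the overlattice that lie in the same coset modulo the next lattice are collided, so that their difference (or, as in the paper's translated-coset variant, a suitable sum) lands in that next lattice. The numpy sketch below assumes a user-supplied coset_label function and is only an illustration of this collision idea, not the paper's full sieve.

        import numpy as np
        from collections import defaultdict

        def descend_one_level(vectors, coset_label):
            """Bucket overlattice vectors by their coset modulo the next lattice in
            the tower and combine vectors sharing a coset; each difference then lies
            in that next lattice.  coset_label must return a hashable label."""
            buckets = defaultdict(list)
            for v in vectors:
                buckets[coset_label(v)].append(v)
            combined = []
            for same_coset in buckets.values():
                rep = same_coset[0]
                for w in same_coset[1:]:
                    d = w - rep                 # difference of two vectors in one coset
                    if np.any(d):               # keep nonzero vectors only
                        combined.append(d)
            return combined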

    Bounding basis reduction properties

    The paper describes improved analysis techniques for basis reduction that allow one to prove strong complexity bounds and reduced-basis guarantees for traditional reduction algorithms and some of their variants. This is achieved by a careful exploitation of the linear equations and inequalities relating various bit sizes before and after one or more reduction steps.