
    PotLLL: A Polynomial Time Version of LLL With Deep Insertions

    Lattice reduction algorithms have numerous applications in number theory, algebra, and cryptanalysis. The most famous lattice reduction algorithm is the LLL algorithm, which computes a reduced basis with provable output quality in polynomial time. One early improvement of the LLL algorithm was LLL with deep insertions (DeepLLL). In practice this variant produces bases of higher quality, but its running time appears to grow much faster than that of LLL. Weaker variants of DeepLLL, where the insertions are restricted to blocks, behave well in practice with respect to running time, but no proof of polynomial running time is known for them. In this paper PotLLL, a new variant of DeepLLL with provably polynomial running time, is presented. We compare the practical behavior of the new algorithm to classical LLL, BKZ, and blockwise variants of DeepLLL with respect to both output quality and running time. Comment: 17 pages, 8 figures; extended version of arXiv:1212.5100 [cs.CR].
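    As context for what LLL itself does (the baseline that DeepLLL and PotLLL refine), here is a minimal textbook-style Python sketch of classical LLL reduction with parameter delta = 3/4; it recomputes the Gram-Schmidt data at each step for clarity, is not code from the paper, and does not show the deep-insertion or potential-based steps that distinguish DeepLLL and PotLLL.

        from fractions import Fraction

        def gram_schmidt(B):
            # Exact Gram-Schmidt orthogonalization over the rationals.
            # Returns the orthogonal vectors B* and the coefficients mu[i][j].
            n = len(B)
            Bs = [[Fraction(x) for x in b] for b in B]
            mu = [[Fraction(0)] * n for _ in range(n)]
            for i in range(n):
                for j in range(i):
                    mu[i][j] = (sum(Fraction(B[i][k]) * Bs[j][k] for k in range(len(B[i])))
                                / sum(x * x for x in Bs[j]))
                    Bs[i] = [Bs[i][k] - mu[i][j] * Bs[j][k] for k in range(len(Bs[i]))]
            return Bs, mu

        def lll(B, delta=Fraction(3, 4)):
            # Classical LLL reduction of an integer basis B (list of row vectors).
            B = [list(b) for b in B]
            n = len(B)
            k = 1
            while k < n:
                # Size-reduce b_k against b_{k-1}, ..., b_0 (mu recomputed for clarity).
                for j in range(k - 1, -1, -1):
                    _, mu = gram_schmidt(B)
                    q = round(mu[k][j])
                    if q != 0:
                        B[k] = [B[k][i] - q * B[j][i] for i in range(len(B[k]))]
                Bs, mu = gram_schmidt(B)
                # Lovasz condition: ||b*_k||^2 >= (delta - mu_{k,k-1}^2) * ||b*_{k-1}||^2.
                if sum(x * x for x in Bs[k]) >= (delta - mu[k][k - 1] ** 2) * sum(x * x for x in Bs[k - 1]):
                    k += 1
                else:
                    B[k - 1], B[k] = B[k], B[k - 1]   # swap and backtrack
                    k = max(k - 1, 1)
            return B

    For example, lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]) returns a basis of the same lattice consisting of shorter, nearly orthogonal vectors. DeepLLL differs in that a vector failing the condition may be inserted several positions earlier in the basis rather than only swapped with its neighbor.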

    Probabilistic Analysis of LLL Reduced Bases

    LLL reduction, originally introduced in 1982 to factor certain polynomials, is a useful tool in public-key cryptanalysis. The search for short lattice vectors helps to determine the practical hardness of lattice problems, which are believed to be hard even for quantum computers. In practice, the LLL algorithm finds much shorter vectors than its theoretical analysis guarantees, so the guaranteed worst-case bounds are of little help for practical purposes. We use a probabilistic approach to give an estimate for the length of the shortest vector in an LLL-reduced basis that is tighter than the worst-case bounds.
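    For reference (standard facts about LLL, not results of this paper): for an LLL-reduced basis with the usual parameter $\delta = 3/4$, the worst-case guarantees are
    \[ \|b_1\| \le 2^{(n-1)/2}\,\lambda_1(L) \qquad \text{and} \qquad \|b_1\| \le 2^{(n-1)/4}\,(\det L)^{1/n}, \]
    whereas experiments (for example, Nguyen and Stehlé's average-case study of LLL) suggest that on typical inputs the first basis vector behaves roughly like $\|b_1\| \approx 1.02^{\,n}\,(\det L)^{1/n}$, far below the exponential worst-case factor. Probabilistic estimates of the kind described above aim to capture this gap.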

    Decoding by Sampling: A Randomized Lattice Algorithm for Bounded Distance Decoding

    Despite its reduced complexity, lattice reduction-aided decoding exhibits a widening gap to maximum-likelihood (ML) performance as the dimension increases. To improve its performance, this paper presents randomized lattice decoding based on Klein's sampling technique, which is a randomized version of Babai's nearest plane algorithm (i.e., successive interference cancellation (SIC)). To find the closest lattice point, Klein's algorithm is used to sample some lattice points, and the closest among those samples is chosen. Lattice reduction increases the probability of finding the closest lattice point and only needs to be run once during pre-processing. Furthermore, the sampling can operate very efficiently in parallel. The technical contribution of this paper is two-fold: we analyze and optimize the decoding radius of sampling decoding, resulting in better error performance than Klein's original algorithm, and we propose a very efficient implementation of randomized rounding. Of particular interest is that a fixed gain in the decoding radius compared to Babai's decoding can be achieved at polynomial complexity. The proposed decoder is useful for moderate dimensions where sphere decoding becomes computationally intensive while lattice reduction-aided decoding starts to suffer considerable loss. Simulation results demonstrate that near-ML performance is achieved with a moderate number of samples, even when the dimension is as high as 32.
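    As a rough illustration of the two decoders being compared, the sketch below contrasts Babai's nearest-plane (SIC) rounding with a Klein-style randomized rounding step; the function names, the candidate window, and the parameter s are illustrative choices, not the paper's construction.

        import math
        import random

        import numpy as np

        def gram_schmidt(B):
            # Rows of B are basis vectors; returns the Gram-Schmidt vectors as rows.
            Bs = np.array(B, dtype=float)
            for i in range(Bs.shape[0]):
                for j in range(i):
                    Bs[i] -= (np.dot(Bs[i], Bs[j]) / np.dot(Bs[j], Bs[j])) * Bs[j]
            return Bs

        def nearest_plane(B, t, randomize=False, s=1.0):
            # Babai's nearest-plane decoder (randomize=False) or a Klein-style
            # randomized-rounding variant (randomize=True); returns a lattice point.
            B = np.array(B, dtype=float)
            Bs = gram_schmidt(B)
            t = np.array(t, dtype=float)
            v = np.zeros_like(t)
            for i in range(B.shape[0] - 1, -1, -1):
                c = np.dot(t, Bs[i]) / np.dot(Bs[i], Bs[i])
                if randomize:
                    # Sample an integer near c with Gaussian-like weights; s stands in
                    # for the per-level deviation used by Klein's sampler.
                    cand = [math.floor(c) + d for d in range(-2, 4)]
                    w = [math.exp(-math.pi * (z - c) ** 2 / s ** 2) for z in cand]
                    z = random.choices(cand, weights=w)[0]
                else:
                    z = round(c)                      # deterministic Babai rounding
                t = t - z * B[i]
                v = v + z * B[i]
            return v

    Calling nearest_plane repeatedly with randomize=True and keeping the sample closest to the original target mimics, in spirit, the sampling decoder described above.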

    On the Proximity Factors of Lattice Reduction-Aided Decoding

    Lattice reduction-aided decoding features reduced decoding complexity and near-optimum performance in multi-input multi-output communications. In this paper, a quantitative analysis of lattice reduction-aided decoding is presented. To this end, proximity factors are defined to measure the worst-case losses in distance relative to closest point search (in an infinite lattice). Upper bounds on the proximity factors are derived, which are functions of the dimension $n$ of the lattice alone. The study is then extended to dual-basis reduction, for which the bounds may be smaller. Reasonably good bounds are derived in many cases. The constant bounds on the proximity factors not only imply the same diversity order in fading channels, but also relate the error probabilities of (infinite) lattice decoding and lattice reduction-aided decoding. Comment: removed a redundant figure.
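    To make "worst-case loss in distance" concrete, here is one natural formalization, offered as an illustration rather than the paper's exact definition: the nearest-plane (SIC) decoder is guaranteed to decode correctly whenever the noise is shorter than half the smallest Gram-Schmidt norm of the basis it uses, so a proximity factor for SIC over a given reduction algorithm can be taken as
    \[ F_{\mathrm{SIC}} \;=\; \sup_{B} \frac{\lambda_1(L)^2}{\min_{1 \le i \le n} \|b_i^*\|^2}, \]
    where the supremum runs over the reduced bases $B$ the algorithm can output for $n$-dimensional lattices $L$. A bound on $F_{\mathrm{SIC}}$ depending only on $n$ then converts a guarantee for closest point search into one for the reduction-aided decoder, which is how constant bounds of this kind transfer diversity-order and error-probability statements.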

    Decoding by Embedding: Correct Decoding Radius and DMT Optimality

    The closest vector problem (CVP) and the shortest (nonzero) vector problem (SVP) are the core algorithmic problems on Euclidean lattices. They are central to the applications of lattices in many problems of communications and cryptography. Kannan's embedding technique is a powerful technique for solving the approximate CVP, yet its remarkable practical performance is not well understood. In this paper, the embedding technique is analyzed from a bounded distance decoding (BDD) viewpoint. We present two complementary analyses of the embedding technique: we establish a reduction from BDD to Hermite SVP (via unique SVP), which can be used along with any Hermite SVP solver (including, among others, the Lenstra, Lenstra and Lovász (LLL) algorithm), and we show that, in the special case of LLL, it performs at least as well as Babai's nearest plane algorithm (LLL-aided SIC). The former analysis helps to explain the folklore practical observation that unique SVP is easier than standard approximate SVP. It is proven that when the LLL algorithm is employed, the embedding technique can solve the CVP provided that the noise norm is smaller than a decoding radius $\lambda_1/(2\gamma)$, where $\lambda_1$ is the minimum distance of the lattice and $\gamma \approx O(2^{n/4})$. This substantially improves the previously best known correct decoding bound $\gamma \approx O(2^{n})$. Focusing on the applications of BDD to decoding of multiple-input multiple-output (MIMO) systems, we also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff (DMT), and propose practical variants of embedding decoding which require no knowledge of the minimum distance of the lattice and/or further improve the error performance. Comment: To appear in IEEE Transactions on Information Theory.
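    To make the embedding construction concrete, here is a small illustrative Python sketch; the function and parameter names are mine, reduce_basis stands for any lattice reduction or SVP routine (such as LLL), and the default embedding factor M is a common heuristic rather than the paper's prescription.

        import numpy as np

        def embed_and_decode(B, t, reduce_basis, M=1):
            # Approximate CVP via Kannan's embedding: append the (integer) target t
            # to the basis with an extra coordinate M, reduce the enlarged lattice
            #     [ B  0 ]
            #     [ t  M ]
            # and read the error vector off a short vector whose last entry is +/- M.
            B = np.asarray(B, dtype=int)
            t = np.asarray(t, dtype=int)
            n, m = B.shape
            Bp = np.zeros((n + 1, m + 1), dtype=int)
            Bp[:n, :m] = B
            Bp[n, :m] = t
            Bp[n, m] = M
            reduced = np.asarray(reduce_basis(Bp))
            for row in reduced:
                if abs(row[m]) == M:
                    e = row[:m] if row[m] == M else -row[:m]
                    return t - e        # candidate lattice point close to t
            return None                 # no usable short vector was found

    If v is the lattice point closest to t and e = t - v is short, then (e, M) lies in the enlarged lattice, so a good reduction of Bp tends to expose it; M is often chosen near the expected noise norm (again a heuristic, not the paper's rule).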

    Algorithms for the approximate common divisor problem

    The security of several homomorphic encryption schemes depends on the hardness of the Approximate Common Divisor (ACD) problem. In this paper we review and compare existing algorithms for solving the ACD problem using lattices. In particular, we consider the simultaneous Diophantine approximation method, the orthogonal lattice method, and a method based on multivariate polynomials and Coppersmith's algorithm that was studied in detail by Cohn and Heninger. One of our main goals is to compare the multivariate polynomial approach with the other methods. We find that the multivariate polynomial approach is not better than the orthogonal lattice algorithm for practical cryptanalysis. Another contribution is to consider a sample-amplification technique for ACD samples, and a pre-processing algorithm similar to the Blum-Kalai-Wasserman (BKW) algorithm for learning parity with noise. We explain why, unlike in other settings, the BKW algorithm does not give an improvement over the lattice algorithms. This is the full version of a paper published at ANTS-XII in 2016.
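    For concreteness, the simultaneous Diophantine approximation method mentioned above is commonly set up along the following lines (a standard construction stated in my own notation; the paper's exact scaling may differ). Given ACD samples $x_i = p q_i + r_i$ with noise bounded by $|r_i| < 2^{\rho}$, one reduces the lattice generated by the rows of
    \[
    \begin{pmatrix}
    2^{\rho+1} & x_1 & x_2 & \cdots & x_t \\
               & -x_0 &      &        &      \\
               &      & -x_0 &        &      \\
               &      &      & \ddots &      \\
               &      &      &        & -x_0
    \end{pmatrix}.
    \]
    The integer combination with coefficients $(q_0, q_1, \dots, q_t)$ is the vector $(q_0 2^{\rho+1},\, q_0 x_1 - q_1 x_0,\, \dots,\, q_0 x_t - q_t x_0)$, whose last $t$ entries equal $q_0 r_i - q_i r_0$ and are therefore small; if lattice reduction finds this short vector, $q_0$ can be read off the first coordinate and $p$ recovered as the nearest integer to $x_0 / q_0$.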

    Accelerating lattice reduction with FPGAs

    We describe an FPGA accelerator for the Kannan–Fincke–Pohst enumeration algorithm (KFP) solving the Shortest Lattice Vector Problem (SVP). This is the first FPGA implementation of KFP specifically targeting cryptographically relevant dimensions. In order to optimize this implementation, we study several facets of KFP both theoretically and experimentally, including its efficient parallelization and its underlying arithmetic. Our FPGA accelerator can be used either for solving stand-alone instances of SVP (within a hybrid CPU–FPGA compound) or for the myriads of smaller-dimensional SVP instances arising in a BKZ-type algorithm. For devices of comparable cost, our FPGA implementation is faster than a multi-core CPU implementation by a factor of around 2.12.
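    For orientation, the enumeration being accelerated can be sketched in software roughly as follows: a plain recursive Python rendering of Fincke-Pohst style enumeration over Gram-Schmidt data, meant only to illustrate the underlying search, not the FPGA design or the authors' optimizations.

        import math

        def svp_enumerate(mu, bstar_sq, radius_sq):
            # Recursive Fincke-Pohst style enumeration.
            # mu[i][j] (for j < i) are the Gram-Schmidt coefficients of the basis and
            # bstar_sq[i] the squared Gram-Schmidt norms.  Returns the integer
            # coefficient vector (w.r.t. the basis) of a shortest nonzero lattice
            # vector of squared norm below radius_sq, or None if there is none.
            n = len(bstar_sq)
            best = [radius_sq, None]   # best squared norm found so far, its coefficients

            def recurse(i, partial_sq, u):
                if i < 0:
                    if 0 < partial_sq < best[0]:
                        best[0], best[1] = partial_sq, list(u)
                    return
                # Offset at level i induced by the coefficients already fixed above it.
                c = sum(u[j] * mu[j][i] for j in range(i + 1, n))
                bound = math.sqrt(max(best[0] - partial_sq, 0.0) / bstar_sq[i])
                for ui in range(math.ceil(-c - bound), math.floor(-c + bound) + 1):
                    u[i] = ui
                    recurse(i - 1, partial_sq + (ui + c) ** 2 * bstar_sq[i], u)
                u[i] = 0

            recurse(n - 1, 0.0, [0] * n)
            return best[1]

    The running time is dominated by the number of nodes visited in this search tree, which is why parallelizing independent subtrees and tuning the per-node arithmetic, the two facets studied in the paper, pay off.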