
    PotLLL: A Polynomial Time Version of LLL With Deep Insertions

    Lattice reduction algorithms have numerous applications in number theory, algebra, and cryptanalysis. The most famous lattice reduction algorithm is LLL, which computes a reduced basis with provable output quality in polynomial time. One early improvement of the LLL algorithm was LLL with deep insertions (DeepLLL). In practice, the output of this variant has higher quality, but its running time seems to explode. Weaker variants of DeepLLL, in which insertions are restricted to blocks, behave well in practice with respect to running time; however, no proof of polynomial running time is known for them. In this paper, PotLLL, a new variant of DeepLLL with provably polynomial running time, is presented. We compare the practical behavior of the new algorithm to classical LLL, BKZ, and blockwise variants of DeepLLL with respect to both output quality and running time. Comment: 17 pages, 8 figures; extended version of arXiv:1212.5100 [cs.CR]
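    The contrast drawn above between plain LLL (which swaps adjacent vectors) and DeepLLL (which inserts a vector at an earlier position) hinges on the Lovász condition. As a rough illustration, here is a minimal textbook-style LLL sketch (my own simplified code, not the paper's PotLLL; real implementations use exact or high-precision arithmetic rather than plain floats):

```python
def gram_schmidt(B):
    """Return GS vectors B* and coefficients mu with b_i = b_i* + sum_j mu[i][j] b_j*."""
    n = len(B)
    Bs = [list(b) for b in B]
    mu = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            d = sum(x * x for x in Bs[j])
            mu[i][j] = sum(x * y for x, y in zip(B[i], Bs[j])) / d
            Bs[i] = [x - mu[i][j] * y for x, y in zip(Bs[i], Bs[j])]
    return Bs, mu

def lll(B, delta=0.75):
    """Naive LLL reduction of an integer basis (rows of B)."""
    B = [list(map(float, b)) for b in B]
    n = len(B)
    k = 1
    while k < n:
        Bs, mu = gram_schmidt(B)
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        # Lovász condition: ||b_k*||^2 >= (delta - mu_{k,k-1}^2) ||b_{k-1}*||^2.
        lhs = sum(x * x for x in Bs[k])
        rhs = (delta - mu[k][k - 1] ** 2) * sum(x * x for x in Bs[k - 1])
        if lhs >= rhs:
            k += 1
        else:
            # Plain LLL swaps adjacent vectors; DeepLLL would instead insert
            # b_k at the earliest position where the condition fails.
            B[k], B[k - 1] = B[k - 1], B[k]
            k = max(k - 1, 1)
    return [[int(round(x)) for x in b] for b in B]
```

    For example, `lll([[1, 0], [100, 1]])` recovers the orthogonal basis `[[1, 0], [0, 1]]` of the same lattice.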

    Reduction algorithms for the cryptanalysis of lattice based asymmetrical cryptosystems

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2008. Includes bibliographical references (leaves: 79-91). Text in English; abstracts in Turkish and English. xi, 119 leaves. The theory of lattices has attracted a great deal of attention in cryptology in recent years. Several cryptosystems are constructed based on the hardness of lattice problems such as the shortest vector problem and the closest vector problem. The aim of this thesis is to study the most commonly used lattice basis reduction algorithms, namely the Lenstra-Lenstra-Lovász (LLL) and Block Korkine-Zolotarev (BKZ) algorithms, which are used to approximately solve the aforementioned lattice problems. Furthermore, the most popular practical variants of these algorithms are evaluated experimentally by varying the common reduction parameter delta, in order to propose some practical assessments of the effect of this parameter on the basis reduction process. Such practical assessments are believed to have a non-negligible impact on the theory of lattice reduction, and thus on the cryptanalysis of lattice cryptosystems, because the contemporary reduction process is mainly controlled by heuristics.

    Solving the Shortest Vector Problem in Lattices Faster Using Quantum Search

    By applying Grover's quantum search algorithm to the lattice algorithms of Micciancio and Voulgaris, Nguyen and Vidick, Wang et al., and Pujol and Stehlé, we obtain improved asymptotic quantum results for solving the shortest vector problem. With quantum computers we can provably find a shortest vector in time 2^{1.799n + o(n)}, improving upon the classical time complexity of 2^{2.465n + o(n)} of Pujol and Stehlé and the 2^{2n + o(n)} of Micciancio and Voulgaris, while heuristically we expect to find a shortest vector in time 2^{0.312n + o(n)}, improving upon the classical time complexity of 2^{0.384n + o(n)} of Wang et al. These quantum complexities will be an important guide for the selection of parameters for post-quantum cryptosystems based on the hardness of the shortest vector problem. Comment: 19 pages
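    Ignoring the o(n) terms, the exponents quoted above already support back-of-envelope comparisons. A tiny sketch (constants taken from the abstract; purely indicative, since o(n) terms and memory costs are dropped):

```python
# Leading-order time exponents (base 2, per lattice dimension n) from the abstract.
PROVABLE_CLASSICAL = 2.465    # Pujol and Stehle
PROVABLE_QUANTUM = 1.799      # via Grover search
HEURISTIC_CLASSICAL = 0.384   # Wang et al. (sieving)
HEURISTIC_QUANTUM = 0.312     # via Grover search

def log2_cost(exponent, n):
    """log2 of a running time of the form 2^{exponent * n}, o(n) term dropped."""
    return exponent * n

n = 128
saved_bits = log2_cost(HEURISTIC_CLASSICAL, n) - log2_cost(HEURISTIC_QUANTUM, n)
# At n = 128 the heuristic quantum sieve saves roughly (0.384 - 0.312) * 128 ~ 9.2
# bits of work: a reduction in the exponent's constant, which is why such figures
# guide parameter selection for SVP-based post-quantum cryptosystems.
```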

    A Greedy Global Framework for LLL

    LLL-style lattice reduction algorithms iteratively employ size reduction and reordering of an ordered list of basis vectors to find progressively shorter, more orthogonal vectors. These algorithms work with a designated measure of basis quality and perform reordering by inserting a vector in an earlier position depending on the basis quality before and after reordering. DeepLLL was introduced alongside the BKZ reduction algorithm; however, the latter has emerged as the state of the art and has therefore received greater attention. We first show that LLL-style algorithms iteratively improve a basis quality measure; specifically, that DeepLLL improves a sublattice measure based on the generalised Lovász condition. We then introduce a new generic framework for lattice reduction algorithms working with some quality measure X. We instantiate our framework with two quality measures - basis potential (Pot) and squared sum (SS) - both of which have corresponding DeepLLL algorithms. We prove polynomial runtimes for our X-GGLLL algorithms and guarantee their output quality. We run two types of experiments (implementations provided publicly) to compare the performance of LLL, X-DeepLLL, and X-GGLLL: with multi-precision arithmetic using overestimated floating-point precision for a standalone comparison with no preprocessing, and with standard datatypes using LLL-preprocessed inputs. In the preprocessed comparison, we also compare with BKZ. In the standalone comparison, our GGLLL algorithms produce better quality bases while being much faster than the corresponding DeepLLL versions. The runtime of SS-GGLLL is second only to LLL in our standalone comparison. SS-GGLLL is significantly faster than the FPLLL implementation of BKZ-12 at all dimensions and outputs better quality bases from dimension 100 onward.
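    The two quality measures named above can be written down concretely. Under one common convention (the paper may normalise differently), with Gram-Schmidt vectors b_i*, Pot(B) = prod_i ||b_i*||^{2(n-i+1)} and SS(B) = sum_i ||b_i*||^2, so Pot weights early vectors much more heavily while SS treats all positions equally. A minimal sketch:

```python
def gs_norms_squared(B):
    """Squared norms of the Gram-Schmidt vectors of basis B (plain floats)."""
    Bs = []
    for b in B:
        v = list(map(float, b))
        for u in Bs:  # project off the GS vectors computed so far
            c = sum(x * y for x, y in zip(v, u)) / sum(x * x for x in u)
            v = [x - c * y for x, y in zip(v, u)]
        Bs.append(v)
    return [sum(x * x for x in v) for v in Bs]

def potential(B):
    """Pot(B) = prod_i ||b_i*||^{2(n-i+1)} with 1-indexed i: early vectors dominate."""
    norms2 = gs_norms_squared(B)
    n = len(norms2)
    pot = 1.0
    for i, r in enumerate(norms2):  # i = 0 .. n-1, so the exponent is n - i
        pot *= r ** (n - i)
    return pot

def squared_sum(B):
    """SS(B) = sum_i ||b_i*||^2: position-independent."""
    return sum(gs_norms_squared(B))
```

    Moving short vectors to the front lowers Pot even when the multiset of GS norms barely changes, which is exactly the progress argument behind potential-based runtime proofs.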

    An efficient algorithm for integer lattice reduction

    A lattice of integers is the collection of all integer linear combinations of a set of vectors whose entries are all integers. Lattice reduction refers to the problem of finding a set of vectors in a given lattice such that the collection of all integer linear combinations of this subset is still the entire original lattice, and such that the Euclidean norms of the subset are reduced. The present paper proposes simple, efficient iterations for lattice reduction which are guaranteed to reduce the Euclidean norms of the basis vectors (the vectors in the subset) monotonically in every iteration. Each iteration selects the basis vector for which projecting off (with integer coefficients) the components of the other basis vectors along the selected vector minimizes the Euclidean norms of the reduced basis vectors. Each iteration then projects off the components along the selected basis vector and efficiently updates all information required for the next iteration to select its best basis vector and perform the associated projections. Comment: 29 pages, 20 figures
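    The iteration described above can be caricatured in a few lines. This is my own simplified reading of the scheme, without the paper's efficient update machinery: every step subtracts integer multiples of one basis vector from the others (a unimodular operation, so the lattice is preserved), choosing the vector whose use shrinks the basis most.

```python
def reduce_against(b, v):
    """Subtract the integer multiple of v that minimizes ||b - q*v||."""
    q = round(sum(x * y for x, y in zip(b, v)) / sum(x * x for x in v))
    return [x - q * y for x, y in zip(b, v)]

def greedy_reduce(B):
    """Greedily project other vectors off the best pivot until no norm improves."""
    B = [list(b) for b in B]
    improved = True
    while improved:
        improved = False
        best = None
        for i, v in enumerate(B):
            if not any(v):
                continue  # skip a zero pivot (degenerate input)
            cand = [reduce_against(b, v) if j != i else v
                    for j, b in enumerate(B)]
            cost = sum(sum(x * x for x in b) for b in cand)
            if best is None or cost < best[0]:
                best = (cost, cand)
        current = sum(sum(x * x for x in b) for b in B)
        if best is not None and best[0] < current:
            B = best[1]  # total squared norm strictly decreased: keep going
            improved = True
    return B
```

    Because the total squared norm is a positive integer that strictly decreases on every accepted step, the loop terminates, mirroring the paper's monotonicity guarantee.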

    Adaptive Lattice Reduction in MIMO Systems

    In multiple-input multiple-output (MIMO) systems, the use of lattice reduction methods such as the one proposed by Lenstra, Lenstra, and Lovász (LLL) significantly improves the performance of suboptimal detectors such as zero-forcing (ZF) and the zero-forcing decision feedback equalizer (ZF-DFE). Today's high-rate data communication demands faster lattice reduction methods. Taking advantage of the temporal correlation of a Rayleigh fading channel, a new method is proposed to reduce the complexity of lattice reduction. The proposed method achieves the same error performance as the original lattice reduction methods but significantly reduces the complexity of the lattice reduction algorithm. The proposed method can be used in any MIMO scenario, such as the MIMO detection and broadcast cases, which are studied in this work.
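    As context for why lattice reduction helps MIMO detection: reducing the channel matrix H to a more orthogonal basis H·T (with T unimodular) lets zero-forcing round in a well-conditioned basis. A toy 2x2 real-valued sketch with a hand-picked T (all numbers illustrative; in practice T comes from running LLL on H, and channels are complex-valued):

```python
def matvec2(M, v):
    """2x2 matrix-vector product."""
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

H = [[3.0, 2.0], [1.0, 1.0]]       # ill-conditioned channel (nearly parallel columns)
Hinv = [[1.0, -2.0], [-1.0, 3.0]]  # exact inverse (det H = 1)
T = [[1, -2], [-1, 3]]             # unimodular change of basis; here H @ T = identity

s = [1, 1]                         # transmitted integer symbols
noise = [0.2, -0.3]
y = [a + b for a, b in zip(matvec2(H, s), noise)]  # received signal

# Plain ZF: invert H and round -- noise is amplified along the skewed basis.
s_zf = [round(x) for x in matvec2(Hinv, y)]

# LR-aided ZF: equalize w.r.t. the reduced basis H @ T (identity here), round
# there, then map the integer estimate back through T.
z_hat = [round(x) for x in y]      # ZF w.r.t. H @ T = I is just rounding y
s_lr = matvec2(T, z_hat)
```

    With these numbers plain ZF mis-detects the symbols while the lattice-reduction-aided detector recovers them, which is the effect the abstract's error-performance claims rest on.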

    Quantum Hall Physics - hierarchies and CFT techniques

    The fractional quantum Hall effect, being one of the most studied phenomena in condensed matter physics during the past thirty years, has generated many groundbreaking new ideas and concepts. Very early on it was realized that the zoo of emerging states of matter would need to be understood in a systematic manner. The first attempts to do this, by Haldane and Halperin, set an agenda for further work which has continued to this day. Since that time the idea of hierarchies of quasiparticles condensing to form new states has been a pillar of our understanding of fractional quantum Hall physics. In the thirty years that have passed since then, a number of new directions of thought have advanced our understanding of fractional quantum Hall states and have extended it in new and unexpected ways. Among these directions are the extensive use of topological quantum field theories and conformal field theories, the application of the ideas of composite bosons and fermions, and the study of nonabelian quantum Hall liquids. This article aims to present a comprehensive overview of this field, including the most recent developments. Comment: added section on experimental status; 59 pages + references, 3 figures

    3nj Morphogenesis and Semiclassical Disentangling

    Recoupling coefficients (3nj symbols) are unitary transformations between binary coupled eigenstates of N = (n+1) mutually commuting SU(2) angular momentum operators. They have been used in a variety of applications in spectroscopy, quantum chemistry, and nuclear physics, and quite recently also in quantum gravity and quantum computing. These coefficients, naturally associated to cubic Yutsis graphs, share a number of intriguing combinatorial, algebraic, and analytical features that make them fascinating objects to be studied in their own right. In this paper we develop a bottom-up, systematic procedure for the generation of 3nj from 3(n-1)j diagrams by resorting to diagrammatical and algebraic methods. We also provide a novel approach to the problem of classifying the various regimes of semiclassical expansions of 3nj coefficients (asymptotic disentangling of 3nj diagrams) for n > 2 by means of combinatorial, analytical, and numerical tools.