
    A fully classical LLL algorithm for modules

    The celebrated LLL algorithm for Euclidean lattices is central to cryptanalysis of well-known and deployed protocols, as it provides approximate solutions to the Shortest Vector Problem (SVP). Recent interest in algebraically structured lattices (e.g., for the efficient implementation of lattice-based cryptography) has prompted adaptations of LLL to such structured lattices, and, in particular, to module lattices, i.e., lattices that are modules over algebraic ring extensions of the integers. One of these adaptations is a quantum algorithm proposed by Lee, Pellet-Mary, StehlĂŠ and Wallet (Asiacrypt 2019). In this work, we dequantize the algorithm of Lee et al. and provide a fully classical LLL-type algorithm for arbitrary module lattices that achieves the same SVP approximation factors, which are single-exponential in the rank of the input module. Just like the algorithm of Lee et al., our algorithm runs in polynomial time given an oracle that solves the Closest Vector Problem (CVP) in a certain fixed lattice L_K that depends only on the number field K.
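
    For orientation, the sketch below is a minimal textbook implementation of plain LLL over the integers, not the module variant discussed in this abstract: it uses exact rational arithmetic, the classical LovĂĄsz parameter delta = 3/4, and recomputes the Gram-Schmidt data at every step for clarity rather than speed.

        from fractions import Fraction

        def dot(u, v):
            return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

        def gram_schmidt(B):
            """Exact Gram-Schmidt orthogonalization: returns B* and the mu coefficients."""
            n = len(B)
            mu = [[Fraction(0)] * n for _ in range(n)]
            Bstar = []
            for i in range(n):
                v = [Fraction(x) for x in B[i]]
                for j in range(i):
                    mu[i][j] = dot(B[i], Bstar[j]) / dot(Bstar[j], Bstar[j])
                    v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bstar[j])]
                Bstar.append(v)
            return Bstar, mu

        def lll_reduce(B, delta=Fraction(3, 4)):
            """Textbook LLL reduction of an integer basis B, given as a list of rows."""
            B = [list(map(int, row)) for row in B]
            n = len(B)
            Bstar, mu = gram_schmidt(B)
            k = 1
            while k < n:
                for j in range(k - 1, -1, -1):          # size-reduce b_k against b_j
                    q = round(mu[k][j])
                    if q != 0:
                        B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                        Bstar, mu = gram_schmidt(B)     # keep mu exact and current
                # Lovasz condition on consecutive Gram-Schmidt vectors
                if dot(Bstar[k], Bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bstar[k - 1], Bstar[k - 1]):
                    k += 1
                else:
                    B[k - 1], B[k] = B[k], B[k - 1]     # swap the two rows and step back
                    Bstar, mu = gram_schmidt(B)
                    k = max(k - 1, 1)
            return B

        print(lll_reduce([[1, -1, 3], [1, 0, 5], [1, 2, 6]]))  # a small 3x3 example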

    Certified lattice reduction

    Quadratic form reduction and lattice reduction are fundamental tools in computational number theory and in computer science, especially in cryptography. The celebrated Lenstra-Lenstra-LovĂĄsz reduction algorithm (so-called LLL) has been improved in many ways through the past decades and remains one of the central methods for reducing integral lattice bases. In particular, its floating-point variants, in which the rational arithmetic required by Gram-Schmidt orthogonalization is replaced by floating-point arithmetic, are now the fastest known. However, the systematic study of the reduction theory of real quadratic forms or, more generally, of real lattices is not widely represented in the literature. When the problem arises, the lattice is usually replaced by an integral approximation of (a multiple of) the original lattice, which is then reduced. While practically useful and proven correct in some special cases, this method offers no guarantee of success in general. In this work, we present an adaptive-precision version of a generalized LLL algorithm that covers this case in all generality. In particular, we replace floating-point arithmetic by interval arithmetic to certify the behavior of the algorithm. We conclude by giving a typical application of the result in algebraic number theory for the reduction of ideal lattices in number fields. Comment: 23 pages
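
    To illustrate the certification idea only (a toy sketch, not the paper's algorithm): with interval arithmetic, a comparison such as the LovĂĄsz condition either holds for every point of the enclosures, fails for every point, or is inconclusive, in which case an adaptive-precision reduction restarts the step at higher precision. The hand-rolled interval type below works over ordinary floats and omits the outward (directed) rounding that a genuine implementation requires.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Interval:
            lo: float
            hi: float
            def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
            def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
            def __mul__(self, o):
                c = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
                return Interval(min(c), max(c))

        def lovasz_certified(bk_sq, bk1_sq, mu, delta=0.99):
            """Certified check of ||b*_k||^2 >= (delta - mu_{k,k-1}^2) * ||b*_{k-1}||^2.

            All arguments are Intervals enclosing the true Gram-Schmidt quantities.
            Returns True or False when the enclosures decide the inequality, and
            None when they overlap, i.e., the working precision was too low.
            """
            rhs = (Interval(delta, delta) - mu * mu) * bk1_sq
            if bk_sq.lo >= rhs.hi:
                return True      # holds for every admissible value
            if bk_sq.hi < rhs.lo:
                return False     # fails for every admissible value
            return None          # inconclusive: raise the precision and retry

        eps = 1e-9               # toy error bound attached to each computed quantity
        bk_sq  = Interval(3.0 - eps, 3.0 + eps)
        bk1_sq = Interval(3.0 - eps, 3.0 + eps)
        mu     = Interval(0.4 - eps, 0.4 + eps)
        print(lovasz_certified(bk_sq, bk1_sq, mu))   # True: 3.0 >= (0.99 - 0.16) * 3.0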

    Testing isomorphism of lattices over CM-orders

    A CM-order is a reduced order equipped with an involution that mimics complex conjugation. The Witt-Picard group of such an order is a certain group of ideal classes that is closely related to the "minus part" of the class group. We present a deterministic polynomial-time algorithm for the following problem, which may be viewed as a special case of the principal ideal testing problem: given a CM-order, decide whether two given elements of its Witt-Picard group are equal. In order to prevent coefficient blow-up, the algorithm operates with lattices rather than with ideals. An important ingredient is a technique introduced by Gentry and Szydlo in a cryptographic context. Our application of it to lattices over CM-orders hinges upon a novel existence theorem for auxiliary ideals, which we deduce from a result of Konyagin and Pomerance in elementary number theory. Comment: To appear in SIAM Journal on Computing

    Algebraic Approach to Physical-Layer Network Coding

    The problem of designing physical-layer network coding (PNC) schemes via nested lattices is considered. Building on the compute-and-forward (C&F) relaying strategy of Nazer and Gastpar, who demonstrated its asymptotic gain using information-theoretic tools, an algebraic approach is taken to show its potential in practical, non-asymptotic settings. A general framework is developed for studying nested-lattice-based PNC schemes, called lattice network coding (LNC) schemes for short, by making a direct connection between C&F and module theory. In particular, a generic LNC scheme is presented that makes no assumptions on the underlying nested lattice code. C&F is re-interpreted in this framework, and several generalized constructions of LNC schemes are given. The generic LNC scheme naturally leads to a linear network coding channel over modules, based on which non-coherent network coding can be achieved. Next, performance/complexity tradeoffs of LNC schemes are studied, with a particular focus on hypercube-shaped LNC schemes. The error probability of this class of LNC schemes is largely determined by the minimum inter-coset distances of the underlying nested lattice code. Several illustrative hypercube-shaped LNC schemes are designed based on Constructions A and D, showing that nominal coding gains of 3 to 7.5 dB can be obtained with reasonable decoding complexity. Finally, the possibility of decoding multiple linear combinations is considered and related to the shortest independent vectors problem. A notion of dominant solutions is developed, together with a suitable lattice-reduction-based algorithm. Comment: Submitted to IEEE Transactions on Information Theory, July 21, 2011. Revised version submitted Sept. 17, 2012. Final version submitted July 3, 201
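
    As a toy illustration of the compute-and-forward idea behind these schemes (an editorial example with invented parameters, not the paper's general LNC framework): take the simplest hypercube-shaped code, integer vectors modulo q, let the relay observe a noisy real combination of two codewords, and let it decode an integer combination of the messages by rounding to the integer lattice and reducing modulo q. Decoding succeeds as long as the effective noise, i.e., channel noise plus the mismatch between the real gains and the chosen integer coefficients, stays below half the lattice spacing; nontrivial nested lattice codes exist precisely to tolerate much more of it.

        import numpy as np

        rng = np.random.default_rng(1)
        q, n = 7, 8                            # modulus (hypercube side) and block length

        w1 = rng.integers(0, q, n)             # user 1's message
        w2 = rng.integers(0, q, n)             # user 2's message

        h1, h2 = 1.02, 1.98                    # real-valued channel gains
        a1, a2 = 1, 2                          # integer coefficients approximating (h1, h2)

        y = h1 * w1 + h2 * w2 + rng.normal(0.0, 0.02, n)   # relay's noisy observation

        # Decode the integer combination: round to the integer lattice Z^n, reduce mod q.
        u_hat  = np.mod(np.round(y).astype(int), q)
        u_true = np.mod(a1 * w1 + a2 * w2, q)

        print("decoded combination:", u_hat)
        print("target  combination:", u_true)
        print("success:", np.array_equal(u_hat, u_true))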

    Reductions from module lattices to free module lattices, and application to dequantizing module-LLL

    In this article, we give evidence that free modules (i.e., modules which admit a basis) are no weaker than arbitrary modules, when it comes to solving cryptographic algorithmic problems (and when the rank of the module is at least 2). More precisely, we show that for three algorithmic problems used in cryptography, namely the shortest vector problem, the Hermite shortest vector problem and a variant of the closest vector problem, there is a reduction from solving the problem in any module of rank n ≄ 2 to solving the problem in any free module of the same rank n. As an application, we show that this can be used to dequantize the LLL algorithm for module lattices presented by Lee et al. (Asiacrypt 2019).
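
    For context, and stated here for orientation rather than taken from the article: the standard structure theorem for finitely generated torsion-free modules over a Dedekind domain makes precise when a module admits a basis.

        Let $K$ be a number field with ring of integers $\mathcal{O}_K$, and let
        $M \subseteq K^m$ be a finitely generated, torsion-free $\mathcal{O}_K$-module
        of rank $n$. Then
        $$ M \;=\; I_1 b_1 \oplus \cdots \oplus I_n b_n $$
        for some fractional ideals $I_1,\dots,I_n$ and $K$-linearly independent vectors
        $b_1,\dots,b_n$ (a pseudo-basis), and $M$ is free, i.e., admits a basis, exactly
        when the Steinitz class $[I_1 \cdots I_n]$ is trivial in the class group of
        $\mathcal{O}_K$.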

    Gradual sub-lattice reduction and a new complexity for factoring polynomials

    We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are boundably short. For such applications, the complexity of the algorithm improves on traditional lattice reduction by replacing some dependence on the bit-length of the input vectors with a dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic complexity of floating-point LLL algorithms. To illustrate the usefulness of this algorithm, we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.
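
    As a concrete instance of a knapsack-type basis with a boundably short target (an editorial example, not taken from the paper): to reconstruct the minimal polynomial of an algebraic number alpha from a numerical approximation, one reduces a basis whose rows pair a unit vector with a scaled, rounded power of alpha. The input entries grow with the scaling factor C, while the target vector only carries the small coefficients of the minimal polynomial, which is exactly the regime the abstract describes. The snippet below only builds such a basis; feeding it to any LLL routine (for instance the sketch after the first abstract above) yields a short row encoding x^3 - 2 for alpha = 2^(1/3).

        C = 10**8                    # scaling factor: more precision, more reliable recovery
        d = 3                        # degree bound for the minimal polynomial
        alpha = 2 ** (1.0 / 3.0)     # the algebraic number, known only approximately

        # Row i is (0, ..., 1, ..., 0 | round(C * alpha^i)): a knapsack-type basis whose
        # short vectors (c_0, ..., c_d, eps) satisfy c_0 + c_1*alpha + ... + c_d*alpha^d ~ 0.
        basis = []
        for i in range(d + 1):
            row = [0] * (d + 1)
            row[i] = 1
            row.append(round(C * alpha ** i))
            basis.append(row)

        for row in basis:
            print(row)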
