
    Gradual sub-lattice reduction and a new complexity for factoring polynomials

    We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are boundably short. For such applications, the complexity of the algorithm improves traditional lattice reduction by replacing some dependence on the bit-length of the input vectors by some dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic-complexity floating-point LLL algorithms. To illustrate the usefulness of this algorithm we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.
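    To make the setting concrete, the sketch below runs a plain textbook LLL (exact rational arithmetic, no floating point) on a small knapsack-type basis of the kind the abstract alludes to. The subset-sum instance is a made-up toy, and the code illustrates classical LLL only, not the paper's gradual sub-lattice reduction.

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Exact Gram-Schmidt orthogonalization; returns GSO vectors and mu coefficients."""
    n, m = len(basis), len(basis[0])
    gso, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in basis[i]]
        for j in range(i):
            mu[i][j] = (sum(Fraction(basis[i][k]) * gso[j][k] for k in range(m))
                        / sum(g * g for g in gso[j]))
            v = [v[k] - mu[i][j] * gso[j][k] for k in range(m)]
        gso.append(v)
    return gso, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL; recomputes the GSO at every step (slow but easy to follow)."""
    basis = [list(v) for v in basis]
    n, m = len(basis), len(basis[0])
    sq = lambda v: sum(x * x for x in v)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):                 # size-reduce b_k
            _, mu = gram_schmidt(basis)
            q = round(mu[k][j])
            if q:
                basis[k] = [basis[k][i] - q * basis[j][i] for i in range(m)]
        gso, mu = gram_schmidt(basis)
        if sq(gso[k]) >= (delta - mu[k][k - 1] ** 2) * sq(gso[k - 1]):
            k += 1                                     # Lovász condition holds
        else:
            basis[k], basis[k - 1] = basis[k - 1], basis[k]
            k = max(k - 1, 1)
    return basis

# Knapsack-type basis for a toy subset-sum instance (hypothetical numbers):
# weights a_i and target s, with the last column scaled by K so that any
# vector encoding a solution subset has a zero there and is therefore short.
weights, target, K = [3, 7, 12, 31, 55], 70, 1000
n = len(weights)
B = [[int(i == j) for j in range(n)] + [K * weights[i]] for i in range(n)]
B.append([0] * n + [K * target])
for row in lll(B):
    print(row)
# A row such as [1, 0, 1, 0, 1, 0] (up to sign) encodes the subset {3, 12, 55}.
```

    The solution vector has squared norm equal to the subset size, far below the lengths of the input rows, which is what makes such knapsack-type bases amenable to reduction.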

    Rounding and Chaining LLL: Finding Faster Small Roots of Univariate Polynomial Congruences

    In a seminal work at EUROCRYPT '96, Coppersmith showed how to find all small roots of a univariate polynomial congruence in polynomial time: this has found many applications in public-key cryptanalysis and in a few security proofs. However, the running time of the algorithm is a high-degree polynomial, which limits experiments: the bottleneck is an LLL reduction of a high-dimensional matrix with extra-large coefficients. We present in this paper the first significant speedups over Coppersmith's algorithm. The first speedup is based on a special property of the matrices used by Coppersmith's algorithm, which allows us to provably speed up the LLL reduction by rounding, and which can also be used to improve the complexity analysis of Coppersmith's original algorithm. The exact speedup depends on the LLL algorithm used: for instance, the speedup is asymptotically quadratic in the bit-size of the small-root bound if one uses the Nguyen-Stehlé L2 algorithm. The second speedup is heuristic and applies whenever one wants to enlarge the root size of Coppersmith's algorithm by exhaustive search. Instead of performing several LLL reductions independently, we exploit hidden relationships between these matrices so that the LLL reductions can be somewhat chained to decrease the global running time. When both speedups are combined, the new algorithm is in practice hundreds of times faster for typical parameters.
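    For concreteness, here is a minimal sketch of the standard Coppersmith/Howgrave-Graham lattice whose LLL reduction is the bottleneck discussed above. It is not the paper's rounding or chaining technique; the instance (N, f, X, m) is a toy example, and the fpylll library is assumed to be available for the reduction step.

```python
# Standard Coppersmith-style lattice for a small root of f(x) = 0 (mod N).
# The rows span polynomials that all vanish modulo N^m at the unknown root,
# so a short enough combination found by LLL vanishes over the integers.
from fpylll import IntegerMatrix, LLL

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (low degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def small_root_lattice(f, N, X, m):
    """Rows are the coefficients of g_{i,j}(x*X) = N^(m-i) * x^j * f(x)^i."""
    d = len(f) - 1
    dim = d * m
    rows, fi = [], [1]                               # fi holds f(x)^i, starting at i = 0
    for i in range(m):
        for j in range(d):
            g = [0] * j + [c * N ** (m - i) for c in fi]
            g += [0] * (dim - len(g))
            rows.append([g[k] * X ** k for k in range(dim)])
        fi = poly_mul(fi, f)
    return rows

# Toy instance: f(x) = x^2 + x + b has the small root x0 = 16 modulo N.
N, x0 = 1000003, 16
f = [(-(x0 ** 2 + x0)) % N, 1, 1]                    # constant term first
X, m = 20, 2                                         # root bound and multiplicity parameter

rows = small_root_lattice(f, N, X, m)
M = IntegerMatrix(len(rows), len(rows))
for i, row in enumerate(rows):
    for j, c in enumerate(row):
        M[i, j] = c
LLL.reduction(M)

# Read the shortest reduced row as a polynomial h(x*X), undo the scaling by X,
# and look for integer roots in [-X, X]; these are roots of f modulo N.
h = [M[0, k] // X ** k for k in range(len(rows))]
roots = [x for x in range(-X, X + 1) if sum(c * x ** k for k, c in enumerate(h)) == 0]
print("candidate small roots:", roots)               # should contain 16 for this toy instance
```

    Real uses of Coppersmith's method add further shift polynomials to push the root bound toward its asymptotic value; the square lattice above keeps the sketch short.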

    Quantum LLL with an Application to Mersenne Number Cryptosystems

    In this work we analyze the impact of translating the well-known LLL algorithm for lattice reduction into the quantum setting. We present the first (to the best of our knowledge) quantum circuit representation of a lattice reduction algorithm in the form of explicit quantum circuits implementing the textbook LLL algorithm. Our analysis identifies a set of challenges arising from constructing reversible lattice reduction as well as solutions to these challenges. We give a detailed resource estimate with the Toffoli gate count and the number of logical qubits as complexity metrics. As an application, we attack Mersenne number cryptosystems by Groverizing an attack due to Beunardeau et al. that uses LLL as a subprocedure. While Grover's quantum algorithm promises a quadratic speedup over exhaustive search given access to an oracle that distinguishes solutions from non-solutions, we show that in our case, realizing the oracle comes at the cost of a large number of qubits. When an adversary translates the attack by Beunardeau et al. into the quantum setting, the overhead of the quantum LLL circuit may be as large as $2^{52}$ qubits for the textbook implementation and $2^{33}$ for a floating-point variant.
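    The trade-off the abstract points to is that Grover search needs only about the square root of the classical number of oracle calls, but each call must run a reversible LLL circuit. The back-of-the-envelope count below illustrates the query side of that trade-off; the numbers are hypothetical placeholders, not the paper's resource estimates.

```python
# Grover search over a space of size 2^n needs roughly (pi/4) * 2^(n/2) oracle
# calls, so the total cost is dominated by how expensive a single oracle call
# (here, one reversible LLL reduction) is to realize.
import math

def grover_iterations(search_space_bits, marked=1):
    """Approximate number of Grover iterations (one oracle call each)."""
    n = 2 ** search_space_bits
    return math.floor((math.pi / 4) * math.sqrt(n / marked))

search_bits = 40                                  # hypothetical size of the key-guess space
classical_queries = 2 ** (search_bits - 1)        # expected classical guesses
quantum_queries = grover_iterations(search_bits)

print(f"classical (expected): ~2^{math.log2(classical_queries):.1f} oracle calls")
print(f"Grover:               ~2^{math.log2(quantum_queries):.1f} oracle calls")
```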

    Certified lattice reduction

    Quadratic form reduction and lattice reduction are fundamental tools in computational number theory and in computer science, especially in cryptography. The celebrated Lenstra-Lenstra-Lovász reduction algorithm (so-called LLL) has been improved in many ways through the past decades and remains one of the central methods used for reducing integral lattice bases. In particular, its floating-point variants, where the rational arithmetic required by Gram-Schmidt orthogonalization is replaced by floating-point arithmetic, are now the fastest known. However, the systematic study of the reduction theory of real quadratic forms or, more generally, of real lattices is not widely represented in the literature. When the problem arises, the lattice is usually replaced by an integral approximation of (a multiple of) the original lattice, which is then reduced. While practically useful and proven in some special cases, this method doesn't offer any guarantee of success in general. In this work, we present an adaptive-precision version of a generalized LLL algorithm that covers this case in all generality. In particular, we replace floating-point arithmetic by interval arithmetic to certify the behavior of the algorithm. We conclude by giving a typical application of the result in algebraic number theory for the reduction of ideal lattices in number fields.
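    The certification idea can be illustrated with a few lines of interval arithmetic: every approximate quantity in the reduction is carried as an enclosing interval, and a branch such as the Lovász test is taken only when the two sides are disjoint intervals; otherwise the working precision must be raised. The toy sketch below (hand-rolled intervals without outward rounding, made-up numbers) only conveys that idea and is not the paper's adaptive-precision algorithm.

```python
# Minimal interval type; real interval arithmetic would also round the
# endpoints outward (directed rounding), which is omitted here for brevity.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

def certainly_less(a: Interval, b: Interval):
    """True/False when the comparison is certain, None when the intervals
    overlap, i.e. the current precision is insufficient to decide."""
    if a.hi < b.lo:
        return True
    if b.hi < a.lo:
        return False
    return None

# Lovász-type test (delta - mu^2) * ||b*_{k-1}||^2 > ||b*_k||^2 on intervals,
# with hypothetical enclosures of the GSO quantities involved.
delta     = Interval(0.99, 0.99)
mu        = Interval(0.30, 0.31)   # enclosure of a Gram-Schmidt coefficient
norm_prev = Interval(3.9, 4.1)     # enclosure of ||b*_{k-1}||^2
norm_cur  = Interval(3.4, 3.6)     # enclosure of ||b*_k||^2

lhs = (delta - mu * mu) * norm_prev
verdict = certainly_less(norm_cur, lhs)
print("swap needed:", verdict, "(None means: increase precision and retry)")
```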