
    A New Algorithm for Solving Ring-LPN with a Reducible Polynomial

    The LPN (Learning Parity with Noise) problem has recently proved to be of great importance in cryptology. A special and very useful case is the Ring-LPN problem, which typically provides improved efficiency in the constructed cryptographic primitive. We present a new algorithm for solving the Ring-LPN problem in the case when the polynomial used is reducible. It greatly outperforms previous algorithms for solving this problem. Using the algorithm, we can break the Lapin authentication protocol for the proposed instance using a reducible polynomial, in about 2^70 bit operations.
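    As a point of reference for the attack complexity above, the following is a minimal sketch of how plain LPN samples are generated; the plain (non-ring) setting, the parameter values, and the function name are illustrative only and do not correspond to the Lapin instance attacked in the paper.

```python
# Illustrative sketch of plain LPN sample generation (not the Ring-LPN/Lapin
# instance from the paper); parameter values are arbitrary toy examples.
import random

def lpn_samples(secret, num_samples, tau):
    """Return noisy parity samples (a, <a, s> XOR e) with Pr[e = 1] = tau."""
    n = len(secret)
    out = []
    for _ in range(num_samples):
        a = [random.randint(0, 1) for _ in range(n)]
        parity = sum(ai & si for ai, si in zip(a, secret)) % 2
        e = 1 if random.random() < tau else 0
        out.append((a, parity ^ e))
    return out

secret = [random.randint(0, 1) for _ in range(32)]   # toy size, not a real instance
samples = lpn_samples(secret, 1000, tau=1/8)
```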

    New Algorithms for Solving LPN

    The intractability of solving the LPN problem serves as the security source of many lightweight/post-quantum cryptographic schemes proposed over the past decade. There are several algorithms available so far to fulfill the solving task. In this paper, we present further algorithmic improvements to the existing work. We describe the first efficient algorithm for the single-list k-sum problem, which naturally arises from the various BKW reduction settings, propose the hybrid mode of BKW reduction, and show how to compute the matrix multiplication in the Gaussian elimination step with flexible and reduced time/memory complexities. The new algorithms yield the best known tradeoffs on the time/memory/data complexity curve and clearly compromise the 80-bit security of the LPN instances suggested in cryptographic schemes. Practical experiments on reduced LPN instances verified our results.
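    To make the role of the BKW reductions concrete, here is a hedged sketch of a single reduction step in which samples agreeing on a chosen block are combined to cancel that block; it is a toy illustration, not the single-list k-sum or hybrid reduction proposed in the paper.

```python
# Toy sketch of one BKW-style reduction step: samples whose vectors agree on
# a b-bit block are XORed pairwise, zeroing that block while roughly doubling
# the noise. Names and structure are illustrative, not the paper's algorithm.
from collections import defaultdict

def bkw_reduce(samples, block_start, block_len):
    """samples: list of (a, z) with a a bit list and z a noisy parity bit."""
    buckets = defaultdict(list)
    for a, z in samples:
        key = tuple(a[block_start:block_start + block_len])
        buckets[key].append((a, z))
    reduced = []
    for group in buckets.values():
        rep_a, rep_z = group[0]                    # one representative per bucket
        for a, z in group[1:]:
            new_a = [x ^ y for x, y in zip(a, rep_a)]
            reduced.append((new_a, z ^ rep_z))     # chosen block is now all zeros
    return reduced
```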

    On solving LPN using BKW and variants: Implementation and Analysis

    The Learning Parity with Noise problem (LPN) is appealing in cryptography as it is considered to remain hard in the post-quantum world. It is also a good candidate for lightweight devices due to its simplicity. In this paper we provide a comprehensive analysis of the existing LPN solving algorithms, both for the general case and for the sparse secret scenario. In practice, LPN-based cryptographic constructions use as a reference the security parameters proposed by Levieil and Fouque. For these parameters, however, there remains a gap between the theoretical analysis and the practical complexities of the algorithms we consider. The new theoretical analysis in this paper provides tighter bounds on the complexity of LPN solving algorithms and narrows this gap between theory and practice. We show that for a sparse secret there is another algorithm that outperforms BKW and its variants. Following from our results, we further propose practical parameters for different security levels.
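    For intuition about the solving phase whose complexity such analyses bound, the sketch below recovers a small block of the secret by exhaustive maximum-likelihood voting once the reductions have zeroed all other coordinates; the actual algorithms replace this naive enumeration with a fast Walsh-Hadamard transform, so the function is only an illustrative stand-in.

```python
# Illustrative solving phase after BKW-style reductions: every remaining
# sample depends only on a b-bit block of the secret, so the block can be
# recovered by voting over all 2^b candidates. Real implementations use a
# fast Walsh-Hadamard transform instead of this naive enumeration.
from itertools import product

def recover_block(samples, b):
    """samples: list of (a, z) where a has length b; returns the best guess."""
    best_guess, best_score = None, -1
    for cand in product([0, 1], repeat=b):
        score = sum((sum(ai & ci for ai, ci in zip(a, cand)) % 2) == z
                    for a, z in samples)
        if score > best_score:
            best_guess, best_score = cand, score
    return list(best_guess)
```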

    LPN in Cryptography: an Algorithmic Study

    The security of public-key cryptography relies on well-studied hard problems, problems for which we do not have efficient algorithms. Factorization and discrete logarithm are the two best-known and most widely used hard problems. Unfortunately, they can be easily solved on a quantum computer by Shor's algorithm. Moreover, cryptography demands crypto-diversity: we should offer a range of hard problems for public-key cryptography, so that if one hard problem proves to be easy, we can provide alternative solutions. Some of the candidates for post-quantum hard problems, i.e. problems which are believed to be hard even on a quantum computer, are Learning Parity with Noise (LPN), Learning with Errors (LWE) and the Shortest Vector Problem (SVP). A thorough study of these problems is needed in order to assess their hardness. In this thesis we focus on the algorithmic study of LPN. LPN is an attractive hard problem, as it is believed to be post-quantum resistant and suitable for lightweight devices. In practice, it has been employed in several encryption schemes and authentication protocols. At the beginning of this thesis, we review the existing LPN solving algorithms. We provide the theoretical analysis that assesses their complexity and compare the theoretical results with practice by implementing these algorithms. We study the efficiency of all LPN solving algorithms, which allows us to provide secure parameters that can be used in practice. We then push the state of the art further by improving the existing algorithms with the help of two new frameworks. In the first framework, we split an LPN solving algorithm into atomic steps. We study their complexity and how they impact the other steps, and we construct an algorithm that optimises their use. Given an LPN instance characterized by its noise level and secret size, our algorithm provides the steps to follow in order to solve the instance with optimal complexity. In this way, we can assess whether an LPN instance provides the security we require, and we show which instances are secure for the applications that rely on LPN. The second framework handles problems that can be decomposed into steps of equal complexity. Here, we assume an adversary that has access to a finite or infinite number of instances of the same problem and whose goal is to succeed in just one instance as soon as possible. Our framework provides the strategy that achieves this. We characterize an LPN solving algorithm in this framework and show that we can improve its complexity in the scenario where the adversary is restricted. We show that other problems, like password guessing, can be modeled in the same framework.

    An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices

    In this paper, we study the Learning With Errors problem and its binary variant, where secrets and errors are binary or taken in a small interval. We introduce a new variant of the Blum, Kalai and Wasserman algorithm, relying on a quantization step that generalizes and fine-tunes modulus switching. In general this new technique yields a significant gain in the constant in front of the exponent in the overall complexity. We illustrate this by solving within half a day an LWE instance with dimension n = 128, modulus q = n^2, Gaussian noise α = 1/(√(n/π)·log^2 n) and binary secret, using 2^28 samples, while the previous best result based on BKW claims a time complexity of 2^74 with 2^60 samples for the same parameters. We then introduce variants of BDD, GapSVP and UniqueSVP, where the target point is required to lie in the fundamental parallelepiped, and show how the previous algorithm is able to solve these variants in subexponential time. Moreover, we also show how the previous algorithm can be used to solve the BinaryLWE problem with n samples in subexponential time 2^((ln 2/2 + o(1))·n / log log n). This analysis does not require any heuristic assumption, contrary to other algebraic approaches; instead, it uses a variant of an idea by Lyubashevsky to generate many samples from a small number of samples. This makes it possible to asymptotically and heuristically break the NTRU cryptosystem in subexponential time (without contradicting its security assumption). We are also able to solve subset sum problems in subexponential time for density o(1), which is of independent interest: for such density, the previous best algorithm requires exponential time. As a direct application, we can solve in subexponential time the parameters of a cryptosystem based on this problem proposed at TCC 2010. Comment: CRYPTO 2015
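    Since modulus switching is the operation that the quantization step above generalizes, here is a hedged sketch of the basic transformation on a single LWE sample; the concrete values are toy examples and the function is not the paper's fine-tuned quantizer.

```python
# Toy sketch of modulus switching for an LWE sample (a, b) mod q -> mod p:
# every coordinate is rescaled by p/q and rounded, trading a smaller modulus
# for a small extra rounding error. Not the paper's fine-tuned quantization.
def modulus_switch(a, b, q, p):
    """a: list of ints mod q, b: int mod q; returns the switched sample mod p."""
    def switch(x):
        return round(x * p / q) % p
    return [switch(ai) for ai in a], switch(b)

# Example with illustrative moduli (q = n^2 for n = 128, p chosen much smaller).
a_p, b_p = modulus_switch([1023, 8000, 300], 777, q=128**2, p=512)
```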

    Trusted-HB: a low-cost version of HB+ secure against Man-in-The-Middle attacks

    Since the introduction at Crypto'05 by Juels and Weis of the protocol HB+, a lightweight protocol secure against active attacks but only in a detection-based model, many works have tried to enhance its security. We propose here a new approach to achieve resistance against Man-in-The-Middle attacks. Our requirements - in terms of extra communications and hardware - are surprisingly low. Comment: submitted to IEEE Transactions on Information Theory
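    For context, a minimal sketch of one round of the underlying HB+ protocol (the base protocol being hardened, not the Trusted-HB modification itself) is given below; vector lengths and the noise rate are illustrative.

```python
# Minimal sketch of a single HB+ round (the base protocol, not Trusted-HB):
# tag and reader share secret bit vectors x and y; the tag answers a noisy
# parity of the reader's challenge and its own blinding vector.
import random

def dot(u, v):
    return sum(a & b for a, b in zip(u, v)) % 2

def hb_plus_round(x, y, eta):
    b = [random.randint(0, 1) for _ in x]      # tag's blinding vector
    a = [random.randint(0, 1) for _ in x]      # reader's challenge
    noise = 1 if random.random() < eta else 0  # Bernoulli(eta) noise bit
    z = dot(a, x) ^ dot(b, y) ^ noise          # tag's response
    return z == dot(a, x) ^ dot(b, y)          # reader accepts iff noise == 0

# Over many rounds the reader authenticates the tag if the number of rejected
# rounds stays below a threshold calibrated to eta.
x = [random.randint(0, 1) for _ in range(64)]
y = [random.randint(0, 1) for _ in range(64)]
ok_rounds = sum(hb_plus_round(x, y, eta=0.125) for _ in range(100))
```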

    Regularized Optimal Transport and the Rot Mover's Distance

    This paper presents a unified framework for smooth convex regularization of discrete optimal transport problems. In this context, the regularized optimal transport turns out to be equivalent to a matrix nearness problem with respect to Bregman divergences. Our framework thus naturally generalizes a previously proposed regularization based on the Boltzmann-Shannon entropy related to the Kullback-Leibler divergence, and solved with the Sinkhorn-Knopp algorithm. We call the regularized optimal transport distance the rot mover's distance in reference to the classical earth mover's distance. We develop two generic schemes that we respectively call the alternate scaling algorithm and the non-negative alternate scaling algorithm, to efficiently compute the regularized optimal plans depending on whether the domain of the regularizer lies within the non-negative orthant or not. These schemes are based on Dykstra's algorithm with alternate Bregman projections, and further exploit the Newton-Raphson method when applied to separable divergences. We enhance the separable case with a sparse extension to deal with high data dimensions. We also instantiate our proposed framework and discuss the inherent specificities for well-known regularizers and statistical divergences in the machine learning and information geometry communities. Finally, we demonstrate the merits of our methods with experiments using synthetic data to illustrate the effect of different regularizers and penalties on the solutions, as well as real-world data for a pattern recognition application to audio scene classification.
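    As a concrete reference point for the entropic special case mentioned above, here is a hedged sketch of the classical Sinkhorn-Knopp iteration for entropy-regularized optimal transport, the instance that the paper's more general alternate scaling schemes recover; variable names and the fixed iteration count are illustrative.

```python
# Sketch of the Sinkhorn-Knopp iteration for entropy (Boltzmann-Shannon)
# regularized optimal transport; the paper's alternate scaling schemes
# generalize this to other Bregman divergences. Names and the fixed number
# of iterations are illustrative choices.
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iters=500):
    """mu, nu: source/target histograms; C: cost matrix; eps: regularization."""
    K = np.exp(-C / eps)                    # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)                  # alternate scaling updates
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]      # regularized transport plan

mu = np.array([0.5, 0.5])
nu = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
plan = sinkhorn(mu, nu, C, eps=0.1)         # rows sum to mu, columns to nu
```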
