113 research outputs found

    Learning with Errors is easy with quantum samples

    Learning with Errors is one of the fundamental problems in computational learning theory and has in recent years become the cornerstone of post-quantum cryptography. In this work, we study the quantum sample complexity of Learning with Errors and show that there exists an efficient quantum learning algorithm (with polynomial sample and time complexity) for the Learning with Errors problem where the error distribution is the one used in cryptography. While our quantum learning algorithm does not break the LWE-based encryption schemes proposed in the cryptography literature, it does have some interesting implications for cryptography: first, when building an LWE-based scheme, one needs to be careful about the access to the public-key generation algorithm that is given to the adversary; second, our algorithm shows a possible way of attacking LWE-based encryption by using classical samples to approximate the quantum sample state, since then our quantum learning algorithm would solve LWE.
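
    To fix notation, here is a minimal sketch of how classical LWE samples (a, <a, s> + e mod q) are generated; the parameters n, q, and the error width sigma are illustrative assumptions, and the quantum algorithm above instead assumes access to a superposition over such samples.

        import numpy as np

        # Toy LWE sample generation; n, q, sigma are illustrative, not secure parameters.
        n, q, sigma = 16, 3329, 3.2
        rng = np.random.default_rng(0)
        s = rng.integers(0, q, size=n)              # secret vector in Z_q^n

        def lwe_sample():
            """Return one classical LWE sample (a, b) with b = <a, s> + e mod q."""
            a = rng.integers(0, q, size=n)
            e = int(np.rint(rng.normal(0, sigma)))  # rounded Gaussian error term
            b = (int(a @ s) + e) % q
            return a, b

        samples = [lwe_sample() for _ in range(5)]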

    Practically Solving LPN in High Noise Regimes Faster Using Neural Networks

    We conduct a systematic study of solving the learning parity with noise (LPN) problem using neural networks. Our main contribution is designing families of two-layer neural networks that practically outperform classical algorithms in high-noise, low-dimension regimes. We consider three settings where the number of LPN samples is abundant, very limited, and in between. In each setting we provide neural network models that solve LPN as fast as possible. For some settings we are also able to provide theories that explain the rationale behind the design of our models. Compared with the previous experiments of Esser, Kübler, and May (CRYPTO 2017), for dimension n = 26 and noise rate τ = 0.498, the "Guess-then-Gaussian-elimination" algorithm takes 3.12 days on 64 CPU cores, whereas our neural network algorithm takes 66 minutes on 8 GPUs. Our algorithm can also be plugged into hybrid algorithms for solving middle or large dimension LPN instances.
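
    As a rough illustration of the setup (my own toy sketch with NumPy and made-up parameters, not the paper's architectures): generate LPN samples and fit a small two-layer network to predict the noisy label b from a. Training accuracy approaching 1 - τ suggests the parity has been learned, though convergence is not guaranteed at this toy scale.

        import numpy as np

        # Toy LPN instance; n, tau, the sample count, and the network size are assumptions.
        n, tau, m = 10, 0.125, 5000
        rng = np.random.default_rng(1)
        s = rng.integers(0, 2, size=n)
        A = rng.integers(0, 2, size=(m, n))                  # samples a_i
        b = (A @ s + (rng.random(m) < tau)) % 2              # noisy labels <a_i, s> + e_i

        # Two-layer network: tanh hidden layer, sigmoid output, plain SGD on the logistic loss.
        h, lr = 64, 0.1
        W1 = rng.normal(0, 0.1, size=(n, h)); b1 = np.zeros(h)
        W2 = rng.normal(0, 0.1, size=h);      b2 = 0.0

        for epoch in range(30):
            for i in rng.permutation(m):
                x, y = A[i], b[i]
                z1 = np.tanh(x @ W1 + b1)
                p = 1.0 / (1.0 + np.exp(-(z1 @ W2 + b2)))    # predicted Pr[b_i = 1]
                d2 = p - y                                   # gradient of the logistic loss
                W2 -= lr * d2 * z1; b2 -= lr * d2
                d1 = d2 * W2 * (1.0 - z1 ** 2)
                W1 -= lr * np.outer(x, d1); b1 -= lr * d1

        acc = np.mean(((np.tanh(A @ W1 + b1) @ W2 + b2) > 0) == b)
        print(f"training accuracy: {acc:.3f}")               # near 1 - tau indicates success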

    Cryptography based on the Hardness of Decoding

    This thesis provides progress in the fields of lattice-based and code-based cryptography. The first contribution consists of constructions of IND-CCA2-secure public-key cryptosystems from both the McEliece and the low-noise learning parity with noise assumptions. The second contribution is a novel instantiation of the lattice-based learning with errors problem which uses uniform errors.

    Adaptive learning and cryptography

    Significant links exist between cryptography and computational learning theory. Cryptographic functions are the usual method of demonstrating significant intractability results in computational learning theory, as they can demonstrate that certain problems are hard in a representation-independent sense. On the other hand, hard learning problems have been used to create efficient cryptographic protocols such as authentication schemes, pseudo-random permutations and functions, and even public-key encryption schemes.

    Learning theory and coding theory also impact cryptography in that they enable cryptographic primitives to deal with noise or bias in their inputs. Several different constructions of fuzzy primitives exist, a fuzzy primitive being a primitive which functions correctly even in the presence of noisy or non-uniform inputs. Examples of these primitives include error-correcting block ciphers, fuzzy identity-based cryptosystems, fuzzy extractors and fuzzy sketches. Error-correcting block ciphers combine encryption and error correction in a single function, which results in increased efficiency. Fuzzy identity-based encryption allows the decryption of any ciphertext that was encrypted under a close enough identity. Fuzzy extractors and sketches are methods of reliably (re)producing a uniformly random secret key from an imperfectly reproducible string drawn from a biased source, using a public string called the sketch.

    While hard learning problems have many qualities which make them useful in constructing cryptographic protocols, such as their inherent error tolerance and simple algebraic structure, it is often difficult to use them to construct very secure protocols due to assumptions they make on the learning algorithm. Because of these assumptions, the resulting protocols often lack security against various types of adaptive adversaries. To help deal with this issue, we further examine the inter-relationships between cryptography and learning theory by introducing the concept of adaptive learning. Adaptive learning is a rather weak form of learning in which the learner is not expected to closely approximate the concept function in its entirety; rather, it is only expected to answer a query of the learner's choice about the target. Adaptive learning allows for a much weaker learner than in the standard model, while maintaining the positive properties of many learning problems in the standard model, a fact which we feel makes problems that are hard to learn adaptively more useful than standard-model learning problems in the design of cryptographic protocols. We argue that learning parity with noise is hard to learn adaptively and use that assumption to construct a related-key secure, efficient MAC as well as an efficient authentication scheme. In addition, we examine the security properties of fuzzy sketches and extractors and demonstrate how these properties can be combined by using our related-key secure MAC. We go on to demonstrate that our extractor can allow a form of related-key hardening for protocols in that, by affecting how the key for a primitive is stored, it renders that protocol immune to related-key attacks.
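
    To make the fuzzy-sketch idea concrete, here is a minimal sketch of the classic code-offset construction (my own toy example using a binary repetition code and NumPy, not the thesis's construction): the public sketch is w XOR C(x) for a random codeword C(x), and any reading w' close enough to w can be decoded back to the original w.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy repetition code: K message bits, each repeated R times (corrects < R/2 flips per block).
        R, K = 5, 3

        def encode(msg):
            return np.repeat(msg, R)

        def decode(word):
            return (word.reshape(K, R).sum(axis=1) > R // 2).astype(int)

        # Code-offset secure sketch: publish ss = w XOR C(x) for a random message x.
        w = rng.integers(0, 2, size=K * R)        # noisy-source reading (e.g. a biometric)
        x = rng.integers(0, 2, size=K)
        ss = w ^ encode(x)                        # public sketch

        # Recovery from a noisy re-reading w' that differs from w in a few positions.
        w_noisy = w ^ (rng.random(K * R) < 0.1).astype(int)
        x_rec = decode(w_noisy ^ ss)              # decode w' XOR ss = C(x) XOR noise
        w_rec = ss ^ encode(x_rec)                # re-derive the original reading w

        print("recovered w exactly:", bool(np.array_equal(w_rec, w)))  # True unless a block had too many flips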

    Correlated Pseudorandomness from the Hardness of Quasi-Abelian Decoding

    Secure computation often benefits from the use of correlated randomness to achieve fast, non-cryptographic online protocols. A recent paradigm put forth by Boyle et al. (CCS 2018, Crypto 2019) showed how pseudorandom correlation generators (PCGs) can be used to generate large amounts of useful forms of correlated (pseudo)randomness, using minimal interactions followed solely by local computations, yielding silent secure two-party computation protocols (protocols where the preprocessing phase requires almost no communication). An additional property called programmability allows extending this to build N-party protocols. However, known constructions of programmable PCGs can only produce OLEs over large fields, and rely on the rather new splittable ring-LPN assumption. In this work, we overcome both limitations. To this end, we introduce the quasi-abelian syndrome decoding problem (QA-SD), a family of assumptions which generalises the well-established quasi-cyclic syndrome decoding assumption. Building upon QA-SD, we construct new programmable PCGs for OLEs over any field F_q with q > 2. Our analysis also sheds light on the security of the ring-LPN assumption used in Boyle et al. (Crypto 2020). Using our new PCGs, we obtain the first efficient N-party silent secure computation protocols for computing general arithmetic circuits over F_q for any q > 2.
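
    For orientation, a toy sketch of the plain syndrome decoding problem that QA-SD generalises (my own illustration with NumPy and made-up parameters, not the paper's quasi-abelian instances): given a random parity-check matrix H over F_q and the syndrome s = H·e of a sparse error e, recover e. The brute-force search below is only feasible at toy sizes.

        import numpy as np
        from itertools import combinations, product

        # Toy syndrome decoding instance over F_q; q, n, k, and the weight w are assumptions.
        q, n, k, w = 5, 20, 10, 2
        rng = np.random.default_rng(3)

        H = rng.integers(0, q, size=(n - k, n))          # random parity-check matrix
        e = np.zeros(n, dtype=int)
        support = rng.choice(n, size=w, replace=False)
        e[support] = rng.integers(1, q, size=w)          # sparse error of weight w
        s = (H @ e) % q                                  # public syndrome

        def brute_force(H, s, w):
            """Exhaustively search for a weight-w error with syndrome s (toy sizes only)."""
            for supp in combinations(range(H.shape[1]), w):
                for vals in product(range(1, q), repeat=w):
                    cand = np.zeros(H.shape[1], dtype=int)
                    cand[list(supp)] = vals
                    if np.array_equal((H @ cand) % q, s):
                        return cand
            return None

        print("error recovered:", np.array_equal(brute_force(H, s, w), e))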

    Notes on Lattice-Based Cryptography

    Public-key cryptography relies on the assumption that some computational problems are hard to solve. In 1994, Peter Shor showed that the two most used computational problems, namely the Discrete Logarithm Problem and the Integer Factoring Problem, are no longer hard to solve when using a quantum computer. Since then, researchers have worked on finding new computational problems that are resistant to quantum attacks to replace these two. Lattice-based cryptography is the research field that employs cryptographic primitives involving hard problems defined on lattices, such as the Shortest Vector Problem and the Closest Vector Problem. The NTRU cryptosystem, published in 1998, was one of the first to be introduced in this field. The Learning With Errors (LWE) problem was introduced in 2005 by Regev, and it is now considered one of the most promising computational problems to be employed on a large scale in the near future. Studying its hardness and finding new and faster algorithms that solve it has become a leading research topic in cryptology.
    This doctoral thesis includes the following contributions to the field:
    - A non-trivial reduction of the Mersenne Low Hamming Combination Search Problem, the underlying problem of an NTRU-like cryptosystem, to Integer Linear Programming (ILP). In particular, we find a family of weak keys.
    - A concrete security analysis of Integer-RLWE, a hard computational problem variant of LWE introduced by Gu Chunsheng. We formalize a meet-in-the-middle attack and a lattice-based attack for this case, and we exploit a weakness in the parameter choice given by Gu to build an improved lattice-based attack.
    - An improvement of the Blum-Kalai-Wasserman (BKW) algorithm for solving LWE. In particular, we introduce a new reduction step and a new guessing procedure to the algorithm (a toy sketch of the basic BKW reduction step follows below). These allowed us to develop two implementations of the algorithm that are able to solve relatively large LWE instances. While the first efficiently uses only RAM memory and is fully parallelizable, the second exploits a combination of RAM and disk storage to overcome the memory limitations imposed by the RAM.
    - We fill a gap in pairing-based cryptography by providing concrete formulas to compute hash maps to G2, the second group in the pairing domain, for the Barreto-Lynn-Scott family of pairing-friendly elliptic curves.
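
    As promised in the BKW item above, a minimal toy sketch (my own illustration with NumPy, not the thesis's improved variant) of the basic BKW reduction step on LWE samples: bucket samples by a block of coordinates and subtract colliding pairs, zeroing that block at the cost of roughly doubling the error.

        import numpy as np
        from collections import defaultdict

        # Toy LWE instance; n, q, sigma, and the sample count are illustrative assumptions.
        n, q, sigma, m = 20, 11, 1.0, 50000
        rng = np.random.default_rng(4)
        s = rng.integers(0, q, size=n)
        A = rng.integers(0, q, size=(m, n))
        b = (A @ s + np.rint(rng.normal(0, sigma, size=m)).astype(int)) % q

        def bkw_reduce(A, b, lo, hi):
            """One BKW step: subtract pairs of samples that agree on coordinates [lo, hi)."""
            buckets = defaultdict(list)
            for i in range(len(A)):
                buckets[A[i, lo:hi].tobytes()].append(i)
            newA, newb = [], []
            for idx in buckets.values():
                for i, j in zip(idx[::2], idx[1::2]):     # pair up samples within each bucket
                    newA.append((A[i] - A[j]) % q)        # block [lo, hi) becomes all-zero
                    newb.append((b[i] - b[j]) % q)        # error roughly doubles in size
            return np.array(newA), np.array(newb)

        A1, b1 = bkw_reduce(A, b, 17, 20)     # zero out the last 3 coordinates
        print(len(A1), "samples remain after one reduction step")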

    Low-Complexity Cryptographic Hash Functions

    Cryptographic hash functions are efficiently computable functions that shrink a long input into a shorter output while achieving some of the useful security properties of a random function. The most common type of such hash function is the collision-resistant hash function (CRH), which prevents an efficient attacker from finding a pair of inputs on which the function has the same output.

    LPN Decoded

    We propose new algorithms with small memory consumption for the Learning Parity with Noise (LPN) problem, both classically and quantumly. Our goal is to predict the hardness of LPN depending on both of its parameters, the dimension k and the noise rate τ, as accurately as possible both in theory and practice. Therefore, we analyze our algorithms asymptotically, run experiments on medium-size parameters and provide bit-complexity predictions for large parameters. Our new algorithms are modifications and extensions of the simple Gaussian elimination algorithm with recent advanced techniques for decoding random linear codes. Moreover, we enhance our algorithms with the dimension reduction technique of Blum, Kalai, and Wasserman. This results in a hybrid algorithm that is capable of achieving the best currently known run time for any fixed amount of memory. On the asymptotic side, we achieve significant improvements in the run time exponents, both classically and quantumly. To the best of our knowledge, we provide the first quantum algorithms for LPN. Due to the small memory consumption of our algorithms, we are able to solve for the first time LPN instances of medium size, e.g. with k = 243, τ = 1/8, in only 15 days on 64 threads. Our algorithms result in bit-complexity predictions that require relatively large k for small τ. For instance, for small-noise LPN with τ = 1/√k, we predict 80-bit classical and only 64-bit quantum security for k ≥ 2048. For the common cryptographic choice k = 512, τ = 1/8, we achieve with limited memory 97-bit classical and 70-bit quantum security.
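
    A minimal sketch of the simple Gaussian elimination approach to LPN mentioned above (my own toy version with NumPy and illustrative parameters, not the paper's optimized implementation): repeatedly pick k samples, hope they are all noise-free, solve the resulting k x k linear system over F_2 for a candidate secret, and accept it if it fits fresh samples with an error rate well below 1/2.

        import numpy as np

        # Toy LPN instance; k, tau, and the sample count are illustrative assumptions.
        k, tau, m = 12, 0.125, 4000
        rng = np.random.default_rng(5)
        s = rng.integers(0, 2, size=k)
        A = rng.integers(0, 2, size=(m, k))
        b = (A @ s + (rng.random(m) < tau)) % 2

        def solve_gf2(M, y):
            """Solve M x = y over F_2 by Gaussian elimination; return None if M is singular."""
            M, y = M.copy(), y.copy()
            for col in range(len(y)):
                piv = next((r for r in range(col, len(y)) if M[r, col]), None)
                if piv is None:
                    return None
                M[[col, piv]] = M[[piv, col]]; y[[col, piv]] = y[[piv, col]]
                for r in range(len(y)):
                    if r != col and M[r, col]:
                        M[r] ^= M[col]; y[r] ^= y[col]
            return y

        def gauss_lpn(A, b, trials=5000, test_size=200):
            for _ in range(trials):
                idx = rng.choice(len(A), size=k, replace=False)   # guess k error-free samples
                cand = solve_gf2(A[idx], b[idx])
                if cand is None:
                    continue
                t = rng.choice(len(A), size=test_size, replace=False)
                err = np.mean((A[t] @ cand + b[t]) % 2)           # empirical error rate of the candidate
                if err < (tau + 0.5) / 2:                         # well below 1/2: likely the secret
                    return cand
            return None

        print("secret recovered:", np.array_equal(gauss_lpn(A, b), s))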

    New Algorithms for Solving LPN

    The intractability of solving the LPN problem serves as the security source of many lightweight/post-quantum cryptographic schemes proposed over the past decade. There are several algorithms available so far to fulfill the solving task. In this paper, we present further algorithmic improvements to the existing work. We describe the first efficient algorithm for the single-list k-sum problem which naturally arises from the various BKW reduction settings, propose a hybrid mode of BKW reduction, and show how to compute the matrix multiplication in the Gaussian elimination step with flexible and reduced time/memory complexities. The new algorithms yield the best known tradeoffs on the time/memory/data complexity curve and clearly compromise the 80-bit security of the LPN instances suggested in cryptographic schemes. Practical experiments on reduced LPN instances verified our results.
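
    To illustrate the single-list k-sum flavor of problem mentioned above (my own toy sketch for k = 4 over F_2 with NumPy, not the paper's algorithm): within a single list of binary vectors, find four entries whose XOR vanishes on a target window, using a two-level collision search of the kind that arises in BKW-style reductions.

        import numpy as np
        from collections import defaultdict

        # Toy single-list 4-sum instance; the list size and window widths are assumptions.
        rng = np.random.default_rng(6)
        n, m = 32, 4000
        L = rng.integers(0, 2, size=(m, n)).astype(np.uint8)
        w1, w2 = 8, 16        # first level zeroes bits [0, 8), second level bits [0, 16)

        def merge(tuples, vecs, lo, hi):
            """Collide entries that agree on bits [lo, hi); return index tuples and XORed sums."""
            buckets = defaultdict(list)
            for t, v in zip(tuples, vecs):
                buckets[v[lo:hi].tobytes()].append((t, v))
            out_t, out_v = [], []
            for entries in buckets.values():
                for (t1, v1), (t2, v2) in zip(entries[::2], entries[1::2]):
                    out_t.append(t1 + t2)         # indices of the combined list entries
                    out_v.append(v1 ^ v2)         # XOR is zero on [lo, hi)
            return out_t, out_v

        pairs_t, pairs_v = merge([(i,) for i in range(m)], list(L), 0, w1)   # pairs, zero on [0, 8)
        quads_t, quads_v = merge(pairs_t, pairs_v, w1, w2)                   # 4-tuples, zero on [0, 16)
        print(len(quads_t), "four-tuples whose XOR is zero on the first", w2, "bits")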

    Bringing Theory Closer to Practice in Post-quantum and Leakage-resilient Cryptography

    Modern cryptography has pushed forward the need for provable security. Whereas early cryptography relied only on heuristic assumptions and the secrecy of the designs, nowadays researchers try to make the security of schemes rely on mathematical problems that are believed hard to solve. When doing these proofs, the capabilities of potential adversaries are modeled formally. For instance, the black-box model assumes that an adversary does not learn anything from the inner state of a construction. While this assumption makes sense in some practical scenarios, it was shown that one can sometimes learn some information by other means, e.g., by timing how long the computation takes. In this thesis, we focus on two different areas of cryptography. In both parts, we first take a theoretical point of view to obtain a result. We then try to adapt our results so that they are easily usable for implementers and for researchers working in practical cryptography.

    In the first part of this thesis, we look at post-quantum cryptography, i.e., at cryptographic primitives that are believed secure even in the case that (reasonably big) quantum computers are built. We introduce HELEN, a new public-key cryptosystem based on the hardness of the learning parity with noise problem (LPN). To make our results more concrete, we suggest some practical instances which make the system easily implementable. As stated above, the design of cryptographic primitives usually relies on some well-studied hard problems. However, to suggest concrete parameters for these primitives, one needs to know the precise complexity of algorithms solving the underlying hard problem. In this thesis, we focus on two recent hard problems that became very popular in post-quantum cryptography: the learning with errors (LWE) problem and the learning with rounding (LWR) problem. We introduce a new algorithm that solves both problems and provide a careful complexity analysis so that these problems can be used to construct practical cryptographic primitives.

    In the second part, we look at leakage-resilient cryptography, which studies adversaries able to get some side-channel information from a cryptographic primitive. In the past, two main disjoint models were considered. The first one, the threshold probing model, assumes that the adversary can place a limited number of probes in a circuit and then learns all the values going through these probes. This model was used mostly by theoreticians as it allows very elegant and convenient proofs. The second model, the noisy-leakage model, assumes that every component of the circuit leaks but that the observed signal is noisy; typically, some Gaussian noise is added to it. According to experiments, this model closely depicts the real behaviour of circuits, and it is therefore cherished by the practical cryptographic community. In this thesis, we show that a proof in the first model implies a proof in the second model, which unifies the two models and reconciles both communities. We then look at this result from a more practical point of view and show how it can help in the process of evaluating the security of a chip based solely on the more standard mutual information metric.
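
    To make the two problems named in the first part concrete, here is a minimal sketch contrasting LWE and LWR sample generation (illustrative toy parameters with NumPy, not the thesis's algorithm): LWE adds an explicit small error, while LWR replaces it with the deterministic rounding error introduced by mapping from modulus q down to p.

        import numpy as np

        # Toy parameters (not secure); n, q, p, and sigma are illustrative assumptions.
        n, q, p, sigma = 16, 4096, 256, 3.0
        rng = np.random.default_rng(7)
        s = rng.integers(0, q, size=n)

        def lwe_sample():
            """LWE: b = <a, s> + e mod q, with an explicit small Gaussian-like error e."""
            a = rng.integers(0, q, size=n)
            e = int(np.rint(rng.normal(0, sigma)))
            return a, (int(a @ s) + e) % q

        def lwr_sample():
            """LWR: b = round((p/q) * (<a, s> mod q)) mod p; the 'error' is the rounding loss."""
            a = rng.integers(0, q, size=n)
            return a, int(np.rint((p / q) * (int(a @ s) % q))) % p

        print(lwe_sample())
        print(lwr_sample())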