
    Collision Resistant Hashing from Sub-exponential Learning Parity with Noise

    The Learning Parity with Noise (LPN) problem has recently found many cryptographic applications such as authentication protocols, pseudorandom generators/functions and even asymmetric tasks including public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. It however remains a long-standing open problem whether LPN implies collision resistant hash (CRH) functions. Based on the recent work of Applebaum et al. (ITCS 2017), we introduce a general framework for constructing CRH from LPN for various parameter choices. We show that, to mention a few notable parameter choices, CRH functions with constant (or even poly-logarithmic) shrinkage exist, and can be implemented using polynomial-size depth-3 circuits with NOT, (unbounded fan-in) AND and XOR gates, under any of the following hardness assumptions (for the two most common variants of LPN): 1) constant-noise LPN is $2^{n^{0.5+\epsilon}}$-hard for any constant $\epsilon>0$; 2) constant-noise LPN is $2^{\Omega(n/\log n)}$-hard given $q=\mathrm{poly}(n)$ samples; 3) low-noise LPN (of noise rate $1/\sqrt{n}$) is $2^{\Omega(\sqrt{n}/\log n)}$-hard given $q=\mathrm{poly}(n)$ samples. Our technical route LPN $\rightarrow$ bSVP $\rightarrow$ CRH is reminiscent of the known reductions for the large-modulus analogue, i.e., LWE $\rightarrow$ SIS $\rightarrow$ CRH, where the binary Shortest Vector Problem (bSVP) was recently introduced by Applebaum et al. (ITCS 2017) and enables CRH in a manner similar to Ajtai's CRH functions based on the Short Integer Solution (SIS) problem. Furthermore, under additional (arguably minimal) idealized assumptions such as small-domain random functions or random permutations (which trivially imply collision resistance), we still salvage a simple and elegant collision-resistance-preserving domain extender that is (asymptotically) more parallel and efficient than previously known. In particular, assuming $2^{n^{0.5+\epsilon}}$-hard constant-noise LPN or $2^{n^{0.25+\epsilon}}$-hard low-noise LPN, we obtain a polynomially shrinking collision resistant hash function that evaluates in parallel only a single layer of small-domain random functions (or random permutations) and produces their XOR sum as output.
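
    As a companion note to the abstract above, the following is a minimal Python sketch, under our own naming, of (a) an LPN sample oracle and (b) the "one layer of small-domain functions, XOR the outputs" evaluation pattern mentioned at the end. The block sizes, shrinkage, and the concrete choice of functions that actually yield collision resistance are fixed by the paper's parameters and are not reproduced here; `lpn_sample` and `xor_layer` are illustrative names, not from the paper.

    ```python
    import random

    def lpn_sample(s, tau):
        """One LPN sample (a, <a, s> + e mod 2) with Bernoulli(tau) noise e."""
        n = len(s)
        a = [random.randrange(2) for _ in range(n)]
        e = 1 if random.random() < tau else 0
        b = (sum(ai & si for ai, si in zip(a, s)) + e) % 2
        return a, b

    def xor_layer(blocks, funcs):
        """Evaluate one layer of small-domain functions in parallel and XOR
        their outputs (outputs are integers used as bit-masks)."""
        out = 0
        for block, f in zip(blocks, funcs):
            out ^= f(block)
        return out

    # Toy usage: three input blocks hashed through three "small-domain functions"
    # (here just random lookup tables; the paper uses structured choices).
    if __name__ == "__main__":
        secret = [random.randrange(2) for _ in range(16)]
        print(lpn_sample(secret, tau=0.125))
        tables = [{i: random.randrange(256) for i in range(16)} for _ in range(3)]
        print(xor_layer([3, 7, 11], [t.__getitem__ for t in tables]))
    ```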

    Low-Complexity Cryptographic Hash Functions

    Cryptographic hash functions are efficiently computable functions that shrink a long input into a shorter output while achieving some of the useful security properties of a random function. The most common type of such hash function is the collision resistant hash function (CRH), which prevents an efficient attacker from finding a pair of distinct inputs on which the function has the same output.
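
    For readers who want the property spelled out, one standard textbook formalization of collision resistance (generic, not specific to this paper) reads as follows: for a keyed family {H_k} and every probabilistic polynomial-time adversary A,

    ```latex
    \Pr_{k}\bigl[(x, x') \leftarrow A(k) \;:\; x \neq x' \,\wedge\, H_k(x) = H_k(x')\bigr] \;\le\; \mathrm{negl}(n).
    ```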

    Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN

    Learning parity with noise (LPN) is a notorious (average-case) hard problem that has been well studied in learning theory, coding theory and cryptography since the early 1990s. It further inspired the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and advanced cryptographic primitives. Unlike LWE, whose hardness is reducible from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result: the worst-case hardness of the nearest codeword problem (NCP) (on balanced linear codes) at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the extremely high noise rate $1/2 - 1/\mathrm{poly}(n)$. It remained open whether a worst-case to average-case reduction can be established for standard (constant-noise) LPN, ideally with sub-exponential hardness. We start with a simple observation that the hardness of high-noise LPN over large fields is implied by that of LWE with the same modulus, and is thus reducible from the worst-case hardness of lattice problems. We then revisit [BLVW19] and carry on the worst-case to average-case reduction for LPN, which is the main focus of this work. We first expand the underlying binary linear codes (of the worst-case NCP) to not only the balanced code considered in [BLVW19] but also another code (in some sense dual to the balanced code). At the core of our reduction is a new variant of the smoothing lemma (for both binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits tradeoffs for a wider spectrum of parameter choices. In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant $0<c<1$ the constant-noise LPN problem is $(T=2^{\Omega(n^{1-c})}, \epsilon=2^{-\Omega(n^{\min(c,1-c)})}, q=2^{\Omega(n^{\min(c,1-c)})})$-hard assuming that the NCP (on either code) at the low noise rate $\tau=n^{-c}$ is $(T'=2^{\Omega(\tau n)}, \epsilon'=2^{-\Omega(\tau n)}, m=2^{\Omega(\tau n)})$-hard in the worst case, where $T$, $\epsilon$, $q$ and $m$ are the time complexity, success rate, sample complexity, and codeword length, respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryption and collision resistant hash functions would need constant-noise LPN with $(T=2^{\omega(\sqrt{n})}, \epsilon=2^{-\omega(\sqrt{n})}, q=2^{\sqrt{n}})$-hardness (Yu et al., CRYPTO 2016 & ASIACRYPT 2019), which is almost (up to an arbitrary $\omega(1)$ factor in the exponent) what is reducible from the worst-case NCP when $c=0.5$. We leave it as an open problem whether the gap can be closed or whether there is a separation in place.
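
    To make the closing comparison concrete, instantiating the stated reduction at $c = 1/2$ gives, directly from the formulas quoted above,

    ```latex
    % Average-case constant-noise LPN hardness obtained at c = 1/2:
    (T, \epsilon, q) \;=\; \bigl(2^{\Omega(\sqrt{n})},\; 2^{-\Omega(\sqrt{n})},\; 2^{\Omega(\sqrt{n})}\bigr),
    % whereas the cited PKE/CRH applications need
    (T, \epsilon, q) \;=\; \bigl(2^{\omega(\sqrt{n})},\; 2^{-\omega(\sqrt{n})},\; 2^{\sqrt{n}}\bigr),
    % i.e. the same exponents up to an arbitrary \omega(1) factor.
    ```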

    Notes on Lattice-Based Cryptography

    Public-key Cryptography relies on the assumption that some computational problems are hard to solve. In 1994, Peter Shor showed that the two most used computational problems, namely the Discrete Logarithm Problem and the Integer Factoring Problem, are no longer hard to solve when using a quantum computer. Since then, researchers have worked on finding new computational problems that are resistant to quantum attacks to replace these two. Lattice-based Cryptography is the research field that employs cryptographic primitives involving hard problems defined on lattices, such as the Shortest Vector Problem and the Closest Vector Problem. The NTRU cryptosystem, published in 1998, was one of the first to be introduced in this field. The Learning With Errors (LWE) problem was introduced in 2005 by Regev, and it is now considered one of the most promising computational problems to be employed on a large scale in the near future. Studying its hardness and finding new and faster algorithms that solve it became a leading research topic in Cryptology. This thesis includes the following contributions to the field:
    - A non-trivial reduction of the Mersenne Low Hamming Combination Search Problem, the underlying problem of an NTRU-like cryptosystem, to Integer Linear Programming (ILP). In particular, we find a family of weak keys.
    - A concrete security analysis of Integer-RLWE, a hard computational problem variant of LWE introduced by Gu Chunsheng. We formalize a meet-in-the-middle attack and a lattice-based attack for this case, and we exploit a weakness in the parameter choice given by Gu to build an improved lattice-based attack.
    - An improvement of the Blum-Kalai-Wasserman (BKW) algorithm for solving LWE; in particular, we introduce a new reduction step and a new guessing procedure (a sketch of the classical BKW reduction step follows this abstract). These allowed us to develop two implementations of the algorithm that are able to solve relatively large LWE instances. While the first efficiently uses only RAM and is fully parallelizable, the second exploits a combination of RAM and disk storage to overcome the memory limitations of RAM.
    - We fill a gap in Pairing-based Cryptography by providing concrete formulas to compute hash maps to G2, the second group in the pairing domain, for the Barreto-Lynn-Scott family of pairing-friendly elliptic curves.
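
    For context on the item above, here is a minimal Python sketch of the classical Blum-Kalai-Wasserman (BKW) reduction step over GF(2); the thesis's new reduction step and guessing procedure are not shown, and `bkw_reduce` is an illustrative name of ours.

    ```python
    from collections import defaultdict

    def bkw_reduce(samples, block):
        """One classical BKW reduction step over GF(2).

        samples: list of (a, b), where a is a tuple of bits and b is a bit.
        block:   tuple of coordinate indices to eliminate.
        Samples agreeing on `block` are XORed against a bucket representative,
        zeroing those coordinates while (roughly) doubling the noise rate.
        """
        buckets = defaultdict(list)
        for a, b in samples:
            buckets[tuple(a[i] for i in block)].append((a, b))
        reduced = []
        for group in buckets.values():
            a0, b0 = group[0]
            for a, b in group[1:]:
                reduced.append((tuple(x ^ y for x, y in zip(a, a0)), b ^ b0))
        return reduced
    ```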

    Attacking post-quantum cryptography

    New Constructions of Collapsing Hashes

    Collapsing is a post-quantum strengthening of collision resistance, needed to lift many classical results to the quantum setting. Unfortunately, the only existing standard-model proofs of collapsing hashes require LWE. We construct the first collapsing hashes from the quantum hardness of any one of the following problems:
    - LPN in a variety of low-noise or high-hardness regimes, essentially matching what is known for collision resistance from LPN.
    - Finding cycles on exponentially-large expander graphs, such as those arising from isogenies on elliptic curves.
    - The optimal hardness of finding collisions in *any* hash function.
    - The *polynomial* hardness of finding collisions, assuming a certain plausible regularity condition on the hash.
    As an immediate corollary, we obtain the first statistically hiding post-quantum commitments and post-quantum succinct arguments (of knowledge) under the same assumptions. Our results are obtained by a general theorem which shows how to construct a collapsing hash $H'$ from a post-quantum collision-resistant hash function $H$, regardless of whether or not $H$ itself is collapsing, assuming $H$ satisfies a certain regularity condition we call semi-regularity.
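
    For context, an informal paraphrase of the standard collapsing property (due to Unruh); the paper's formal definition may differ in its details. A hash $H$ is collapsing if an efficient quantum adversary that outputs a hash value $y$ together with a register $X$ holding a superposition of preimages of $y$ cannot tell whether $X$ was subsequently measured in the computational basis or left untouched:

    ```latex
    % Informal collapsing condition (our paraphrase, not the paper's definition):
    \bigl|\, \Pr[A \Rightarrow 1 \mid X \text{ measured}] \;-\; \Pr[A \Rightarrow 1 \mid X \text{ not measured}] \,\bigr| \;\le\; \mathrm{negl}(n).
    ```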

    Asymptotically Efficient Lattice-Based Digital Signatures

    We present a general framework that converts certain types of linear collision-resistant hash functions into one-time signatures. Our generic construction can be instantiated based on both general and ideal (e.g., cyclic) lattices, and the resulting signature schemes are provably secure based on the worst-case hardness of approximating the shortest vector (and other standard lattice problems) in the corresponding class of lattices to within a polynomial factor. When instantiated with ideal lattices, the time complexity of the signing and verification algorithms, as well as the key and signature sizes, is almost linear (up to poly-logarithmic factors) in the dimension $n$ of the underlying lattice. Since no sub-exponential (in $n$) time algorithm is known to solve lattice problems in the worst case, even when restricted to ideal lattices, our construction gives a digital signature scheme with an essentially optimal performance/security trade-off.
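
    For orientation, a hedged sketch of the generic "linear hash to one-time signature" template the abstract alludes to; the paper's concrete scheme, message encoding, and parameters may differ. Let $h$ be a linear (module-homomorphic) collision-resistant hash over a ring, and let the secret key be a pair of short elements $(k, l)$.

    ```latex
    % Hypothetical template (not necessarily the paper's exact scheme):
    \mathrm{sk} = (k, l), \qquad \mathrm{pk} = (h(k), h(l)), \qquad
    \sigma = k \cdot m + l \quad (m \text{ encoded as a short ring element}).
    % Verification checks that \sigma is short and uses linearity of h:
    h(\sigma) \;\stackrel{?}{=}\; h(k) \cdot m + h(l).
    % A forgery \sigma' on a message m' with \sigma' \neq k \cdot m' + l
    % immediately yields a collision for h, since h(\sigma') = h(k \cdot m' + l).
    ```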

    Cryptographic Hashing From Strong One-Way Functions

    Constructing collision-resistant hash families (CRHFs) from one-way functions is a long-standing open problem and source of frustration in theoretical cryptography. In fact, there are strong negative results: black-box separations from one-way functions that are $2^{-(1-o(1))n}$-secure against polynomial time adversaries (Simon, EUROCRYPT '98) and even from indistinguishability obfuscation (Asharov and Segev, FOCS '15). In this work, we formulate a mild strengthening of exponentially secure one-way functions, and we construct CRHFs from such functions. Specifically, our security notion requires that every polynomial time algorithm has at most $2^{-n-\omega(\log n)}$ probability of inverting two independent challenges. More generally, we consider the problem of simultaneously inverting $k$ functions $f_1, \ldots, f_k$, which we say constitute a ``one-way product function'' (OWPF). We show that sufficiently hard OWPFs yield hash families that are multi-input correlation intractable (Canetti, Goldreich, and Halevi, STOC '98) with respect to all sparse (bounded arity) output relations. Additionally assuming indistinguishability obfuscation, we construct hash families that achieve a broader notion of correlation intractability, extending the recent work of Kalai, Rothblum, and Rothblum (CRYPTO '17). In particular, these families are sufficient to instantiate the Fiat-Shamir heuristic in the plain model for a natural class of interactive proofs. An interesting consequence of our results is a potential new avenue for bypassing black-box separations. In particular, proving (with necessarily non-black-box techniques) that parallel repetition amplifies the hardness of specific one-way functions -- for example, all one-way permutations -- suffices to directly bypass Simon's impossibility result.
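
    One natural way to write out the two-challenge inversion notion quoted above (our reading of the prose, not a verbatim definition from the paper): for every probabilistic polynomial-time adversary $A$,

    ```latex
    \Pr_{x_1, x_2 \leftarrow \{0,1\}^n}\Bigl[(x_1', x_2') \leftarrow A\bigl(f(x_1), f(x_2)\bigr) \;:\; f(x_1') = f(x_1) \,\wedge\, f(x_2') = f(x_2)\Bigr] \;\le\; 2^{-n-\omega(\log n)}.
    ```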

    The Quantum Decoding Problem

    One of the founding results of lattice-based cryptography is a quantum reduction from the Short Integer Solution problem to the Learning with Errors problem introduced by Regev. It has recently been pointed out by Chen, Liu and Zhandry that this reduction can be made more powerful by replacing the Learning with Errors problem with a quantum equivalent, where the errors are given in quantum superposition. In the context of codes, this can be adapted to a reduction from finding short codewords to a quantum decoding problem for random linear codes. We therefore consider in this paper the quantum decoding problem, where we are given a superposition of noisy versions of a codeword and we want to recover the corresponding codeword. When we measure the superposition, we get back the usual classical decoding problem, for which the best known algorithms in the constant-rate and constant-error-rate regime run in time exponential in the code length. However, we show here that when the noise rate is small enough, the quantum decoding problem can be solved in quantum polynomial time. Moreover, we show that the problem can in principle be solved quantumly (albeit not efficiently) for noise rates at which the associated classical decoding problem cannot be solved at all for information-theoretic reasons. We then revisit Regev's reduction in the context of codes. We show that using our algorithms for the quantum decoding problem in Regev's reduction matches the best known quantum algorithms for the short codeword problem. This shows in some sense the tightness of Regev's reduction when considering the quantum decoding problem and also paves the way for new quantum algorithms for the short codeword problem.
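
    Written out informally (our paraphrase, with notation chosen here): for a random linear code with generator matrix $G$ and an unknown message $x$, the quantum decoding problem provides a state of the form below, and the task is to recover $x$.

    ```latex
    % Input state of the quantum decoding problem (informal; notation ours):
    \lvert \psi \rangle \;=\; \sum_{e} \sqrt{\pi(e)}\, \lvert xG + e \rangle ,
    % where \pi is the noise distribution.  Measuring in the computational basis
    % collapses the state to a single noisy codeword xG + e, i.e. the usual
    % classical decoding problem.
    ```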