56 research outputs found

    Dissection-BKW

    The slightly subexponential algorithm of Blum, Kalai and Wasserman (BKW) provides a basis for assessing LPN/LWE security. However, its huge memory consumption strongly limits its practical applicability, thereby preventing precise security estimates for cryptographic LPN/LWE instantiations. We provide the first time-memory trade-offs for the BKW algorithm. For instance, we show how to solve LPN in dimension $k$ in time $2^{\frac{4}{3}\frac{k}{\log k}}$ and memory $2^{\frac{2}{3}\frac{k}{\log k}}$. Using the Dissection technique due to Dinur et al. (Crypto '12) and a novel, slight generalization thereof, we obtain fine-grained trade-offs for any available (subexponential) memory while the running time remains subexponential. Reducing the memory consumption of BKW below its running time also allows us to propose the first quantum version, QBKW, of the BKW algorithm.
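    To make the mechanics concrete, here is a minimal Python sketch (ours, with illustrative toy parameters, not code from the paper) of the classic BKW pipeline that the trade-offs build on: block-wise collision steps that zero out coordinate blocks at the cost of amplified noise, followed by majority voting on a single secret bit.

```python
import random

def lpn_samples(secret, num, tau):
    """Generate LPN samples (a, <a,s> + e mod 2) with noise rate tau."""
    n = len(secret)
    out = []
    for _ in range(num):
        a = [random.randrange(2) for _ in range(n)]
        b = sum(x * y for x, y in zip(a, secret)) % 2
        if random.random() < tau:
            b ^= 1
        out.append((a, b))
    return out

def bkw_reduce(samples, lo, hi):
    """One BKW step: XOR pairs of samples that agree on positions
    [lo, hi), zeroing that block (the noise rate grows with each step)."""
    buckets, reduced = {}, []
    for a, b in samples:
        key = tuple(a[lo:hi])
        if key in buckets:
            a2, b2 = buckets.pop(key)
            reduced.append(([x ^ y for x, y in zip(a, a2)], b ^ b2))
        else:
            buckets[key] = (a, b)
    return reduced

# Toy instance: recover s[0] by majority vote over fully reduced samples.
random.seed(1)
n, block, tau = 12, 4, 0.05
secret = [random.randrange(2) for _ in range(n)]
samples = lpn_samples(secret, 1 << 16, tau)
for lo in range(block, n, block):        # zero every block except the first
    samples = bkw_reduce(samples, lo, min(lo + block, n))
target = [1] + [0] * (n - 1)             # samples whose vector is the unit vector e_0
votes = [b for a, b in samples if a == target]
guess = int(sum(votes) * 2 > len(votes))
print("s[0] =", secret[0], "guess =", guess, "from", len(votes), "votes")
```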

    The Asymptotic Complexity of Coded-BKW with Sieving Using Increasing Reduction Factors

    The Learning with Errors problem (LWE) is one of the main candidates for post-quantum cryptography. At Asiacrypt 2017, coded-BKW with sieving, an algorithm combining the Blum-Kalai-Wasserman algorithm (BKW) with lattice sieving techniques, was proposed. In this paper, we improve that algorithm by using different reduction factors in different steps of the sieving part of the algorithm. In the Regev setting, where $q = n^2$ and $\sigma = n^{1.5}/(\sqrt{2\pi}\log_2^2 n)$, the asymptotic complexity is $2^{0.8917n}$, improving on the previously best complexity of $2^{0.8927n}$. When a quantum computer is assumed or the number of samples is limited, we get a similar level of improvement.
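    For a rough sense of what the exponent change buys, the back-of-the-envelope computation below (ours, not from the paper) evaluates the speedup factor $2^{(0.8927-0.8917)n}$ at a few dimensions.

```python
# Speedup factor of the new exponent 0.8917 over the previous 0.8927.
for n in (256, 512, 1024):
    old, new = 2 ** (0.8927 * n), 2 ** (0.8917 * n)
    print(f"n = {n:4d}: factor 2^({(0.8927 - 0.8917) * n:.2f}) = {old / new:.2f}")
```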

    A Non-heuristic Approach to Time-space Tradeoffs and Optimizations for BKW

    Blum, Kalai and Wasserman (JACM 2003) gave the first sub-exponential algorithm to solve the Learning Parity with Noise (LPN) problem. In particular, consider the LPN problem with constant noise $\mu=(1-\gamma)/2$. BKW solves it with space complexity $2^{\frac{(1+\epsilon)n}{\log n}}$ and time/sample complexity $2^{\frac{(1+\epsilon)n}{\log n}}\cdot 2^{O(n^{\frac{1}{1+\epsilon}})}$ for small constant $\epsilon\to 0^+$. We propose a variant of BKW by tweaking Wagner's generalized birthday problem (Crypto 2002) and adapting the technique to a $c$-ary tree structure. In summary, our algorithm achieves the following:

    (Time-space tradeoff). We obtain the same time-space tradeoffs for LPN and LWE as those given by Esser et al. (Crypto 2018), but without resorting to any heuristics. For any $2\leq c\in\mathbb{N}$, our algorithm solves the LPN problem with time/sample complexity $2^{\frac{\log c(1+\epsilon)n}{\log n}}\cdot 2^{O(n^{\frac{1}{1+\epsilon}})}$ and space complexity $2^{\frac{\log c(1+\epsilon)n}{(c-1)\log n}}$, where one can use Grover's quantum algorithm or Dinur et al.'s dissection technique (Crypto 2012) to further accelerate/optimize the time complexity.

    (Time/sample optimization). A further adjusted variant of our algorithm solves the LPN problem with sample, time and space complexities all kept at $2^{\frac{(1+\epsilon)n}{\log n}}$ for $\epsilon\to 0^+$, saving a factor of $2^{\Omega(n^{\frac{1}{1+\epsilon}})}$ in time/sample compared to the original BKW and the variant of Devadas et al. (TCC 2017). This benefits from a careful analysis of the error distribution among the correlated candidates, and therefore avoids repeating the same process $2^{\Omega(n^{\frac{1}{1+\epsilon}})}$ times on fresh new samples.

    (Sample reduction). Our algorithm provides an alternative to Lyubashevsky's BKW variant (RANDOM 2005) for LPN with a restricted number of samples. In particular, given $Q=n^{1+\epsilon}$ (resp., $Q=2^{n^{\epsilon}}$) samples, our algorithm saves a factor of $2^{\Omega(n)/(\log n)^{1-\kappa}}$ (resp., $2^{\Omega(n^{\kappa})}$) for constant $\kappa \to 1^-$ in running time while consuming roughly the same space, compared with Lyubashevsky's algorithm.

    We seek to bridge the gaps between theoretical and heuristic LPN solvers, but take a different approach from Devadas et al. (TCC 2017): we exploit weak yet sufficient conditions (e.g., pairwise independence), and the analysis uses only elementary tools (e.g., Chebyshev's inequality).
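    As a quick numerical reading of the stated trade-off (our tabulation of the abstract's formulas), the leading coefficients of the time and space exponents, relative to the common factor $\frac{(1+\epsilon)n}{\log n}$ and ignoring the $2^{O(n^{1/(1+\epsilon)})}$ term, are $\log c$ and $\frac{\log c}{c-1}$ respectively; larger arity $c$ trades time for memory.

```python
from math import log2

# Leading exponent coefficients for the c-ary variant: time ~ log2(c),
# space ~ log2(c)/(c-1), both times the common (1+eps)n/log n factor.
print(f"{'c':>3} {'time coeff':>11} {'space coeff':>12}")
for c in (2, 3, 4, 8, 16):
    print(f"{c:>3} {log2(c):>11.3f} {log2(c) / (c - 1):>12.3f}")
```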

    Improvements on making BKW practical for solving LWE

    The learning with errors (LWE) problem is one of the main mathematical foundations of post-quantum cryptography. One of the main families of algorithms for solving LWE is the Blum–Kalai–Wasserman (BKW) algorithm. This paper presents new improvements of BKW-style algorithms for solving LWE instances. We target minimum concrete complexity, and we introduce a new reduction step where we partially reduce the last position in an iteration and finish the reduction in the next iteration, allowing non-integer step sizes. We also introduce a new procedure in the secret recovery by mapping the problem to binary problems and applying the fast Walsh–Hadamard transform. The complexity of the resulting algorithm compares favorably with all other previous approaches, including lattice sieving. We additionally show the steps of implementing the approach for large LWE problem instances. We provide two implementations of the algorithm: one RAM-based approach that is optimized for speed, and one file-based approach that overcomes RAM limitations by using file-based storage.
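    The Walsh–Hadamard-based recovery step can be illustrated generically. The sketch below (our toy parameters and code, not the paper's implementation) recovers a binary secret by accumulating $(-1)^b$ per observed vector $a$ and then computing all candidate correlations at once with a fast Walsh–Hadamard transform.

```python
import random

def fwht(v):
    """In-place fast Walsh-Hadamard transform of a length-2^k list."""
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

# Toy binary instance: b = <a, s> + e (mod 2) with noise rate tau.
random.seed(2)
k, tau, m = 10, 0.2, 20000
secret = random.randrange(1 << k)
table = [0] * (1 << k)
for _ in range(m):
    a = random.randrange(1 << k)
    b = bin(a & secret).count("1") & 1
    if random.random() < tau:
        b ^= 1
    table[a] += 1 - 2 * b            # accumulate (-1)^b per value of a

# After the FWHT, entry s' holds sum over samples of (-1)^{b + <a, s'>};
# the true secret maximizes this correlation.
scores = fwht(table)
guess = max(range(1 << k), key=lambda s: scores[s])
print("recovered:", guess == secret)
```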

    Notes on Lattice-Based Cryptography

    Public-key cryptography relies on the assumption that some computational problems are hard to solve. In 1994, Peter Shor showed that the two most widely used computational problems, namely the Discrete Logarithm Problem and the Integer Factoring Problem, are no longer hard to solve when using a quantum computer. Since then, researchers have worked on finding new computational problems that are resistant to quantum attacks to replace these two. Lattice-based cryptography is the research field that employs cryptographic primitives involving hard problems defined on lattices, such as the Shortest Vector Problem and the Closest Vector Problem. The NTRU cryptosystem, published in 1998, was one of the first to be introduced in this field. The Learning With Errors (LWE) problem was introduced in 2005 by Regev, and it is now considered one of the most promising computational problems to be employed on a large scale in the near future. Studying its hardness and finding new and faster algorithms that solve it became a leading research topic in cryptology.
    This thesis includes the following contributions to the field:
    - A non-trivial reduction of the Mersenne Low Hamming Combination Search Problem, the underlying problem of an NTRU-like cryptosystem, to Integer Linear Programming (ILP). In particular, we find a family of weak keys.
    - A concrete security analysis of Integer-RLWE, a hard computational problem variant of LWE introduced by Gu Chunsheng. We formalize a meet-in-the-middle attack and a lattice-based attack for this case, and we exploit a weakness in the parameter choice given by Gu to build an improved lattice-based attack.
    - An improvement of the Blum-Kalai-Wasserman algorithm for solving LWE. In particular, we introduce a new reduction step and a new guessing procedure to the algorithm. These allowed us to develop two implementations of the algorithm that are able to solve relatively large LWE instances. While the first one efficiently uses only RAM memory and is fully parallelizable, the second one exploits a combination of RAM and disk storage to overcome the memory limitations imposed by RAM.
    - We fill a gap in pairing-based cryptography by providing concrete formulas to compute hash maps to G2, the second group in the pairing domain, for the Barreto-Lynn-Scott family of pairing-friendly elliptic curves.

    Practically Solving LPN in High Noise Regimes Faster Using Neural Networks

    We conduct a systematic study of solving the learning parity with noise (LPN) problem using neural networks. Our main contribution is designing families of two-layer neural networks that practically outperform classical algorithms in high-noise, low-dimension regimes. We consider three settings where the number of LPN samples is abundant, very limited, or in between. In each setting we provide neural network models that solve LPN as fast as possible. For some settings we are also able to provide theories that explain the rationale behind the design of our models. Compared with the previous experiments of Esser, Kübler, and May (CRYPTO 2017), for dimension $n = 26$ and noise rate $\tau = 0.498$, the "Guess-then-Gaussian-elimination" algorithm takes 3.12 days on 64 CPU cores, whereas our neural network algorithm takes 66 minutes on 8 GPUs. Our algorithm can also be plugged into hybrid algorithms for solving middle- or large-dimension LPN instances.
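    The following sketch conveys the flavor of the approach on toy parameters only; the architecture, hyperparameters, and the unit-vector extraction heuristic are our illustrative assumptions, not the authors' models (which target far higher noise rates).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, tau, m = 16, 0.05, 200_000        # toy parameters, far from tau = 0.498
secret = rng.integers(0, 2, n)

A = rng.integers(0, 2, (m, n))                    # LPN vectors a
b = (A @ secret + (rng.random(m) < tau)) % 2      # labels <a,s> + e mod 2

# A two-layer network (single hidden layer) trained to predict b from a;
# whether training succeeds depends heavily on n, tau and hyperparameters.
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=50)
clf.fit(A, b)

# Heuristic extraction (our assumption): on the unit vector e_i we have
# <e_i, s> = s_i, so the net's prediction on e_i should approximate bit s_i.
guess = clf.predict(np.eye(n, dtype=int))
print("secret bits recovered:", int((guess == secret).sum()), "of", n)
```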

    Optimal Merging in Quantum k-xor and k-sum Algorithms

    The k-xor or Generalized Birthday Problem aims at finding, given k lists of bit-strings, a k-tuple among them XORing to 0. If the lists are unbounded, the best classical (exponential) time complexity has stood unbeaten since Wagner's CRYPTO 2002 paper. If the lists are bounded (of the same size) and such that there is a single solution, the dissection algorithms of Dinur et al. (CRYPTO 2012) improve the memory usage over a simple meet-in-the-middle approach. In this paper, we study quantum algorithms for the k-xor problem. With unbounded lists and quantum access, we improve previous work by Grassi et al. (ASIACRYPT 2018) for almost all k. Next, we extend our study to lists of any size and with classical access only. We define a set of "merging trees" that represent the best known strategies for quantum and classical merging in k-xor algorithms, and prove that our method is optimal among these. Our complexities are confirmed by a Mixed Integer Linear Program that computes the best strategy for a given k-xor problem. All our algorithms also apply when considering modular additions instead of bitwise xors. This framework enables us to give new improved quantum k-xor algorithms for all k and list sizes. Applications include the subset-sum problem, LPN with limited memory, and the multiple-encryption problem.
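    To fix ideas, the classical two-level merging that these trees generalize can be demonstrated on a toy 4-xor instance (parameters and code ours): merge pairs of lists on the low bits, then match the partial results on the remaining bits.

```python
import random
from collections import defaultdict

random.seed(3)
n, size = 24, 1 << 8                 # 24-bit words, four lists of 2^8 elements
lists = [[random.getrandbits(n) for _ in range(size)] for _ in range(4)]

def merge(L1, L2, mask):
    """Wagner-style merge: XORs x ^ y of pairs agreeing on the masked bits."""
    index = defaultdict(list)
    for y in L2:
        index[y & mask].append(y)
    return [x ^ y for x in L1 for y in index[x & mask]]

low = (1 << 8) - 1                   # level 1: cancel the 8 low bits
L12 = merge(lists[0], lists[1], low)
L34 = merge(lists[2], lists[3], low)
full = (1 << n) - 1                  # root: cancel all remaining bits
solutions = merge(L12, L34, full)    # each 0 entry is a 4-tuple XORing to 0
print("4-xor solutions found:", len(solutions))  # ~1 expected at these sizes
```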

    An Algorithmic Framework for the Generalized Birthday Problem

    The generalized birthday problem (GBP) was introduced by Wagner in 2002 and has been shown to have many applications in cryptanalysis. In its typical variant, we are given access to a function $H:\{0,1\}^{\ell} \rightarrow \{0,1\}^n$ (whose specification depends on the underlying problem) and an integer $K>0$. The goal is to find $K$ distinct inputs to $H$ (denoted by $\{x_i\}_{i=1}^{K}$) such that $\sum_{i=1}^{K}H(x_i) = 0$. Wagner's K-tree algorithm solves the problem in time and memory complexities of about $N^{1/(\lfloor \log K \rfloor + 1)}$ (where $N = 2^n$). Two important open problems raised by Wagner were (1) devise efficient time-memory tradeoffs for GBP, and (2) reduce the complexity of the K-tree algorithm for $K$ which is not a power of 2. In this paper, we make progress in both directions. First, we improve the best known GBP time-memory tradeoff curve (published independently by Nikolić and Sasaki and by Biryukov and Khovratovich) for all $K \geq 8$ from $T^2M^{\lfloor \log K \rfloor - 1} = N$ to $T^{\lceil (\log K)/2 \rceil + 1}M^{\lfloor (\log K)/2 \rfloor} = N$, applicable for a large range of parameters. For example, for $K = 8$ we improve the best previous tradeoff from $T^2M^2 = N$ to $T^3M = N$, and for $K = 32$ the improvement is from $T^2M^4 = N$ to $T^4M^2 = N$. Next, we consider values of $K$ which are not powers of 2 and show that in many cases even more efficient time-memory tradeoff curves can be obtained. Most interestingly, for $K \in \{6,7,14,15\}$ we present algorithms with the same time complexities as the K-tree algorithm, but with significantly reduced memory complexities. In particular, for $K=6$ the K-tree algorithm achieves $T=M=N^{1/3}$, whereas we obtain $T=N^{1/3}$ and $M=N^{1/6}$. For $K=14$, Wagner's algorithm achieves $T=M=N^{1/4}$, while we obtain $T=N^{1/4}$ and $M=N^{1/8}$. This gives the first significant improvement over the K-tree algorithm for small $K$. Finally, we optimize our techniques for several concrete GBP instances and show how to solve some of them with improved time and memory complexities compared to the state of the art. Our results are obtained using a framework that combines several algorithmic techniques, such as variants of the Schroeppel-Shamir algorithm for solving knapsack problems (devised in works by Howgrave-Graham and Joux and by Becker, Coron and Joux) and dissection algorithms (published by Dinur, Dunkelman, Keller and Shamir). It then builds on these techniques to develop new GBP algorithms.
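    One way to read the improvement is to fix a memory budget $M = N^{\mu}$ and compare the time exponents the two curves imply. The short computation below (ours, derived directly from the curves above) reproduces the $K=8$ and $K=32$ examples.

```python
from math import ceil, floor, log2

def time_exponents(K, mu):
    """Time exponents t (T = N^t) at memory M = N^mu under both curves."""
    lk = log2(K)
    old = (1 - mu * (floor(lk) - 1)) / 2          # T^2 M^{floor(log K)-1} = N
    new = (1 - mu * floor(lk / 2)) / (ceil(lk / 2) + 1)
    return old, new

for K in (8, 32):
    for mu in (0.05, 0.10):
        old, new = time_exponents(K, mu)
        print(f"K = {K:2d}, M = N^{mu:.2f}: T = N^{old:.3f} (previous) "
              f"vs N^{new:.3f} (this work)")
```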