130 research outputs found
On Iterative Collision Search for LPN and Subset Sum
Iterative collision search procedures play a key role in developing combinatorial algorithms for the subset sum and learning parity with noise (LPN) problems.
In both scenarios, the single-list pair-wise iterative collision search finds the most solutions and offers the best efficiency.
However, due to its complex probabilistic structure, no rigorous analysis of it appears to be available, to the best of our knowledge.
As a result, theoretical works often resort to overly constrained and sub-optimal iterative collision search variants in exchange for analytic simplicity.
In this paper, we present rigorous analysis for the single-list pair-wise iterative collision search method and its applications in subset sum and LPN.
In the LPN literature, the method is known as the LF2 heuristic.
Besides LF2, we also present rigorous analysis of other LPN solving heuristics and show that they work well when combined with LF2.
Putting everything together, we significantly narrow the gap between theoretical and heuristic algorithms for LPN.
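The single-list pair-wise step at the heart of LF2 can be illustrated with a minimal sketch (an illustration of the general idea, not the authors' implementation): bucket vectors on a block of b bits, then XOR every colliding pair, so each output vector has that block zeroed.

```python
import random
from collections import defaultdict

def lf2_step(vectors, b):
    """One LF2-style reduction step: bucket n-bit vectors (as ints) by
    their b least-significant bits, then XOR every pair inside a bucket.
    Each output vector has its low b bits cancelled to zero."""
    buckets = defaultdict(list)
    for v in vectors:
        buckets[v & ((1 << b) - 1)].append(v)
    out = []
    for group in buckets.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                out.append(group[i] ^ group[j])
    return out

random.seed(0)
n, b = 32, 8
vectors = [random.getrandbits(n) for _ in range(2000)]
reduced = lf2_step(vectors, b)
# every XOR of a colliding pair is zero on the low b bits
assert all(v & ((1 << b) - 1) == 0 for v in reduced)
```

Taking all pairs within each bucket (rather than disjoint pairs) is exactly what makes LF2 produce the most output vectors per iteration, and also what complicates its probabilistic analysis: the outputs are correlated.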
Some Notes on Code-Based Cryptography
This thesis presents new cryptanalytic results in several areas of code-based cryptography, and additionally investigates the possibility of using convolutional codes in code-based public-key cryptography. The first algorithm we present is an information-set decoding algorithm targeting the problem of decoding random linear codes. We apply the generalized birthday technique to information-set decoding, improving the computational complexity over previous approaches. Next, we present a new version of the McEliece public-key cryptosystem based on convolutional codes. The original construction uses Goppa codes, an algebraic code family with a well-defined structure. The two proposed constructions instead use large parts of randomly generated parity checks; increasing the entropy of the generator matrix presumably makes structured attacks more difficult. Following this, we analyze a McEliece variant based on quasi-cyclic MDPC codes. We show that when the underlying code construction has an even dimension, the system is susceptible to what we call a squaring attack. Our results show that the new squaring attack allows for great complexity improvements over previous attacks on this particular McEliece construction. Then, we introduce two new techniques for finding low-weight polynomial multiples. First, we propose a general technique based on a reduction to the minimum-distance problem in coding theory, which increases the multiplicity of the low-weight codeword by extending the code. We use this algorithm to break some of the instances used by the TCHo cryptosystem. Second, we propose an algorithm for finding weight-4 polynomial multiples. By combining the generalized birthday technique with an increase of the multiplicity of the low-weight polynomial multiple, we obtain a much better complexity than previously known algorithms. Lastly, two new algorithms for the learning parity with noise (LPN) problem are proposed.
The first is a general algorithm applicable to any instance of LPN. It performs favorably compared to previously known algorithms, breaking the 80-bit security of the widely used (512, 1/8) instance. The second focuses on LPN instances over a polynomial ring where the generator polynomial is reducible. Using this algorithm, we break an 80-bit-security instance of the Lapin cryptosystem.
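The generalized birthday (k-tree) technique used throughout the thesis can be shown on a toy 4-list instance, with random integers standing in for the actual candidate values (a sketch of the technique, not the thesis's parameterization): merge pairs of lists so that a block of low bits cancels, then merge the results on the remaining bits to find a 4-way XOR collision.

```python
import random
from collections import defaultdict

def merge_on_low_bits(A, B, l):
    """Wagner-style merge: XOR pairs (a, b) from A x B that agree on the
    l low bits, so every output is zero on those bits."""
    mask = (1 << l) - 1
    table = defaultdict(list)
    for a in A:
        table[a & mask].append(a)
    return [a ^ b for b in B for a in table[b & mask]]

random.seed(1)
n, l = 24, 8
lists = [[random.getrandbits(n) for _ in range(512)] for _ in range(4)]
L12 = merge_on_low_bits(lists[0], lists[1], l)   # low 8 bits cancelled
L34 = merge_on_low_bits(lists[2], lists[3], l)   # low 8 bits cancelled
# final merge on all n bits: each hit certifies x1 ^ x2 ^ x3 ^ x4 = 0
solutions = merge_on_low_bits(L12, L34, n)
assert solutions and all(s == 0 for s in solutions)
```

With list size 2^9 and an 8-bit first-level constraint, each intermediate list has about 2^10 entries and the final merge yields roughly 16 collisions in expectation, matching the usual k-tree parameter balancing.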
Statistical Decoding 2.0: Reducing Decoding to LPN
The security of code-based cryptography relies primarily on the hardness of generic decoding with linear codes. The best generic decoding algorithms are all improvements of an old algorithm due to Prange, known under the name of information set decoders (ISD). A while ago, a generic decoding algorithm which does not belong to this family was proposed: statistical decoding. It is a randomized algorithm that requires the computation of a large set of parity checks of moderate weight, and uses some kind of majority voting on these equations to recover the error. This algorithm was long forgotten because even its best variants performed poorly compared to the simplest ISD algorithm. We revisit this old algorithm by using parity-check equations in a more general way. Here the parity checks are used to obtain LPN samples whose secret is part of the error, and the LPN noise is related to the weight of the parity checks we produce. The corresponding LPN problem is then solved by standard Fourier techniques. By properly choosing the method of producing these low-weight equations and the size of the LPN problem, we are able to significantly outperform information set decoders at code rates below 0.3. For the first time in 60 years, this gives a better decoding algorithm, over a significant range of parameters, that does not belong to the ISD family.
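The "standard Fourier techniques" step can be sketched at toy scale: for a small secret, score every candidate s by its correlation with the observed labels; the true secret maximizes the bias. (A fast Walsh-Hadamard transform computes all scores at once; the exhaustive loop below is an illustrative stand-in, and all parameters are made up for the example.)

```python
import random

def solve_lpn_fourier(samples, k):
    """Exhaustive 'Fourier' solver for a k-bit LPN secret: for every
    candidate s, count how often <a, s> matches the observed label.
    The true secret maximizes this correlation score."""
    best, best_score = None, -1
    for s in range(1 << k):
        score = sum(1 for a, b in samples
                    if bin(a & s).count("1") % 2 == b)
        if score > best_score:
            best, best_score = s, score
    return best

random.seed(2)
k, secret, tau = 10, 0b1011010011, 0.125     # noise rate 1/8
samples = []
for _ in range(2000):
    a = random.getrandbits(k)
    noise = 1 if random.random() < tau else 0
    samples.append((a, bin(a & secret).count("1") % 2 ^ noise))
assert solve_lpn_fourier(samples, k) == secret
```

The sample count needed grows as the LPN noise rate approaches 1/2, which is exactly why the weight of the produced parity checks governs the cost of this stage.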
Improved Fast Correlation Attacks on the Sosemanuk Stream Cipher
In this paper, we present a new algorithm for fast correlation attacks on stream ciphers, with improved cryptanalysis results on the Sosemanuk stream cipher, one of the 7 finalists of the eSTREAM project in 2008. The new algorithm exploits the direct sum construction of covering codes in the decoding phase, which approximates random vectors by a nearest codeword in a linear code. The new strategy provides large flexibility for the adversary and can reduce the time/memory/data complexities significantly. As a case study, we carefully revisit Sosemanuk and demonstrate a state recovery attack with a time complexity of 2^134.8, which is 2^20 times faster than previously achievable by the same kind of attack and is the fastest among all known attacks so far. Our result indicates that keys longer than 135 bits are inefficient, and shows for the first time that the security margin of Sosemanuk is around 2^8 relative to the 128-bit security level.
A Non-heuristic Approach to Time-space Tradeoffs and Optimizations for BKW
Blum, Kalai and Wasserman (JACM 2003) gave the first sub-exponential algorithm to solve the Learning Parity with Noise (LPN) problem. In particular, consider the LPN problem with constant noise . BKW solves it with space complexity and time/sample complexity for small constant . We propose a variant of BKW by tweaking Wagner's generalized birthday problem (Crypto 2002) and adapting the technique to a -ary tree structure. In summary, our algorithm achieves the following:
(Time-space tradeoff). We obtain the same time-space tradeoffs for LPN and LWE as those given by Esser et al. (Crypto 2018), but without resorting to any heuristics. For any , our algorithm solves the LPN problem with time/sample complexity and space complexity , where one can use Grover's quantum algorithm or Dinur et al.'s dissection technique (Crypto 2012) to further accelerate/optimize the time complexity.
(Time/sample optimization). A further adjusted variant of our algorithm solves the LPN problem with sample, time and space complexities all kept at for , saving factor in time/sample compared to the original BKW, and the variant of Devadas et al. (TCC 2017). This benefits from a careful analysis of the error distribution among the correlated candidates, and therefore avoids repeating the same process times on fresh new samples.
(Sample reduction). Our algorithm provides an alternative to Lyubashevsky's BKW variant (RANDOM 2005) for LPN with a restricted number of samples. In particular, given (resp., ) samples, our algorithm saves a factor of (resp., ) for constant in running time while consuming roughly the same space, compared with Lyubashevsky's algorithm.
We seek to bridge the gaps between theoretical and heuristic LPN solvers, but take a different approach from Devadas et al. (TCC 2017). We exploit weak yet sufficient conditions (e.g., pairwise independence), and our analysis uses only elementary tools (e.g., Chebyshev's inequality).
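The basic BKW reduction that all of these variants build on can be sketched as follows (a toy, noiseless illustration of the classic reduction step, not the paper's optimized variant): bucket samples by a block of coordinates of a, then XOR within each bucket against one representative, zeroing that block at the cost of doubling the noise rate's bias loss per the piling-up lemma.

```python
import random
from collections import defaultdict

def bkw_reduce(samples, lo, hi):
    """One classic BKW reduction round: bucket samples (a, b) by bits
    lo..hi-1 of a, then XOR each sample in a bucket against one
    representative. The outputs have that block of a zeroed."""
    mask = ((1 << (hi - lo)) - 1) << lo
    buckets = defaultdict(list)
    for a, b in samples:
        buckets[a & mask].append((a, b))
    out = []
    for group in buckets.values():
        ref_a, ref_b = group[0]          # representative; discarded afterwards
        for a, b in group[1:]:
            out.append((a ^ ref_a, b ^ ref_b))
    return out

random.seed(3)
k = 16
secret = random.getrandbits(k)
samples = [(a, bin(a & secret).count("1") % 2)   # noiseless, for illustration
           for a in (random.getrandbits(k) for _ in range(5000))]
reduced = bkw_reduce(samples, 12, 16)
# the top 4 bits of every reduced a-vector are now zero...
assert all(a >> 12 == 0 for a, _ in reduced)
# ...and labels stay consistent with the secret on the remaining bits
assert all(bin(a & secret).count("1") % 2 == lbl for a, lbl in reduced)
```

Iterating this step block by block reduces the problem to a small number of secret bits, which is where the heuristic-versus-rigorous gap appears: all-pairs merging (as in LF2) yields more samples but correlated ones, while the representative-based merge above is easier to analyze.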
Dissection-BKW
The slightly subexponential algorithm of Blum, Kalai and Wasserman (BKW) provides a basis for assessing LPN/LWE security. However, its huge memory consumption strongly limits its practical applicability, thereby preventing precise security estimates for cryptographic LPN/LWE instantiations.
We provide the first time-memory trade-offs for the BKW algorithm. For instance, we show how to solve LPN in dimension in time and memory . Using the Dissection technique due to Dinur et al. (Crypto ’12) and a novel, slight generalization thereof, we obtain fine-grained trade-offs for any available (subexponential) memory while the running time remains subexponential.
Reducing the memory consumption of BKW below its running time also allows us to propose QBKW, a first quantum version of the BKW algorithm.
Asymptotics and Improvements of Sieving for Codes
A recent work by Guo, Johansson, and Nguyen (Eprint '23) proposes a promising adaptation of sieving techniques from lattices to codes, in particular claiming concrete cryptanalytic improvements on various schemes. The core of their algorithm reduces to a Near Neighbor Search (NNS) problem, for which they devise an ad-hoc approach. In this work, we aim for a better theoretical understanding of this approach. First, we provide an asymptotic analysis, which is not present in the original paper. Second, we propose a more systematic use of well-established NNS machinery known as Locality Sensitive Hashing and Filtering (LSH/F), an approach that has been applied very successfully to sieving over lattices. We thus establish the first baseline for the sieving approach, with a decoding complexity of for the conventional worst-case parameters (full-distance decoding, where complexity is maximized over all code rates). Our cumulative improvements eventually enable us to lower the hardest-parameter decoding complexity for SievingISD algorithms to . This approach outperforms the BJMM algorithm (Eurocrypt '12) but falls behind the most advanced conventional ISD approach by Both and May (PQCrypto '18). As for lattices, we find the Random-Spherical-Code-Product (RPC) to give the best asymptotic complexity. We also consider an alternative that seems specific to the Hamming sphere, which we believe could be of practical interest, as it plausibly hides smaller sub-exponential overheads than RPC.
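The NNS primitive behind sieving can be illustrated with the simplest Hamming-space LSH, bit sampling (deliberately simpler than the RPC-style filters discussed above, and with made-up toy parameters): hash each word by a random subset of its bit positions, so that close pairs collide far more often than distant ones.

```python
import random

def bit_sample_hash(v, positions):
    """Bit-sampling LSH for Hamming space: project an n-bit word onto a
    fixed subset of bit positions; nearby vectors collide more often."""
    return tuple((v >> p) & 1 for p in positions)

def collision_rate(n, dist, sampled_bits=8, trials=2000):
    """Empirical probability that a word and a copy with `dist` bits
    flipped land in the same LSH bucket."""
    hits = 0
    for _ in range(trials):
        positions = random.sample(range(n), sampled_bits)
        x = random.getrandbits(n)
        y = x
        for p in random.sample(range(n), dist):   # flip dist distinct bits
            y ^= 1 << p
        hits += bit_sample_hash(x, positions) == bit_sample_hash(y, positions)
    return hits / trials

random.seed(4)
n = 64
near, far = collision_rate(n, 4), collision_rate(n, 32)
# pairs at distance 4 collide far more often than pairs at distance 32
assert near > far
```

A sieve repeatedly buckets its list with such hashes and only tests pairs within a bucket, which is what turns the quadratic all-pairs search into a subquadratic NNS step.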
Correlated Pseudorandomness from the Hardness of Quasi-Abelian Decoding
Secure computation often benefits from the use of correlated randomness to
achieve fast, non-cryptographic online protocols. A recent paradigm put forth
by Boyle et al. (CCS 2018, Crypto 2019) showed how pseudorandom correlation
generators (PCGs) can be used to generate large amounts of useful forms of
correlated (pseudo)randomness, using minimal interactions followed solely by
local computations, yielding silent secure two-party computation protocols
(protocols where the preprocessing phase requires almost no communication).
An additional property called programmability allows extending this to build
N-party protocols. However, known constructions of programmable PCGs can only
produce OLEs over large fields, and rely on the rather new splittable
Ring-LPN assumption.
In this work, we overcome both limitations. To this end, we introduce the
quasi-abelian syndrome decoding problem (QA-SD), a family of assumptions which
generalises the well-established quasi-cyclic syndrome decoding assumption.
Building upon QA-SD, we construct new programmable PCGs for OLEs over any
field with . Our analysis also sheds light on the security of the ring-LPN
assumption used in Boyle et al. (Crypto 2020). Using our new PCGs, we obtain
the first efficient N-party silent secure computation protocols for computing
general arithmetic circuits over for any .
The Hardness of LPN over Any Integer Ring and Field for PCG Applications
Learning parity with noise (LPN) has been widely studied and used in cryptography. It was recently brought to new prominence by Boyle et al. (CCS '18), who put LPN in a central role in designing secure multi-party computation, zero-knowledge proofs, private set intersection, and many other protocols. In this paper, we thoroughly study the security of LPN problems in this particular context. We find that some important aspects have long been ignored and that many conclusions from classical LPN cryptanalysis do not apply to this new setting, due to the low noise rates, extremely high dimensions, various types (in addition to ), and noise distributions.
1. For LPN over a field, we give a parameterized reduction from exact-noise LPN to regular-noise LPN. Compared to the recent result by Feneuil, Joux and Rivain (Crypto '22), we significantly reduce the security loss by paying only a small additive price in dimension and number of samples.
2. We analyze the security of LPN over a ring . Existing protocols based on LPN over integer rings use parameters as if they were over fields, but we found an attack that effectively reduces the weight of the noise by half compared to LPN over fields. Consequently, prior works that use LPN over overestimate security by up to 40 bits.
3. We provide a complete picture of the hardness of LPN over integer rings by showing: 1) the equivalence between its search and decisional versions; 2) an efficient reduction from LPN over to LPN over ; and 3) generalization of our results to any integer ring.
Finally, we provide an all-in-one estimator tool for the bit security of LPN parameters in the context of PCG, incorporating the recent advanced attacks.
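The weight-halving observation in item 2 can be checked numerically: reducing an LPN instance over the ring of integers modulo 2^λ modulo 2 kills every even noise value, so, assuming noise values uniform over the nonzero ring elements (a simplifying assumption for this sketch), only about half of the noise weight survives.

```python
import random

random.seed(5)
lam, t, trials = 32, 100, 200        # ring Z_{2^32}, noise weight t
modulus = 1 << lam
avg = 0
for _ in range(trials):
    # t nonzero noise values, uniform over the ring
    noise = [random.randrange(1, modulus) for _ in range(t)]
    # reducing the instance mod 2 keeps only the odd noise values
    avg += sum(e & 1 for e in noise)
avg /= trials
# on average about half the original noise weight survives mod 2
assert 40 < avg < 60
```

This is why choosing ring parameters as if the instance lived over a field overstates the effective noise weight, and hence the security level.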
Security analysis of the Classic McEliece, HQC and BIKE schemes in low memory
With the advancement of NIST PQC standardization, three of the four candidates in Round 4 are code-based schemes, namely Classic McEliece, HQC and BIKE. Currently, one of the most important tasks is to further analyze their security levels for the suggested parameter sets. At PKC 2022, Esser and Bellini restated the major information set decoding (ISD) algorithms using nearest neighbor search and then applied these ISD algorithms to estimate the bit security of Classic McEliece, HQC and BIKE under the suggested parameter sets. However, all major ISD algorithms consume a large amount of memory, which in turn affects their time complexities. In this paper, we re-estimate the bit-security levels of the parameter sets suggested by these three schemes in low memory, by applying -list sum algorithms to the ISD algorithms. Compared with Esser-Bellini's results, our results achieve the best gains for Classic McEliece, HQC, and BIKE, with reductions in bit-security levels of , , and bits, respectively.
- …