
    On the Complexity of Decomposable Randomized Encodings, Or: How Friendly Can a Garbling-Friendly PRF Be?


    Statistically Sender-Private OT from LPN and Derandomization

    We construct a two-message oblivious transfer protocol with statistical sender privacy (SSP OT) based on the Learning Parity with Noise (LPN) assumption and a standard Nisan-Wigderson style derandomization assumption. Beyond being of interest in their own right, SSP OT protocols have proven to be a powerful tool for minimizing round complexity in a wide array of cryptographic applications, from proof systems, through secure computation protocols, to hard problems in statistical zero knowledge (SZK). The protocol is plausibly post-quantum secure. The only other constructions with plausible post-quantum security are based on the Learning with Errors (LWE) assumption. Lacking the geometric structure of LWE, our construction and analysis rely on a different set of techniques. Technically, we first construct an SSP OT protocol in the common random string model from LPN alone, and then derandomize the common random string. Most of the technical difficulty lies in the first step. Here we prove a robustness property of the inner-product randomness extractor against a certain type of linear splitting attack. A caveat of our construction is that it relies on the so-called low-noise regime of LPN. This aligns with our current complexity-theoretic understanding of LPN, which is known to imply hardness in SZK only in the low-noise regime.
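    To make these objects concrete, the following Python sketch (illustrative only, not code from the paper) generates LPN samples in the low-noise regime and evaluates the inner-product extractor over GF(2); all names and parameters are placeholders.

        # Minimal sketch, assuming the standard LPN formulation: samples are
        # (a, <a,s> + e mod 2) with Bernoulli(tau) noise e.
        import random

        def lpn_samples(secret, m, tau):
            """Return m LPN samples (a, <a,secret> + e mod 2)."""
            n = len(secret)
            samples = []
            for _ in range(m):
                a = [random.randrange(2) for _ in range(n)]
                e = 1 if random.random() < tau else 0
                b = (sum(ai * si for ai, si in zip(a, secret)) + e) % 2
                samples.append((a, b))
            return samples

        def inner_product_extract(x, seed):
            """Inner-product extractor: Ext(x, seed) = <x, seed> mod 2."""
            return sum(xi * si for xi, si in zip(x, seed)) % 2

        n = 128
        s = [random.randrange(2) for _ in range(n)]
        tau = n ** -0.5  # low-noise regime: noise rate ~ 1/sqrt(n), not constant
        samples = lpn_samples(s, 3, tau)
        seed = [random.randrange(2) for _ in range(n)]
        bit = inner_product_extract(s, seed)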

    New Records in Collision Attacks on RIPEMD-160 and SHA-256

    RIPEMD-160 and SHA-256 are two hash functions used to generate Bitcoin addresses. In particular, RIPEMD-160 is an ISO/IEC standard, and SHA-256 is widely used worldwide. Due to their complex designs, progress in finding (semi-free-start) collisions for the two hash functions has been slow. Recently, at EUROCRYPT 2023, Liu et al. presented the first collision attack on 36 steps of RIPEMD-160 and the first MILP-based method to find collision-generating signed differential characteristics. We continue this line of research and reimplement the MILP-based method with a SAT/SMT-based method. Furthermore, we observe that the collision attack on RIPEMD-160 can be improved to 40 steps with different message differences. We have practically found a colliding message pair for 40-step RIPEMD-160 in 16 hours with 115 threads. Moreover, we also report the first semi-free-start (SFS) colliding message pair for 39-step SHA-256, which can be found in about 3 hours with 120 threads. These results update the best (SFS) collision attacks on RIPEMD-160 and SHA-256. Notably, this is the first progress on (SFS) collision attacks on SHA-256 since EUROCRYPT 2013, where the first practical SFS collision attack on 38-step SHA-256 was presented.

    The Extended Autocorrelation and Boomerang Tables and Links Between Nonlinearity Properties of Vectorial Boolean Functions

    Among the links between nonlinearity properties and the related tables that have appeared in the literature, such as the LAT, DDT, BCT and ACT, the boomerang connectivity table (BCT) seems to be an outlier, as it cannot be derived from the others using the Walsh-Hadamard transform. In this paper, a brief unified summary of the existing links for general vectorial Boolean functions is given first, and then a link between the autocorrelation and boomerang connectivity tables is established.
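    As a concrete companion to these tables, the following Python sketch computes the DDT and BCT of a small S-box; the 4-bit PRESENT S-box is used purely as an example, and the BCT formula follows the standard definition via the inverse S-box.

        # Difference distribution table: DDT[a][b] = #{x : S(x^a) ^ S(x) = b}.
        # Boomerang connectivity table:
        #   BCT[a][b] = #{x : S^{-1}(S(x)^b) ^ S^{-1}(S(x^a)^b) = a}.
        SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT S-box
        INV = [SBOX.index(y) for y in range(16)]

        def ddt(sbox):
            n = len(sbox)
            table = [[0] * n for _ in range(n)]
            for a in range(n):
                for x in range(n):
                    table[a][sbox[x ^ a] ^ sbox[x]] += 1
            return table

        def bct(sbox, inv):
            n = len(sbox)
            table = [[0] * n for _ in range(n)]
            for a in range(n):
                for b in range(n):
                    for x in range(n):
                        if inv[sbox[x] ^ b] ^ inv[sbox[x ^ a] ^ b] == a:
                            table[a][b] += 1
            return table

        print(ddt(SBOX)[1][3], bct(SBOX, INV)[1][3])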

    On the Necessity of Collapsing for Post-Quantum and Quantum Commitments

    Collapse binding and collapsing were proposed by Unruh (Eurocrypt '16) as post-quantum strengthenings of computational binding and collision resistance, respectively. These notions have been very successful in facilitating the "lifting" of classical security proofs to the quantum setting. A basic and natural question remains unanswered, however: are they the weakest notions that suffice for such lifting? In this work we answer this question in the affirmative by giving a classical commit-and-open protocol which is post-quantum secure if and only if the commitment scheme (resp. hash function) used is collapse binding (resp. collapsing). We also generalise the definition of collapse binding to quantum commitment schemes, and prove that the equivalence carries over when the sender in this commit-and-open protocol communicates quantum information. As a consequence, we establish that a variety of "weak" binding notions (sum binding, CDMS binding and unequivocality) are in fact equivalent to collapse binding, both for post-quantum and quantum commitments. Finally, we prove a "win-win" result, showing that a post-quantum computationally binding commitment scheme that is not collapse binding can be used to build an equivocal commitment scheme (which can, in turn, be used to build one-shot signatures and other useful quantum primitives). This strengthens a result due to Zhandry (Eurocrypt '19) showing that the same object yields quantum lightning.
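    For reference, the collapsing experiment can be stated informally as follows (my paraphrase of Unruh's definition, rendered in LaTeX, not text from the paper):

        % Collapsing, informal paraphrase. The adversary outputs a classical
        % hash value h and a register M in superposition over preimages of h;
        % it must not notice whether M was measured.
        \textbf{Collapsing (informal).} $H$ is collapsing if, for every QPT
        adversary $\mathcal{A}$ outputting a classical value $h$ and a register
        $M$ supported on $\{m : H(m) = h\}$,
        \[
          \bigl|\Pr[\mathcal{A} \Rightarrow 1 \mid M \text{ measured in the
          computational basis}] - \Pr[\mathcal{A} \Rightarrow 1 \mid M
          \text{ returned unmeasured}]\bigr| \le \mathrm{negl}(\lambda).
        \]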

    Updatable Public Key Encryption in the Standard Model

    Forward security (FS) ensures that corrupting the current secret key of the system does not compromise the privacy or integrity of its prior usages. Achieving forward security is especially hard in the setting of public-key encryption (PKE), where time is divided into periods, and in each period the receiver derives the next-period secret key from their current secret key, while the public key stays constant. Indeed, all current constructions of FS-PKE are built from hierarchical identity-based encryption (HIBE) and are rather complicated. Motivated by applications to secure messaging, recent works of Jost et al. (Eurocrypt '19) and Alwen et al. (CRYPTO '20) consider a natural relaxation of FS-PKE, which they term updatable PKE (UPKE). In this setting, the transition to the next period can be initiated by any sender, who can compute a special update ciphertext. This ciphertext directly produces the next-period public key and can be processed by the receiver to compute the next-period secret key. If done honestly, future (regular) ciphertexts produced with the new public key can be decrypted with the new secret key, but past such ciphertexts cannot be decrypted with the new secret key. Moreover, this is true even if all other previous-period updates were initiated by untrusted senders. Both papers also constructed a very simple UPKE scheme based on the CDH assumption in the random oracle model. However, they left open the question of building such schemes in the standard model, or based on other (e.g., post-quantum) assumptions, without using the heavy HIBE techniques. In this work, we construct two efficient UPKE schemes in the standard model, based on the DDH and LWE assumptions, respectively. Somewhat interestingly, our constructions gain their efficiency (compared to prior FS-PKE schemes) by using tools from the area of circular-secure and leakage-resilient public-key encryption schemes (rather than HIBE).
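    To illustrate the mechanics, here is a toy Python sketch of a sender-initiated key update in the style of the simple CDH-based scheme the abstract refers to, as I understand it; the group parameters are tiny placeholders, the scheme is heavily simplified, and the encryption of the update value to the receiver is elided.

        import random

        p = 0xFFFFFFFFFFFFFFC5  # toy 64-bit prime modulus: illustrative, NOT secure
        g = 5                   # toy base element

        sk = random.randrange(2, p - 1)   # receiver's current secret key
        pk = pow(g, sk, p)                # current public key

        # Sender-initiated update: sample delta, derive the next-period public
        # key, and send delta to the receiver encrypted under the current pk
        # (the encryption step is omitted in this sketch).
        delta = random.randrange(2, p - 1)
        pk_next = (pk * pow(g, delta, p)) % p

        # Receiver, after recovering delta, derives the next-period secret key
        # and erases the old one; corrupting sk_next should not help decrypt
        # ciphertexts produced under the old pk.
        sk_next = (sk + delta) % (p - 1)
        assert pow(g, sk_next, p) == pk_next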

    Lattice Attacks on NTRU and LWE: A History of Refinements

    Since its invention in 1982, the LLL lattice reduction algorithm (Lenstra, Lenstra, Lovász, 1982) has found countless applications. In cryptanalysis, the two most prominent applications of LLL and its generalisations (e.g., Slide, BKZ and SD-BKZ) are factoring RSA keys with extra information on the secret key via Coppersmith's method and the cryptanalysis of lattice-based schemes. After almost 40 years of cryptanalytic applications, predicting and optimising lattice reduction algorithms remains an active area of research. While we do have theorems bounding the worst-case performance of these algorithms, those bounds are asymptotic and not necessarily tight when applied to practical or even cryptographic instances. Reasoning about the behaviour of those algorithms relies on heuristics and approximations, some of which are known to fail for relevant corner cases. Decades after Lenstra, Lenstra, and Lovász gave birth to this fascinating and lively research area, this state of affairs has recently become a more pressing issue. Motivated by post-quantum security, standardisation bodies, governments and industry have started to move towards deploying lattice-based cryptographic algorithms. This has spurred the refinement of those heuristics and approximations, leading to a better understanding of the behaviour of these algorithms over the last few years. Lattice reduction algorithms, such as LLL and BKZ, proceed by repeated local improvements to the lattice basis, and each such local improvement means solving the short(est) vector problem in a lattice of smaller dimension. Therefore, two questions arise: how costly it is to find those local improvements, and what the global behaviour is as those improvements are applied. While those two questions may not be perfectly independent, we will, in this survey, focus on the second one, namely the global behaviour of such algorithms, given oracle access for finding local improvements. Our focus on the global behaviour is motivated by our intent to draw more of the community's attention to this aspect. We will take a particular interest in the behaviour of such algorithms on a specific class of lattices underlying the most popular lattice problems used to build cryptographic primitives, namely the LWE problem and the NTRU problem. We will emphasise the approximations that have been made and their progressive refinements, and highlight open problems to be addressed.
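    Two of the heuristics at the centre of this refinement story can be stated compactly; the LaTeX below gives their standard formulations (transcribed from common usage, not quoted from the survey): the root-Hermite factor of a reduced basis and the Geometric Series Assumption (GSA) on the Gram-Schmidt norms.

        % Root-Hermite factor delta of a basis of a d-dimensional lattice L:
        \[
          \|\mathbf{b}_1\| \approx \delta^{d} \cdot \operatorname{vol}(L)^{1/d}.
        \]
        % Geometric Series Assumption: Gram-Schmidt norms decay geometrically,
        % with a ratio alpha determined by the reduction algorithm/block size:
        \[
          \|\mathbf{b}_i^*\| \approx \alpha^{\,i-1} \, \|\mathbf{b}_1\|,
          \qquad \alpha \in (0, 1).
        \]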

    Cascading Four Round LRW1 is Beyond Birthday Bound Secure

    In CRYPTO '02, Liskov et al. introduced a new symmetric-key primitive called the tweakable block cipher and proposed two constructions for designing a tweakable block cipher from block ciphers, called LRW1 and LRW2. While LRW2 has been extended in later works to provide beyond-birthday-bound security (e.g., cascaded LRW2 by Landecker et al. in CRYPTO '12), the extension of LRW1 received no attention until the work of Bao et al. in EUROCRYPT '20, where the authors showed that a one-round extension of LRW1, i.e., masking the output of LRW1 with the given tweak and then re-encrypting it with the same block cipher, gives security up to 2^{2n/3} queries. Recently, Khairallah showed a birthday-bound distinguishing attack on the construction, invalidating the security claim of Bao et al. This led to the open research question of how many rounds of cascaded LRW1 are required to achieve beyond-birthday-bound security. In this paper, we show that cascading LRW1 up to four rounds is sufficient to ensure beyond-birthday-bound security. In particular, we show that CLRW1^4 provides security up to 2^{3n/4} queries. The security analysis of our construction is based on recent developments in the mirror theory technique for tweakable random permutations under the framework of the Expectation Method.
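    The cascade structure is easy to render in code. The Python sketch below reflects my reading of the construction, with the tweak XORed between successive block-cipher calls; the 8-bit "block ciphers" are toy stand-ins keyed by seeds, not a secure instantiation.

        import random

        BLOCK = 256  # toy 8-bit block space; a real scheme would use, e.g., AES

        def toy_prp(key):
            """Derive a toy keyed permutation of {0,...,255} (stand-in for E_K)."""
            perm = list(range(BLOCK))
            random.Random(key).shuffle(perm)
            return perm

        def clrw1_4(keys, tweak, msg):
            """Four-round cascaded LRW1: E4(T ^ E3(T ^ E2(T ^ E1(M))))."""
            x = msg
            for i, key in enumerate(keys):
                if i > 0:
                    x ^= tweak       # tweak masking between rounds
                x = toy_prp(key)[x]  # block-cipher call with round key K_i
            return x

        ciphertext = clrw1_4(keys=[1, 2, 3, 4], tweak=0xA5, msg=0x3C)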

    Improvements on making BKW practical for solving LWE

    The learning with errors (LWE) problem is one of the main mathematical foundations of post-quantum cryptography. One of the main families of algorithms for solving LWE is the Blum–Kalai–Wasserman (BKW) algorithm. This paper presents new improvements to BKW-style algorithms for solving LWE instances. We target minimum concrete complexity, and we introduce a new reduction step in which we partially reduce the last position in an iteration and finish the reduction in the next iteration, allowing non-integer step sizes. We also introduce a new procedure for the secret recovery, mapping the problem to binary problems and applying the fast Walsh-Hadamard transform. The complexity of the resulting algorithm compares favorably with all other previous approaches, including lattice sieving. We additionally describe the steps involved in implementing the approach for large LWE problem instances. We provide two implementations of the algorithm: a RAM-based approach optimized for speed, and a file-based approach that overcomes RAM limitations by using file-based storage.
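    The secret-recovery step can be illustrated in isolation: the sketch below is a textbook fast Walsh-Hadamard transform in Python together with the argmax rule that selects the most biased candidate sub-secret; the score vector is invented for the example.

        def fwht(v):
            """In-place fast Walsh-Hadamard transform of a list of length 2^k."""
            h = 1
            while h < len(v):
                for i in range(0, len(v), 2 * h):
                    for j in range(i, i + h):
                        x, y = v[j], v[j + h]
                        v[j], v[j + h] = x + y, x - y
                h *= 2
            return v

        # Scores c[a] aggregate (-1)^b over reduced samples with pattern a; the
        # transform evaluates all 2^k binary sub-secret candidates at once.
        scores = [4, -2, 0, 2, 6, -4, 2, 0]
        w = fwht(scores[:])
        guess = max(range(len(w)), key=lambda a: abs(w[a]))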

    Fully Secure Attribute-Based Encryption for t-CNF from LWE

    Attribute-based Encryption (ABE), first introduced by [SW05,GPSW06], is a public key encryption system that can support multiple users with varying decryption permissions. One of the main properties of such schemes is the supported function class of policies. While there are fully secure constructions from bilinear maps for a fairly large class of policies, the situation with lattice-based constructions is less satisfactory and many efforts were made to close this gap. Prior to this work the only known fully secure lattice construction was for the class of point functions (also known as IBE). In this work we construct for the first time a lattice-based (ciphertext-policy) ABE scheme for the function class t-CNF, which consists of CNF formulas where each clause depends on at most t bits of the input, for any constant t. This class includes NP-verification policies, bit-fixing policies and t-threshold policies. Towards this goal we also construct a fully secure single-key constrained PRF from OWF for the same function class, which might be of independent interest.
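    For concreteness, here is a tiny Python illustration of a t-CNF policy for t = 2 (the clause encoding is invented for this example): each clause mentions at most two attribute bits, and the policy accepts iff every clause contains a satisfied literal.

        def eval_t_cnf(clauses, attrs):
            """Evaluate a CNF over attribute bits; clauses are (index, value) literals."""
            return all(any(attrs[i] == v for i, v in clause) for clause in clauses)

        # Policy: (x0 = 1 OR x2 = 0) AND (x1 = 1); every clause touches <= 2 bits.
        policy = [[(0, 1), (2, 0)], [(1, 1)]]
        assert eval_t_cnf(policy, [1, 1, 1]) is True
        assert eval_t_cnf(policy, [0, 0, 1]) is False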