
    Salvaging Merkle-Damgard for Practical Applications

    Many cryptographic applications of hash functions are analyzed in the random oracle model. Unfortunately, most concrete hash functions, including the SHA family, use the iterative (strengthened) Merkle-Damgard transform applied to a corresponding compression function. Moreover, it is well known that the resulting "structured" hash function cannot be generically used as a random oracle, even if the compression function is assumed to be ideal. This leaves a large disconnect between theory and practice: although no attack is known for many concrete applications utilizing existing (Merkle-Damgard-based) hash functions, there is no security guarantee either, even by idealizing the compression function. Motivated by this question, we initiate a rigorous and modular study of finding (still idealized) notions of hash functions which would be (a) elegant and interesting in their own right; (b) sufficient to argue the security of important applications; and (c) provably instantiable by the (strengthened) Merkle-Damgard transform, applied to a strong enough compression function. We develop two such notions which we believe are natural and interesting in their own right: preimage awareness and being indifferentiable from a public-use random oracle.
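
    Since this abstract (and several below) centers on the strengthened Merkle-Damgard transform, a minimal Python sketch of that mode may help fix the notation. The block and state sizes and the SHA-256-based stand-in for the compression function are illustrative assumptions, not the paper's idealized primitive.

```python
import hashlib

BLOCK_BYTES = 64   # compression-function message-block size (assumption)
STATE_BYTES = 32   # chaining-value size (assumption)


def compress(state: bytes, block: bytes) -> bytes:
    """Stand-in compression function built from SHA-256; in the paper's
    setting this would be an idealized primitive, not SHA-256."""
    return hashlib.sha256(state + block).digest()


def md_strengthened(message: bytes, iv: bytes = b"\x00" * STATE_BYTES) -> bytes:
    """Strengthened Merkle-Damgard: append a 1 bit, zero-pad, and encode
    the original message length into the final block before iterating."""
    length = (8 * len(message)).to_bytes(8, "big")  # message length in bits
    padded = message + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % BLOCK_BYTES) + length
    state = iv
    for i in range(0, len(padded), BLOCK_BYTES):
        state = compress(state, padded[i:i + BLOCK_BYTES])
    return state
```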

    Optimization of Tree Modes for Parallel Hash Functions: A Case Study

    This paper focuses on parallel hash functions based on tree modes of operation for an inner Variable-Input-Length function. This inner function can be either a single-block-length (SBL) and prefix-free MD hash function, or a sponge-based hash function. We discuss the various forms of optimality that can be obtained when designing parallel hash functions based on trees where all leaves have the same depth. The first result is a scheme which optimizes the tree topology in order to decrease the running time. Then, without affecting the optimal running time, we show that we can slightly change the corresponding tree topology so as to minimize the number of required processors as well. Consequently, the resulting scheme decreases first the running time and then the number of required processors.
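
    As a rough illustration of tree hashing with all leaves at the same depth, the following sketch builds a fixed-arity tree over a stand-in inner function. The leaf size, arity, and domain-separation bytes are placeholder choices, not the optimized topology the paper derives.

```python
import hashlib


def node_hash(data: bytes) -> bytes:
    """Stand-in for the inner variable-input-length function."""
    return hashlib.sha256(data).digest()


def tree_hash(message: bytes, leaf_bytes: int = 64, arity: int = 2) -> bytes:
    """Hash a message with a simple tree: hash fixed-size leaves, then
    repeatedly hash groups of `arity` children until a single root
    remains. Hashes within a level are independent, so each level can
    be computed in parallel across processors."""
    leaves = [message[i:i + leaf_bytes]
              for i in range(0, len(message), leaf_bytes)] or [b""]
    level = [node_hash(b"\x00" + leaf) for leaf in leaves]  # domain-separated leaves
    while len(level) > 1:
        level = [node_hash(b"\x01" + b"".join(level[i:i + arity]))
                 for i in range(0, len(level), arity)]
    return level[0]
```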

    An Analysis of the Blockcipher-Based Hash Functions from PGV

    Preneel, Govaerts, and Vandewalle (1993) considered the 64 most basic ways to construct a hash function $H: \{0,1\}^* \rightarrow \{0,1\}^n$ from a blockcipher $E: \{0,1\}^n \times \{0,1\}^n \rightarrow \{0,1\}^n$. They regarded 12 of these 64 schemes as secure, though no proofs or formal claims were given. Here we provide a proof-based treatment of the PGV schemes. We show that, in the ideal-cipher model, the 12 schemes considered secure by PGV really are secure: we give tight upper and lower bounds on their collision resistance. Furthermore, by stepping outside of the Merkle-Damgard approach to analysis, we show that an additional 8 of the PGV schemes are just as collision resistant (up to a constant). Nonetheless, we are able to differentiate among the 20 collision-resistant schemes by considering their preimage resistance: only the 12 initial schemes enjoy optimal preimage resistance. Our work demonstrates that proving ideal-cipher-model bounds is a feasible and useful step for understanding the security of blockcipher-based hash-function constructions.
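
    For concreteness, the sketch below implements Davies-Meyer, one of the 12 PGV schemes: $h_i = E_{m_i}(h_{i-1}) \oplus h_{i-1}$. The toy Feistel cipher standing in for the ideal cipher E, the block sizes, and the naive padding are assumptions for illustration only.

```python
import hashlib

N = 16  # half-block size in bytes; toy parameters, not a real cipher


def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """Stand-in for the ideal cipher E: a 4-round Feistel network with a
    SHA-256-based round function. A toy permutation, illustration only."""
    left, right = block[:N], block[N:]
    for rnd in range(4):
        f = hashlib.sha256(bytes([rnd]) + key + right).digest()[:N]
        left, right = right, bytes(a ^ b for a, b in zip(left, f))
    return left + right


def davies_meyer_hash(message: bytes, iv: bytes = b"\x00" * (2 * N)) -> bytes:
    """Davies-Meyer iteration: the message block keys the cipher, and the
    chaining value is fed forward: h = E_m(h) XOR h."""
    # Naive fixed-size blocking without MD strengthening, for illustration.
    blocks = [message[i:i + 2 * N].ljust(2 * N, b"\x00")
              for i in range(0, max(len(message), 1), 2 * N)]
    h = iv
    for m in blocks:
        h = bytes(a ^ b for a, b in zip(toy_block_cipher(m, h), h))
    return h
```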

    Some Cryptanalytic Results on Zipper Hash and Concatenated Hash

    At SAC 2006, Liskov proposed the zipper hash, a technique for constructing secure (indifferentiable from random oracles) hash functions based on weak (invertible) compression functions. Zipper hash is a two-pass scheme, which makes it unfit for practical consideration. But from the theoretical point of view it seemed to be secure, as it had long resisted standard attacks. Recently, Andreeva et al. gave a forced-suffix herding attack on the zipper hash, and Chen and Jin showed a second preimage attack provided $f_1$ is strongly invertible. In this paper, we analyse the construction under the random oracle model as well as when the underlying compression functions have some weakness. We show (second) preimage and herding attacks on an $n$-bit zipper hash and its relaxed variant with $f_1 = f_2$, all of which require fewer than $2^n$ online computations. Hoch and Shamir have shown that the concatenated hash offers only $\frac{n}{2}$-bit security when both the underlying compression functions are strongly invertible. We show that the bound is tight even when only one of the underlying compression functions is strongly invertible.
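
    The two-pass structure under attack is easy to state in code. Below is a minimal sketch of the zipper-hash shape with stand-in compression functions f1 and f2; the padding and sizes are simplifications, not Liskov's exact specification.

```python
import hashlib


def f1(state: bytes, block: bytes) -> bytes:
    """Stand-in for the first compression function."""
    return hashlib.sha256(b"\x01" + state + block).digest()


def f2(state: bytes, block: bytes) -> bytes:
    """Stand-in for the second compression function."""
    return hashlib.sha256(b"\x02" + state + block).digest()


def zipper_hash(message: bytes, iv: bytes = b"\x00" * 32,
                block_bytes: int = 64) -> bytes:
    """Zipper-hash shape: a first Merkle-Damgard pass over the message
    blocks with f1, then a second pass over the same blocks in reverse
    order with f2, chaining the state across both passes."""
    blocks = [message[i:i + block_bytes].ljust(block_bytes, b"\x00")
              for i in range(0, max(len(message), 1), block_bytes)]
    state = iv
    for m in blocks:            # first pass, forward
        state = f1(state, m)
    for m in reversed(blocks):  # second pass, backward
        state = f2(state, m)
    return state
```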

    Integrated-Key Cryptographic Hash Functions

    Cryptographic hash functions have always played a major role in most cryptographic applications. Traditionally, hash functions were designed in the keyless setting, where a hash function accepts a variable-length message and returns a fixed-length fingerprint. Unfortunately, over the years, significant weaknesses were reported on instances of some popular "keyless" hash functions. This has motivated the research community to start considering the dedicated-key setting, where a hash function is publicly keyed. In this approach, families of hash functions are constructed such that the individual members are indexed by different publicly-known keys. This has, evidently, also allowed for more rigorous security arguments. However, it turns out that converting an existing keyless hash function into a dedicated-key one is usually non-trivial, since the underlying keyless compression function of the keyless hash function does not normally accommodate the extra key input. In this thesis we define and formalise a flexible approach to solve this problem. Hash functions adopting our approach are said to be constructed in the integrated-key setting, where keyless hash functions are seamlessly and transparently transformed into keyed variants by introducing an extra component accompanying the (still keyless) compression function to handle the key input separately, outside the compression function. We also propose several integrated-key constructions and prove that they are collision resistant, preimage resistant, 2nd preimage resistant, indifferentiable from a random oracle (RO), indistinguishable from pseudorandom functions (PRFs), and unforgeable when instantiated as message authentication codes (MACs) in the private-key setting. We further prove that hash functions constructed in the integrated-key setting are indistinguishable from their variants in the conventional dedicated-key setting, which implies that proofs from the dedicated-key setting can be naturally reduced to the integrated-key setting.
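
    The abstract describes the integrated-key idea only at a high level, so the following is a hypothetical sketch of one way the shape could look: the key is absorbed by an extra component while the compression function itself stays keyless. The HMAC-based key_component and its composition with the compression function are illustrative assumptions, not the thesis's actual constructions.

```python
import hashlib
import hmac


def keyless_compress(state: bytes, block: bytes) -> bytes:
    """The original, still keyless compression function (stand-in)."""
    return hashlib.sha256(state + block).digest()


def key_component(key: bytes, block: bytes) -> bytes:
    """Hypothetical extra component handling the key input outside the
    compression function; here a simple HMAC-based preprocessing step."""
    return hmac.new(key, block, hashlib.sha256).digest()


def integrated_key_hash(key: bytes, message: bytes,
                        iv: bytes = b"\x00" * 32, block_bytes: int = 32) -> bytes:
    """One possible integrated-key shape: the key is mixed into each
    message block by the extra component, and the unmodified keyless
    compression function is iterated as usual."""
    state = iv
    for i in range(0, max(len(message), 1), block_bytes):
        block = message[i:i + block_bytes].ljust(block_bytes, b"\x00")
        state = keyless_compress(state, key_component(key, block))
    return state
```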

    Provable Security of BLAKE with Non-Ideal Compression Function

    We analyze the security of the SHA-3 finalist BLAKE. The BLAKE hash function follows the HAIFA design methodology, and as such it achieves optimal preimage, second preimage and collision resistance, and is indifferentiable from a random oracle up to approximately $2^{n/2}$ queries, assuming the underlying compression function is ideal. In our work we show, however, that the compression function employed by BLAKE exhibits a non-random behavior and is in fact differentiable in only $2^{n/4}$ queries. Our attack on the indifferentiability of the BLAKE compression function seriously undermines the security strength of BLAKE not only with respect to its overall indifferentiability, but also its collision and (second) preimage security in the ideal model. Our next contribution is the restoration of the security results for BLAKE in the ideal model by refining the level of modularity and assuming that BLAKE's underlying block cipher is an ideal cipher. We prove that BLAKE is optimally collision, second preimage, and preimage secure (up to a constant). We go on to show that BLAKE is still indifferentiable from a random oracle up to the old bound of $2^{n/2}$ queries, albeit under a weaker assumption: the ideality of its block cipher.
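
    Since BLAKE follows the HAIFA design methodology, a minimal sketch of HAIFA-style iteration may be useful: unlike plain Merkle-Damgard, the compression function also takes a salt and a counter of bits hashed so far. The SHA-256-based stand-in compression function and the simplified padding are assumptions, not BLAKE's actual compression function.

```python
import hashlib


def haifa_compress(state: bytes, block: bytes, salt: bytes, counter: int) -> bytes:
    """Stand-in HAIFA compression function; the salt and counter are the
    extra inputs that distinguish HAIFA from plain Merkle-Damgard."""
    return hashlib.sha256(state + block + salt +
                          counter.to_bytes(8, "big")).digest()


def haifa_hash(message: bytes, salt: bytes = b"\x00" * 16,
               iv: bytes = b"\x00" * 32, block_bytes: int = 64) -> bytes:
    """HAIFA-style iteration: the per-block counter is meant to defeat
    generic second-preimage attacks that exploit identical compression
    calls in plain MD."""
    state = iv
    hashed_bits = 0
    for i in range(0, max(len(message), 1), block_bytes):
        chunk = message[i:i + block_bytes]
        block = chunk.ljust(block_bytes, b"\x00")
        hashed_bits += 8 * len(chunk)
        state = haifa_compress(state, block, salt, hashed_bits)
    return state
```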

    Block-Cipher-Based Tree Hashing

    First of all, we take a thorough look at an error in a paper by Daemen et al. (ToSC 2018) which looks at minimal requirements for tree-based hashing based on multiple primitives, including block ciphers. This reveals that the error is more fundamental than previously shown by Gunsing et al. (ToSC 2020), whose analysis was mainly interested in its effect on the security bounds. It turns out that the error is caused by an essential oversight in the interaction between the different oracles used in the indifferentiability proofs. In essence, it reduces the claim from the normal indifferentiability setting to the weaker sequential indifferentiability one. As a matter of fact, this error appeared in multiple earlier indifferentiability papers, including the optimal indifferentiability of the sum of permutations (EUROCRYPT 2018) and the recent ABR+ construction (EUROCRYPT 2021). We discuss in detail how this oversight is caused and how it can be avoided. We next demonstrate how the negative effects on the security bound of the construction by Daemen et al. can be resolved. Instead of only allowing a truncated output, we generalize the construction to allow for any finalization function and investigate its security for five different types of finalization. Our findings, among others, show that the security of the SHA-2 mode does not degrade if the feed-forward is dropped, and that the modern BLAKE3 construction is secure in principle, but that its use of the extendable output requires the counter used for random access to be public. Finally, we introduce the tree sponge, a generalization of the sequential sponge construction with parallel absorbing and squeezing.
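
    One of the affected results mentioned above is the sum of permutations, whose shape is simple to sketch: F(x) = P1(x) XOR P2(x). The toy Feistel permutations below are stand-ins for the ideal permutations of the EUROCRYPT 2018 setting, used for illustration only.

```python
import hashlib

N = 16  # half-block size in bytes; toy parameter


def feistel_perm(label: bytes, block: bytes, rounds: int = 4) -> bytes:
    """Toy Feistel permutation on 2*N bytes, domain-separated by `label`;
    a stand-in for an ideal permutation, illustration only."""
    left, right = block[:N], block[N:]
    for rnd in range(rounds):
        f = hashlib.sha256(label + bytes([rnd]) + right).digest()[:N]
        left, right = right, bytes(a ^ b for a, b in zip(left, f))
    return left + right


def sum_of_permutations(x: bytes) -> bytes:
    """Sum-of-permutations shape: F(x) = P1(x) XOR P2(x), turning two
    permutations into a function expected to behave like a random one."""
    assert len(x) == 2 * N
    return bytes(a ^ b for a, b in zip(feistel_perm(b"P1", x),
                                       feistel_perm(b"P2", x)))
```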

    How to Build a Hash Function from any Collision-Resistant Function

    Recent collision-finding attacks against hash functions such as MD5 and SHA-1 motivate the use of provably collision-resistant (CR) functions in their place. Finding a collision in a provably CR function implies the ability to solve some hard problem (e.g., factoring). Unfortunately, existing provably CR functions make poor replacements for hash functions, as they fail to deliver behaviors demanded by practical use. In particular, they are easily distinguished from a random oracle. We initiate an investigation into building hash functions from provably CR functions. As a method for achieving this, we present the Mix-Compress-Mix (MCM) construction; it envelopes any provably CR function H (with suitable regularity properties) between two injective "mixing" stages. The MCM construction simultaneously enjoys (1) provable collision-resistance in the standard model, and (2) indifferentiability from a monolithic random oracle when the mixing stages themselves are indifferentiable from a random oracle that observes injectivity. We instantiate our new design approach by specifying a blockcipher-based construction that appropriately realizes the mixing stages.
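
    The MCM shape itself is a three-stage composition, sketched below. The SHA-256-based stand-ins for the mixing stages and for the provably CR function H are hypothetical placeholders: the paper realizes the mixing stages from a blockcipher, and H would be, e.g., a factoring-based hash.

```python
import hashlib


def mix1(message: bytes) -> bytes:
    """Hypothetical first injective mixing stage; appending the input
    verbatim keeps the map trivially injective for this sketch."""
    return hashlib.sha256(b"mix1" + message).digest() + message


def compress_cr(data: bytes) -> bytes:
    """Stand-in for the provably collision-resistant function H."""
    return hashlib.sha256(b"H" + data).digest()


def mix2(digest: bytes) -> bytes:
    """Hypothetical second mixing stage on the fixed-length output; a
    real instantiation would be an injective (permutation-like) map."""
    return hashlib.sha256(b"mix2" + digest).digest()


def mcm(message: bytes) -> bytes:
    """Mix-Compress-Mix shape: M2(H(M1(m)))."""
    return mix2(compress_cr(mix1(message)))
```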

    Collision Resistant Hashing from Sub-exponential Learning Parity with Noise

    The Learning Parity with Noise (LPN) problem has recently found many cryptographic applications, such as authentication protocols, pseudorandom generators/functions, and even asymmetric tasks including public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. It however remains a long-standing open problem whether LPN implies collision resistant hash (CRH) functions. Based on the recent work of Applebaum et al. (ITCS 2017), we introduce a general framework for constructing CRH from LPN for various parameter choices. We show that, just to mention a few notable ones, under any of the following hardness assumptions (for the two most common variants of LPN):
    1) constant-noise LPN is $2^{n^{0.5+\epsilon}}$-hard for any constant $\epsilon > 0$;
    2) constant-noise LPN is $2^{\Omega(n/\log n)}$-hard given $q = \mathrm{poly}(n)$ samples;
    3) low-noise LPN (of noise rate $1/\sqrt{n}$) is $2^{\Omega(\sqrt{n}/\log n)}$-hard given $q = \mathrm{poly}(n)$ samples;
    there exist CRH functions with constant (or even poly-logarithmic) shrinkage, which can be implemented using polynomial-size depth-3 circuits with NOT, (unbounded fan-in) AND and XOR gates. Our technical route LPN $\rightarrow$ bSVP $\rightarrow$ CRH is reminiscent of the known reductions for the large-modulus analogue, i.e., LWE $\rightarrow$ SIS $\rightarrow$ CRH, where the binary Shortest Vector Problem (bSVP) was recently introduced by Applebaum et al. (ITCS 2017) and enables CRH in a manner similar to Ajtai's CRH functions based on the Short Integer Solution (SIS) problem. Furthermore, under additional (arguably minimal) idealized assumptions, such as small-domain random functions or random permutations (which trivially imply collision resistance), we still salvage a simple and elegant collision-resistance-preserving domain extender that is (asymptotically) more parallel and efficient than previously known. In particular, assuming $2^{n^{0.5+\epsilon}}$-hard constant-noise LPN or $2^{n^{0.25+\epsilon}}$-hard low-noise LPN, we obtain a polynomially shrinking collision resistant hash function that evaluates in parallel only a single layer of small-domain random functions (or random permutations) and produces their XOR sum as output.
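
    The domain extender described in the last sentence has a particularly simple shape: one parallel layer of small-domain functions whose outputs are XORed. The sketch below shows only that shape; the chunk and output sizes are placeholders, and the security argument rests on the paper's LPN/bSVP analysis, not on anything visible in the code.

```python
import hashlib


def small_domain_f(index: int, chunk: bytes, out_bytes: int = 32) -> bytes:
    """Stand-in for the i-th small-domain random function; the paper
    assumes idealized random functions or permutations here."""
    return hashlib.sha256(index.to_bytes(4, "big") + chunk).digest()[:out_bytes]


def xor_layer_hash(message: bytes, chunk_bytes: int = 8) -> bytes:
    """Single-layer XOR-sum shape: cut the input into small chunks, feed
    each to its own small-domain function (one fully parallel layer),
    and output the XOR of the results. Inputs are assumed to be of a
    fixed length, as in the fixed-input-length CRH setting; the padding
    below only makes the sketch total on arbitrary byte strings."""
    chunks = [message[i:i + chunk_bytes].ljust(chunk_bytes, b"\x00")
              for i in range(0, max(len(message), 1), chunk_bytes)]
    out = bytes(32)
    for i, c in enumerate(chunks):
        out = bytes(a ^ b for a, b in zip(out, small_domain_f(i, c)))
    return out
```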