    On hashing with tweakable ciphers

    Cryptographic hash functions are often built on block ciphers in order to reduce the security analysis of the hash to that of the cipher, and to minimize the hardware size. Well-known hash constructs of this kind are used in international standards like MD5 and SHA-1. Recently, researchers proposed new modes of operation for hash functions to protect against generic attacks, and it remains open how to base such functions on block ciphers. An attractive and intuitive choice is to combine previous constructions with tweakable block ciphers. We investigate such constructions, and show the surprising result that combining a provably secure mode of operation with a provably secure tweakable cipher does not guarantee the security of the constructed hash function. In fact, simple attacks can be possible when the interaction between secure components leaves some additional "freedom" to an adversary. Our techniques are derived from the principle of slide attacks, which were introduced for attacking block ciphers.
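
    The abstract does not fix a concrete construction, but the intuition of plugging a tweakable cipher into a Davies-Meyer-style iteration can be sketched in Python. This is a minimal illustration assuming the pycryptodome library; the XEX-style tweak handling and the use of the block index as tweak are our assumptions, not the paper's design.

        from Crypto.Cipher import AES

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def tweakable_encrypt(key, tweak, block):
            # Simplified XEX-flavoured tweakable cipher built from AES:
            # delta = E_K(tweak); E~_K(T, X) = E_K(X xor delta) xor delta.
            ecb = AES.new(key, AES.MODE_ECB)
            delta = ecb.encrypt(tweak)
            return xor(ecb.encrypt(xor(block, delta)), delta)

        def hash_tweakable_dm(message_blocks, iv=b"\x00" * 16):
            # Davies-Meyer-style iteration keyed by each 16-byte message
            # block, with the block index as tweak for position separation.
            h = iv
            for i, m in enumerate(message_blocks):
                tweak = i.to_bytes(16, "big")
                h = xor(tweakable_encrypt(m, tweak, h), h)
            return h

    The paper's slide-attack angle is precisely that such extra "freedom" in how secure components interact (here, the tweak schedule) can let an adversary align two hashing computations against each other even though cipher and mode are individually sound.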

    Efficient Hashing Using the AES Instruction Set

    In this work, we provide a software benchmark for a large range of 256-bit blockcipher-based hash functions. We instantiate the underlying blockcipher with AES, which allows us to exploit the recent AES instruction set (AES-NI). Since AES itself only outputs 128 bits, we consider double-block-length constructions, as well as (single-block-length) constructions based on RIJNDAEL-256. Although we primarily target architectures supporting AES-NI, our framework has much broader applications by estimating the performance of these hash functions on any (micro-)architecture given AES-benchmark results. As far as we are aware, this is the first comprehensive performance comparison of multi-block-length hash functions in software.
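
    As a concrete example of a double-block-length construction that maps directly onto AES-256 (and hence AES-NI), here is a hedged sketch of one compression step of Hirose's scheme in Python, assuming pycryptodome (software AES stands in for the AES-NI instructions):

        from Crypto.Cipher import AES

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        C = b"\x01" + b"\x00" * 15  # any fixed nonzero 128-bit constant

        def hirose_compress(g, h, m):
            # One step of Hirose's double-block-length scheme with AES-256
            # as the (2n, n)-bit blockcipher (n = 128). Both calls share
            # the 256-bit key h || m; the second encrypts g xor C.
            e = AES.new(h + m, AES.MODE_ECB)
            g_new = xor(e.encrypt(g), g)
            h_new = xor(e.encrypt(xor(g, C)), xor(g, C))
            return g_new, h_new

    Because both cipher calls share one key, a single AES key schedule serves the whole compression step, which is exactly the kind of cost detail such a benchmark surfaces.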

    A Pseudorandom-Function Mode Based on Lesamnta-LW and the MDP Domain Extension and Its Applications

    This paper discusses a mode for pseudorandom functions (PRFs) based on the hashing mode of Lesamnta-LW and the domain extension called Merkle-Damgård with permutation (MDP). The hashing mode of Lesamnta-LW is a plain Merkle-Damgård iteration of a block cipher whose key size is half of its block size. First, a PRF mode is presented which produces multiple independent PRFs with multiple permutations and initialization vectors if the underlying block cipher is a PRP. Then, two applications of the PRF mode are presented. One is a PRF with minimum padding. Here, padding is said to be minimum if the produced message blocks do not include message blocks consisting only of the padded sequence for any non-empty input message. The other is a vector-input PRF using the PRFs with minimum padding. This work was supported in part by JSPS KAKENHI Grant Number JP16H02828.
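
    To make the two ingredients concrete, here is a hedged Python sketch of the MDP iteration and of what minimum padding means; the helper names and the exact padding bytes are illustrative assumptions, not taken from the paper.

        def pad_minimum(msg, n=16):
            # Minimum padding: only an incomplete final block is padded
            # (here with 10*); a full final block is left untouched, so no
            # block of a non-empty message consists solely of padding. The
            # two cases are then domain-separated by the mode (e.g. via
            # distinct permutations or IVs).
            blocks = [msg[i:i + n] for i in range(0, len(msg), n)] or [b""]
            incomplete = len(blocks[-1]) < n
            if incomplete:
                blocks[-1] += b"\x80" + b"\x00" * (n - len(blocks[-1]) - 1)
            return blocks, incomplete

        def mdp(compress, permute, iv, blocks):
            # Merkle-Damgard with permutation (MDP): plain Merkle-Damgard,
            # except the chaining value passes through a fixed permutation
            # before the final compression call.
            h = iv
            for m in blocks[:-1]:
                h = compress(h, m)
            return compress(permute(h), blocks[-1])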

    Preimage resistance beyond the birthday bound: Double-length hashing revisited

    Security proofs are an essential part of modern cryptography. Often the challenge is not to come up with appropriate schemes but rather to technically prove that these satisfy the desired security properties. We provide, for the first time, techniques for proving asymptotically optimal preimage resistance bounds for blockcipher-based double-length, double-call hash functions. More precisely, for some k > n we consider compression functions H: {0,1}^{k+n} → {0,1}^{2n} using two calls to an ideal block cipher with an n-bit block size. Optimally, an adversary trying to find a preimage for H should require Ω(2^{2n}) queries to the underlying block cipher. As a matter of fact, there have been several attempts to prove the preimage resistance of such compression functions, but no proof went beyond the Ω(2^{n}) barrier, leaving a huge gap compared to the optimal bound. In this paper, we introduce two new techniques for lifting this bound to Ω(2^{2n}). We demonstrate our new techniques on a simple and natural design of H: the concatenation of two instances of the well-known Davies-Meyer compression function.
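
    The analyzed design, concatenating two Davies-Meyer instances, can be sketched as follows in Python (assuming pycryptodome; the domain-separation byte that makes the two calls distinct is our simplification, not necessarily the paper's exact keying):

        from Crypto.Cipher import AES

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def davies_meyer(block, key):
            # Davies-Meyer: E_key(block) xor block.
            return xor(AES.new(key, AES.MODE_ECB).encrypt(block), block)

        def dm_concat(u, key_material):
            # 2n-bit output from two domain-separated Davies-Meyer calls on
            # the same n-bit input u (n = 128; 31 bytes of key material
            # plus one separation byte give AES-256 keys).
            left = davies_meyer(u, key_material + b"\x00")
            right = davies_meyer(u, key_material + b"\x01")
            return left + right

    With n = 128, a preimage should then cost on the order of 2^{256} cipher queries if the construction meets the optimal Ω(2^{2n}) bound proven in the paper.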

    Attacks On a Double Length Blockcipher-based Hash Proposal

    In this paper we attack a 2n-bit double-length hash function proposed by Lee et al. This proposal is a blockcipher-based hash function with hash rate 2/3. The designers claimed that it could achieve ideal collision resistance and gave a security proof. However, we find a collision attack with complexity of Ω(2^{3n/4}) and a preimage attack with complexity of Ω(2^{n}). Our result shows this construction is much worse than an ideal 2n-bit hash function.

    Construction of secure and fast hash functions using nonbinary error-correcting codes


    An Analysis of the Blockcipher-Based Hash Functions from PGV

    Preneel, Govaerts, and Vandewalle (1993) considered the 64 most basic ways to construct a hash function H: {0,1}^* → {0,1}^{n} from a blockcipher E: {0,1}^{n} × {0,1}^{n} → {0,1}^{n}. They regarded 12 of these 64 schemes as secure, though no proofs or formal claims were given. Here we provide a proof-based treatment of the PGV schemes. We show that, in the ideal-cipher model, the 12 schemes considered secure by PGV really are secure: we give tight upper and lower bounds on their collision resistance. Furthermore, by stepping outside of the Merkle-Damgård approach to analysis, we show that an additional 8 of the PGV schemes are just as collision resistant (up to a constant). Nonetheless, we are able to differentiate among the 20 collision-resistant schemes by considering their preimage resistance: only the 12 initial schemes enjoy optimal preimage resistance. Our work demonstrates that proving ideal-cipher-model bounds is a feasible and useful step for understanding the security of blockcipher-based hash-function constructions.
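
    Three of the twelve optimally secure PGV schemes are the classical Davies-Meyer, Matyas-Meyer-Oseas, and Miyaguchi-Preneel modes. A minimal Python sketch instantiated with AES-128, assuming pycryptodome (h is the n-bit chaining value, m the n-bit message block):

        from Crypto.Cipher import AES

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def E(k, x):
            return AES.new(k, AES.MODE_ECB).encrypt(x)

        def davies_meyer(h, m):        # H_i = E_m(h) xor h
            return xor(E(m, h), h)

        def matyas_meyer_oseas(h, m):  # H_i = E_h(m) xor m
            return xor(E(h, m), m)

        def miyaguchi_preneel(h, m):   # H_i = E_h(m) xor m xor h
            return xor(xor(E(h, m), m), h)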

    Design and Analysis of Multi-Block-Length Hash Functions

    Cryptographic hash functions are used in many cryptographic applications, and the design of provably secure hash functions (relative to various security notions) is an active area of research. Most of the currently existing hash functions use the Merkle-Damgård paradigm, where by appropriate iteration the hash function inherits its collision and preimage resistance from the underlying compression function. Compression functions can either be constructed from scratch or be built using well-known cryptographic primitives such as a blockcipher. One classic type of primitive-based compression function is single-block-length: designs whose output size matches the output length n of the underlying primitive. The single-block-length setting is well understood. Yet even for the optimally secure constructions, the (time) complexity of collision- and preimage-finding attacks is at most 2^{n/2}, respectively 2^{n}; when n = 128 (e.g., the Advanced Encryption Standard) the resulting bounds have been deemed unacceptable for current practice. As a remedy, multi-block-length primitive-based compression functions, which output more than n bits, have been proposed. This output expansion is typically achieved by calling the primitive multiple times and then combining the resulting primitive outputs in some clever way. In this thesis, we study the collision and preimage resistance of certain types of multi-call multi-block-length primitive-based compression (and the corresponding Merkle-Damgård iterated hash) functions. Our contribution is three-fold.

    First, we provide a novel framework for blockcipher-based compression functions that compress 3n bits to 2n bits and that use two calls to a 2n-bit key blockcipher with block-length n. We restrict ourselves to two parallel calls and analyze the sufficient conditions to obtain close-to-optimal collision resistance, either in the compression function or in the Merkle-Damgård iteration.

    Second, we present a new compression function h: {0,1}^{3n} → {0,1}^{2n}; it uses two parallel calls to an ideal primitive (public random function) from 2n to n bits. This is similar to MDC-2 or the recently proposed MJH by Lee and Stam (CT-RSA'11). However, unlike these constructions, already in the compression function we achieve that an adversary limited (asymptotically in n) to O(2^{2n(1-δ)/3}) queries (for any δ > 0) has a vanishing advantage in finding collisions. This is the first construction of this type offering collision resistance beyond 2^{n/2} queries.

    Our final contribution is the (re)analysis of the preimage and collision resistance of the Knudsen-Preneel compression functions in the setting of public random functions. Knudsen-Preneel compression functions utilize an [r,k,d] linear error-correcting code over F_{2^e} (for e > 1) to build a compression function from underlying blockciphers operating in Davies-Meyer mode. Knudsen and Preneel show, in the complexity-theoretic setting, that finding collisions takes time at least 2^{(d-1)n/2}. Preimage resistance, however, is conjectured to be the square of the collision resistance. Our results show that both the collision resistance proof and the preimage resistance conjecture of Knudsen and Preneel are incorrect: with the exception of two of the proposed parameters, the Knudsen-Preneel compression functions do not achieve the security level they were designed for.
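
    The first contribution's framework can be rendered schematically as the following skeleton (our illustration of the framework's shape, not code from the thesis):

        def two_call_compress(E, pre1, pre2, post, a, b, m):
            # 3n-to-2n compression from two parallel calls to a blockcipher
            # E with a 2n-bit key and n-bit block: the pre-processing maps
            # decide what feeds each (key, plaintext) pair, and the
            # post-processing combines the outputs with a feed-forward.
            # Which choices of pre/post yield close-to-optimal collision
            # resistance is exactly what the framework characterizes.
            y1 = E(*pre1(a, b, m))
            y2 = E(*pre2(a, b, m))
            return post(a, b, m, y1, y2)

        # One member of the family (Hirose-like wiring, c a nonzero constant):
        #   pre1(a, b, m) = (b || m, a)        pre2(a, b, m) = (b || m, a xor c)
        #   post(a, b, m, y1, y2) = (y1 xor a, y2 xor a xor c)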

    The preimage security of double-block-length compression functions

    We give improved bounds on the preimage security of the three "classical" double-block-length, double-call, blockcipher-based compression functions: Abreast-DM, Tandem-DM, and Hirose's scheme. For Hirose's scheme, we show that an adversary must make at least 2^{2n-5} blockcipher queries to achieve chance 0.5 of inverting a randomly chosen point in the range. For Abreast-DM and Tandem-DM we show that at least 2^{2n-10} queries are necessary. These bounds improve upon the previous best bounds of Ω(2^{n}) queries, and are optimal up to a constant factor since the compression functions in question have a range of size 2^{2n}.
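
    Of the three, Abreast-DM shows the double-call structure well. A minimal sketch of one compression step with AES-256 playing the (2n, n)-bit blockcipher, assuming pycryptodome (the bitwise complement of h in the second call is part of the original design):

        from Crypto.Cipher import AES

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def complement(a):
            return bytes(x ^ 0xFF for x in a)

        def abreast_dm(g, h, m):
            # One Abreast-DM step (n = 128): two cipher calls with the
            # 256-bit keys h || m and m || g, each output fed forward.
            g_new = xor(AES.new(h + m, AES.MODE_ECB).encrypt(g), g)
            h_new = xor(AES.new(m + g, AES.MODE_ECB).encrypt(complement(h)), h)
            return g_new, h_new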

    More Insights on Blockcipher-Based Hash Functions

    In this paper we give more insights on the security of blockcipher-based hash functions. We give a very simple criterion for building a large class of secure Single-Block-Length (SBL) or double-call Double-Block-Length (DBL) compression functions based on (kn, n) blockciphers, where kn is the key length, n is the block length, and k is an integer. This criterion is simpler than previous ones in the literature. From it we derive many results and reach a general conclusion about this class of blockcipher-based hash functions, solving the open problem left by Hirose. Our results show that to build a secure double-call DBL compression function, k >= m+1 is required, where m is the number of message blocks. Thus, if k = 2, we can only build rate-1/2 secure double-call DBL blockcipher-based compression functions. Finally, we point out flaws in Stam's theorem about supercharged functions, give a revision of this theorem, and add another condition for the security of supercharged compression functions.
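
    The rate consequence of the criterion k >= m+1 is simple arithmetic, illustrated below (function name ours):

        def max_rate_double_call_dbl(k):
            # A secure double-call DBL compression function over a (kn, n)
            # blockcipher processes at most m = k - 1 message blocks per
            # compression, spread over its two cipher calls: rate m/2.
            m = k - 1
            return m / 2

        assert max_rate_double_call_dbl(2) == 0.5  # k = 2 forces rate 1/2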