2 research outputs found

    Attacking the Knudsen-Preneel Compression Functions

    Abstract. Knudsen and Preneel (Asiacrypt’96 and Crypto’97) introduced a hash function design in which a linear error-correcting code is used to build a wide-pipe compression function from underlying blockciphers operating in Davies-Meyer mode. Their main design goal was to deliver compression functions with collision resistance up to, and even beyond, the block size of the underlying blockciphers. In this paper, we (re)analyse the preimage resistance of the Knudsen-Preneel compression functions in the setting of public random functions. We give a new preimage attack that is based on two observations. First, by using the right kind of queries it is possible to mount a non-adaptive preimage attack that is optimal in terms of query complexity. Second, by exploiting the dual code the subsequent problem of reconstructing a preimage from the queries can be rephrased as a problem related to the generalized birthday problem. As a consequence, the time complexity of our attack is intimately tied to the minimum distance of the dual code. Our new attack consistently beats the one given by Knudsen and Preneel (in one case our preimage attack even beats their collision attack) and demonstrates that the gap between their claimed collision resistance and the actual preimage resistance is surprisingly small. Moreover, our new attack falsifies their (conjectured) preimage resistance security bound and shows that intuitive bounds based on the number of ‘active’ components can be treacherous. Complementing our attack is a formal analysis of the query complexity (both lower and upper bounds) of preimage-finding attacks. This analysis shows that for many concrete codes the time complexity of our attack is optimal.
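
    The second observation above reduces preimage reconstruction to a problem related to the generalized birthday problem. To give a rough feel for the cost of that kind of step, here is a minimal sketch of Wagner-style 4-list merging (not the paper's actual attack): given four lists of random n-bit values, find one element per list whose XOR is zero by first joining pairs of lists on their low-order bits. The function names and parameters below are illustrative assumptions.

```python
import random
from collections import defaultdict

def four_list_xor(n, list_size, seed=0):
    """Find (x1, x2, x3, x4), one from each of four random n-bit lists,
    with x1 ^ x2 ^ x3 ^ x4 == 0.  Illustrative parameters only."""
    rng = random.Random(seed)
    lists = [[rng.getrandbits(n) for _ in range(list_size)] for _ in range(4)]
    lo = n // 3                       # join on the low n/3 bits first
    mask = (1 << lo) - 1

    def join(A, B):
        """Keep pairs (a, b) whose XOR vanishes on the low `lo` bits."""
        index = defaultdict(list)
        for b in B:
            index[b & mask].append(b)
        return [(a ^ b, a, b) for a in A for b in index[a & mask]]

    L12 = join(lists[0], lists[1])    # partial sums x1 ^ x2
    L34 = join(lists[2], lists[3])    # partial sums x3 ^ x4

    # The low bits already agree, so a full collision between the two merged
    # lists gives a zero XOR over all four values.
    seen = {v: (c, d) for v, c, d in L34}
    for v, a, b in L12:
        if v in seen:
            c, d = seen[v]
            return a, b, c, d
    return None

# With lists of size about 2^(n/3), a solution is found with good probability.
print(four_list_xor(n=24, list_size=1 << 9))
```

    With lists of size roughly 2^{n/3}, this finds a solution in about 2^{n/3} work instead of the 2^{n/2} of a naive meet-in-the-middle; it is this kind of list-merging cost that ties the attack's time complexity to the minimum distance of the dual code.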

    Design and Analysis of Multi-Block-Length Hash Functions

    Cryptographic hash functions are used in many cryptographic applications, and the design of provably secure hash functions (relative to various security notions) is an active area of research. Most currently existing hash functions use the Merkle-Damgård paradigm, where by appropriate iteration the hash function inherits its collision and preimage resistance from the underlying compression function. Compression functions can either be constructed from scratch or be built using well-known cryptographic primitives such as a blockcipher. One classic type of primitive-based compression function is single-block-length: it covers designs whose output size matches the output length n of the underlying primitive. The single-block-length setting is well understood. Yet even for the optimally secure constructions, the (time) complexity of collision- and preimage-finding attacks is at most 2^{n/2} and 2^n, respectively; when n = 128 (e.g., the Advanced Encryption Standard) the resulting bounds have been deemed unacceptable for current practice. As a remedy, multi-block-length primitive-based compression functions, which output more than n bits, have been proposed. This output expansion is typically achieved by calling the primitive multiple times and then combining the resulting primitive outputs in some clever way. In this thesis, we study the collision and preimage resistance of certain types of multi-call multi-block-length primitive-based compression (and the corresponding Merkle-Damgård iterated hash) functions. Our contribution is three-fold.
    First, we provide a novel framework for blockcipher-based compression functions that compress 3n bits to 2n bits and that use two calls to a 2n-bit key blockcipher with block length n. We restrict ourselves to two parallel calls and analyze the sufficient conditions to obtain close-to-optimal collision resistance, either in the compression function or in the Merkle-Damgård iteration.
    Second, we present a new compression function h: {0,1}^{3n} → {0,1}^{2n}; it uses two parallel calls to an ideal primitive (a public random function) from 2n to n bits. This is similar to MDC-2 or the recently proposed MJH by Lee and Stam (CT-RSA'11). However, unlike these constructions, already in the compression function we achieve that an adversary limited (asymptotically in n) to O(2^{2n(1-δ)/3}) queries (for any δ > 0) has a vanishing advantage in finding collisions. This is the first construction of this type offering collision resistance beyond 2^{n/2} queries.
    Our final contribution is the (re)analysis of the preimage and collision resistance of the Knudsen-Preneel compression functions in the setting of public random functions. Knudsen-Preneel compression functions utilize an [r,k,d] linear error-correcting code over 𝔽_{2^e} (for e > 1) to build a compression function from underlying blockciphers operating in Davies-Meyer mode. Knudsen and Preneel show, in the complexity-theoretic setting, that finding collisions takes time at least 2^{(d-1)n/2}. Preimage resistance, however, is conjectured to be the square of the collision resistance. Our results show that both the collision resistance proof and the preimage resistance conjecture of Knudsen and Preneel are incorrect: with the exception of two of the proposed parameter sets, the Knudsen-Preneel compression functions do not achieve the security level they were designed for.
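
    For concreteness, the blockcipher-based setting described above can be illustrated with a toy sketch of a Davies-Meyer compression function, f(h, m) = E_m(h) XOR h, iterated in the plain Merkle-Damgård fashion. This is only an illustration of the single-block-length setting, not any of the constructions analysed in the thesis; the SHA-256-based stand-in for the blockcipher and the chosen block sizes are assumptions made for the demo.

```python
import hashlib

N = 64                                # toy block length n in bits (illustrative)

def toy_blockcipher(key: int, block: int) -> int:
    """Stand-in for an n-bit blockcipher E_key(block) with a 2n-bit key.
    Not a real cipher; just a fixed pseudorandom-looking map for the demo."""
    data = key.to_bytes(16, "big") + block.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def davies_meyer(h: int, m: int) -> int:
    """Davies-Meyer compression: f(h, m) = E_m(h) XOR h."""
    return toy_blockcipher(m, h) ^ h

def merkle_damgard(message: bytes, iv: int = 0x0123456789ABCDEF) -> int:
    """Plain Merkle-Damgard iteration with length-strengthening padding."""
    padded = message + b"\x80"
    while (len(padded) + 16) % 16:    # leave room for a 128-bit length field
        padded += b"\x00"
    padded += (8 * len(message)).to_bytes(16, "big")

    h = iv
    for i in range(0, len(padded), 16):
        m = int.from_bytes(padded[i:i + 16], "big")   # one 2n-bit message block
        h = davies_meyer(h, m)                        # chaining value stays n bits
    return h

print(hex(merkle_damgard(b"example input")))
```

    Because the output here is only n bits, collisions can be found in about 2^{n/2} work no matter how strong the cipher is, which is exactly the limitation the multi-block-length designs studied in the thesis aim to overcome.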