
    Blockcipher Based Hashing Revisited

    We revisit the rate-1 blockcipher based hash functions as first studied by Preneel, Govaerts and Vandewalle (Crypto '93) and later extensively analysed by Black, Rogaway and Shrimpton (Crypto '02). We analyze a further generalization where any pre- and postprocessing is considered. By introducing a new tweak to earlier proof methods, we obtain a simpler proof that is both more general and tighter than existing results. As an added benefit, this also leads to a clearer understanding of the current classification of rate-1 blockcipher based schemes as introduced by Preneel et al. and refined by Black et al.
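    To make the rate-1 notion concrete, below is a minimal sketch of Davies-Meyer, one of the classical PGV constructions covered by this classification: one block cipher call per message block, with a feed-forward XOR. AES-128 stands in for the block cipher here as an illustrative assumption (this is not code from the paper) and requires the pycryptodome package.

```python
# A minimal sketch of the Davies-Meyer rate-1 scheme:
# h_i = E_{m_i}(h_{i-1}) XOR h_{i-1}, using AES-128 as the cipher.
from Crypto.Cipher import AES

BLOCK = 16  # AES block size in bytes

def davies_meyer(h: bytes, m: bytes) -> bytes:
    """One compression step: the message block keys the cipher,
    the chaining value is encrypted, and feed-forward XOR is applied."""
    assert len(h) == BLOCK and len(m) == BLOCK
    c = AES.new(m, AES.MODE_ECB).encrypt(h)
    return bytes(x ^ y for x, y in zip(c, h))
```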

    Preimage resistance beyond the birthday bound: Double-length hashing revisited

    Security proofs are an essential part of modern cryptography. Often the challenge is not to come up with appropriate schemes but rather to technically prove that these satisfy the desired security properties. We provide for the first time techniques for proving asymptotically optimal preimage resistance bounds for block cipher based double-length, double-call hash functions. More precisely, for some $k > n$ we consider compression functions $H\colon \{0,1\}^{k+n} \rightarrow \{0,1\}^{2n}$ using two calls to an ideal block cipher with an $n$-bit block size. Optimally, an adversary trying to find a preimage for $H$ should require $\Omega(2^{2n})$ queries to the underlying block cipher. As a matter of fact, there have been several attempts to prove the preimage resistance of such compression functions, but no proof went beyond the $\Omega(2^{n})$ barrier, leaving a huge gap compared to the optimal bound. In this paper, we introduce two new techniques that lift this bound to $\Omega(2^{2n})$. We demonstrate our new techniques for a simple and natural design of $H$: the concatenation of two instances of the well-known Davies-Meyer compression function.
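    For concreteness, the sketch below shows one plausible double-length, double-call shape: two Davies-Meyer instances under related keys, concatenated into a $2n$-bit digest. It is an illustrative variant under assumed parameters (AES-256 as the ideal cipher with $k = 256 > n = 128$, and a hypothetical key-separation tweak), not necessarily the exact design analysed in the paper. Requires pycryptodome.

```python
# Hedged sketch of a double-length, double-call compression function:
# H(u || v) = DM(u, v) || DM(u', v) with u' a related key.
from Crypto.Cipher import AES

KEYLEN, BLOCK = 32, 16  # k = 256 bits > n = 128 bits

def dm(key: bytes, block: bytes) -> bytes:
    # Single Davies-Meyer call: E_key(block) XOR block.
    c = AES.new(key, AES.MODE_ECB).encrypt(block)
    return bytes(x ^ y for x, y in zip(c, block))

def double_length_h(u: bytes, v: bytes) -> bytes:
    """Maps a (k+n)-bit input (u, v) to a 2n-bit digest with two cipher calls."""
    assert len(u) == KEYLEN and len(v) == BLOCK
    u2 = bytes([u[0] ^ 0x01]) + u[1:]  # hypothetical key-separation constant
    return dm(u, v) + dm(u2, v)
```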

    Oblivious Hashing Revisited, and Applications to Asymptotically Efficient ORAM and OPRAM

    Oblivious RAM (ORAM) is a powerful cryptographic building block that allows a program to provably hide its access patterns to sensitive data. Since the original proposal of ORAM by Goldreich and Ostrovsky, numerous improvements have been made. To date, the best asymptotic overhead achievable for general block sizes is $O(\log^2 N/\log\log N)$, due to an elegant scheme by Kushilevitz et al., which in turn relies on the oblivious Cuckoo hashing scheme by Goodrich and Mitzenmacher. In this paper, we make the following contributions: we first revisit the prior $O(\log^2 N/\log\log N)$-overhead ORAM result. We demonstrate that this prior result is subtly incomplete, owing to the incompleteness of a core building block, namely, Goodrich and Mitzenmacher's oblivious Cuckoo hashing scheme. Even though we show how to patch the prior result so that Goodrich and Mitzenmacher's elegant blueprint for oblivious Cuckoo hashing can be fully realized, it is clear that the extreme complexity of oblivious Cuckoo hashing has made understanding, implementation, and proofs difficult. We show that there is a conceptually simple $O(\log^2 N/\log\log N)$-overhead ORAM that dispenses with oblivious Cuckoo hashing entirely. We show that such a conceptually simple scheme lends itself to further extensions. Specifically, we obtain the first $O(\log^2 N/\log\log N)$ Oblivious Parallel RAM (OPRAM) scheme, not only matching the performance of the best known sequential ORAM, but also achieving super-logarithmic improvements over known OPRAM schemes.
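    Since Cuckoo hashing is the building block at issue, the following is a plain, non-oblivious cuckoo hash table sketch to convey the underlying mechanics. The oblivious variant discussed in the paper must additionally hide which slots are probed; this toy code makes no attempt to do so, and its table sizes and eviction limit are illustrative assumptions.

```python
class CuckooTable:
    """Plain (non-oblivious) cuckoo hashing: each key has one candidate
    slot in each of two tables; insertion evicts and relocates items
    until a free slot is found."""

    def __init__(self, size: int, max_kicks: int = 32):
        self.tables = [[None] * size, [None] * size]
        self.size = size
        self.max_kicks = max_kicks

    def _slot(self, which: int, key) -> int:
        # Two hash functions, one per table, derived by domain separation.
        return hash((which, key)) % self.size

    def insert(self, key, value) -> None:
        item = (key, value)
        which = 0
        for _ in range(2 * self.max_kicks):
            i = self._slot(which, item[0])
            if self.tables[which][i] is None:
                self.tables[which][i] = item
                return
            # Evict the occupant and relocate it to its other table.
            item, self.tables[which][i] = self.tables[which][i], item
            which ^= 1
        raise RuntimeError("too many evictions; rebuild with fresh hash functions")

    def lookup(self, key):
        for which in (0, 1):
            entry = self.tables[which][self._slot(which, key)]
            if entry is not None and entry[0] == key:
                return entry[1]
        return None
```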

    A Case-Based Reasoning Method for Locating Evidence During Digital Forensic Device Triage

    The role of triage in digital forensics is disputed, with some practitioners questioning its reliability for identifying evidential data. Although successfully implemented in the field of medicine, triage has not established itself to the same degree in digital forensics. This article presents a novel approach to triage for digital forensics. Case-Based Reasoning Forensic Triager (CBR-FT) is a method for collecting and reusing past digital forensic investigation information in order to highlight likely evidential areas on a suspect operating system, thereby helping an investigator to decide where to search for evidence. The CBR-FT framework is discussed and the results of twenty test triage examinations are presented. CBR-FT has been shown to be a more effective method of triage when compared to a practitioner using a leading commercial application.
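    As a deliberately simplified illustration of the retrieve-and-reuse idea behind case-based reasoning for triage, the sketch below ranks file-system locations by how often they yielded evidence in past cases of the same type. The field names, scoring rule and case base are hypothetical assumptions, not the published CBR-FT method.

```python
from collections import Counter

def rank_locations(past_cases, crime_type):
    """Rank locations by how often they held evidence in past cases of
    the same type, so similar new cases are searched there first."""
    hits = Counter()
    for case in past_cases:
        if case["crime_type"] == crime_type:
            hits.update(case["evidence_locations"])
    return hits.most_common()

# Hypothetical case base: prioritise search areas for a new fraud case.
cases = [
    {"crime_type": "fraud", "evidence_locations": ["/Users/a/Documents", "/tmp"]},
    {"crime_type": "fraud", "evidence_locations": ["/Users/a/Documents"]},
]
print(rank_locations(cases, "fraud"))  # [('/Users/a/Documents', 2), ('/tmp', 1)]
```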

    More is Less: Perfectly Secure Oblivious Algorithms in the Multi-Server Setting

    The problem of Oblivious RAM (ORAM) has traditionally been studied in a single-server setting, but more recently the multi-server setting has also been considered. Yet it is still unclear whether the multi-server setting has any inherent advantages, e.g., whether the multi-server setting can be used to achieve stronger security goals or provably better efficiency than is possible in the single-server case. In this work, we construct a perfectly secure 3-server ORAM scheme that outperforms the best known single-server scheme by a logarithmic factor. In the process, we also show, for the first time, that there exist specific algorithms for which multiple servers can overcome known lower bounds in the single-server setting.

    HeW: A Hash Function based on Lightweight Block Cipher FeW

    A new hash function, HeW, based on the lightweight block cipher FeW is proposed in this paper. The compression function of HeW is built on the block cipher FeW. It is believed that the key expansion algorithm of a block cipher slows down the performance of the overlying hash function, making block ciphers a less favourable choice for designing a compression function. As a countermeasure, we cut the key size of FeW from 80 bits to 64 bits and provide a secure and efficient key expansion algorithm for the modified key size. The FeW-based compression function plays a vital role in the efficiency of HeW. We test the hash output for randomness using the NIST statistical test suite, and we test the avalanche effect, bit variance and near-collision resistance. We also give security estimates for HeW against differential cryptanalysis, length extension attacks, slide attacks and rotational distinguishers.
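    For context on how a compression function such as HeW's is turned into a full hash, here is a generic Merkle-Damgard iteration with length padding over a 64-bit block, matching FeW's block size. The compress() callable, the IV, and the toy stand-in below are placeholders for illustration, not the FeW/HeW specification.

```python
def md_hash(message: bytes, compress, block_len: int = 8,
            iv: bytes = b"\x00" * 8) -> bytes:
    """Iterate a compression function over padded input, Merkle-Damgard style."""
    # Pad: append 0x80, then zeros, then the 8-byte big-endian bit length.
    bitlen = (8 * len(message)).to_bytes(8, "big")
    padded = message + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % block_len) + bitlen
    h = iv
    for i in range(0, len(padded), block_len):
        h = compress(h, padded[i:i + block_len])  # one cipher-based step per block
    return h

# Toy stand-in for a real cipher-based compression function (NOT secure):
def toy_compress(h: bytes, m: bytes) -> bytes:
    return bytes(((x + 7 * y + 1) & 0xFF) for x, y in zip(h, m))

print(md_hash(b"hello world", toy_compress).hex())
```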