    A Note on the Chi-square Method: A Tool for Proving Cryptographic Security

    In CRYPTO 2017, Dai, Hoang, and Tessaro introduced the Chi-square method (χ² method), which can be applied to obtain an upper bound on the statistical distance between two joint probability distributions. The authors applied this method to prove the pseudorandom function security (PRF-security) of the sum of two random permutations. In this work, we revisit their proof, find a non-trivial gap in it, and describe how to plug this gap; this has already been done by Dai et al. in the revised version of their CRYPTO 2017 paper. A complete, correct, and transparent proof of the full security of the sum of two random permutations construction is highly desirable, especially due to its importance and its two-decade-old legacy. The proposed χ² method seems to have potential for application to similar problems, where a similar gap may creep into a proof. These considerations motivate us to communicate our observation in a formal way. On the positive side, we provide a very simple proof of the PRF-security of the truncated random permutation construction (a method to construct a PRF from a random permutation) using the χ² method. We note that a proof of the PRF-security due to Stam is already known for this construction in a purely statistical context. However, the use of the χ² method makes the proof much simpler.
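
    For illustration only, here is a minimal Python sketch of the two constructions this abstract refers to, the sum of two random permutations and the truncated random permutation, with toy parameters; the block size, names, and sampling code are our own and not taken from the paper.

```python
# Toy sketch (not the paper's notation): the sum of two random permutations
# and a truncated random permutation, the two PRF constructions discussed
# above.  N and the truncation length m are illustrative choices.
import secrets

N = 8                      # toy block size in bits
DOMAIN = 1 << N

def random_permutation():
    """Sample a uniformly random permutation of {0, ..., 2^N - 1}."""
    table = list(range(DOMAIN))
    for i in range(DOMAIN - 1, 0, -1):          # Fisher-Yates shuffle
        j = secrets.randbelow(i + 1)
        table[i], table[j] = table[j], table[i]
    return table

P1, P2 = random_permutation(), random_permutation()

def sum_of_permutations(x: int) -> int:
    """'Sum' (XOR) of two independent random permutations, used as a PRF."""
    return P1[x] ^ P2[x]

def truncated_permutation(x: int, m: int = 4) -> int:
    """Keep only the m most significant bits of one permutation's output."""
    return P1[x] >> (N - m)
```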

    Deck-Based Wide Block Cipher Modes and an Exposition of the Blinded Keyed Hashing Model

    We present two tweakable wide block cipher modes from doubly-extendable cryptographic keyed (deck) functions and a keyed hash function: double-decker and docked-double-decker. Double-decker is a direct generalization of Farfalle-WBC of Bertoni et al. (ToSC 2017(4)), and is a four-round Feistel network on two arbitrarily large branches, where the middle two rounds call deck functions and the first and last rounds call the keyed hash function. Docked-double-decker is a variant of double-decker where the bulk of the input to the deck functions is moved to the keyed hash functions. We prove that the distinguishing advantage of the resulting wide block ciphers is simply two times the sum of the pseudorandom function distinguishing advantage of the deck function and the blinded keyed hashing distinguishing advantage of the keyed hash functions. We demonstrate that blinded keyed hashing is more general than the conventional notion of XOR-universality, and that it allows us to instantiate our constructions with keyed hash functions that have a very strong claim on bkh security but not necessarily on XOR-universality, such as Xoofffie (ePrint 2018/767). The bounds of double-decker and docked-double-decker are moreover reduced tweak-dependent, informally meaning that collisions on the keyed hash function for different tweaks only have a limited impact. We describe two use cases that can exploit this property opportunistically to get stronger security than what would be achieved with prior solutions: SSD encryption, where each sector can only be written to a limited number of times, and incremental tweaks, where one includes the state of the system in the variable-length tweak and appends new data incrementally.
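
    As a rough structural illustration of the double-decker shape described above (a four-round Feistel on two branches, keyed hash in the outer rounds, deck functions in the middle rounds), the following hedged Python sketch assumes equal-length branches and treats H, F1, and F2 as opaque placeholders; it is not the specification from the paper.

```python
# Hedged sketch of the four-round Feistel shape of double-decker: rounds 1
# and 4 call a keyed hash H (taking the tweak), rounds 2 and 3 call deck
# functions F1 and F2.  Branches are assumed equal-length so that XOR is
# well defined; tweak handling and lengths are simplified placeholders.
from typing import Callable

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def double_decker_shape(left: bytes, right: bytes, tweak: bytes,
                        H: Callable[[bytes, bytes], bytes],
                        F1: Callable[[bytes], bytes],
                        F2: Callable[[bytes], bytes]) -> tuple:
    left = xor_bytes(left, H(right, tweak))    # round 1: keyed hash
    right = xor_bytes(right, F1(left))         # round 2: deck function
    left = xor_bytes(left, F2(right))          # round 3: deck function
    right = xor_bytes(right, H(left, tweak))   # round 4: keyed hash
    return left, right
```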

    Populating the Zoo of Rugged Pseudorandom Permutations

    A Rugged Pseudorandom Permutation (RPRP) is a variable-input-length tweakable cipher satisfying a security notion that is intermediate between tweakable PRP and tweakable SPRP. It was introduced at CRYPTO 2022 by Degabriele and Karadžić, who additionally showed how to generically convert such a primitive into nonce-based and nonce-hiding AEAD schemes satisfying either misuse-resistance or release-of-unverified-plaintext security, as well as Nonce-Set AEAD, which has applications in protocols like QUIC and DTLS. Their work shows that RPRPs are powerful and versatile cryptographic primitives. However, the RPRP security notion itself can seem rather contrived, and the motivation behind it is not immediately clear. Moreover, they only provided a single RPRP construction, called UIV, which puts into question the generality of their modular approach and whether other instantiations are even possible. In this work, we address this question positively by presenting new RPRP constructions, thereby validating their modular approach and providing further justification in support of the RPRP security definition. Furthermore, we present a more refined view of their results by showing that strictly weaker RPRP variants, which we introduce, suffice for many of their transformations. From a theoretical perspective, our results show that the well-known three-round Feistel structure achieves stronger security as a permutation than a mere pseudorandom permutation (the security established in the seminal result by Luby and Rackoff). We conclude on a more practical note by showing how to extend the left domain of one RPRP construction for applications that require larger values in order to meet the desired level of security.
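
    The theoretical remark above concerns the classical three-round Feistel structure of Luby and Rackoff. As a reminder of that structure only (not of the new RPRP constructions in the paper), a minimal Python sketch with placeholder round functions:

```python
# Plain three-round Feistel over two n-bit halves, the Luby-Rackoff
# structure referenced above.  f1, f2, f3 stand in for independent keyed
# round functions; the block size and names are illustrative only.
def feistel3(left: int, right: int, f1, f2, f3, n: int = 64) -> tuple:
    mask = (1 << n) - 1
    left = (left ^ f1(right)) & mask     # round 1
    right = (right ^ f2(left)) & mask    # round 2
    left = (left ^ f3(right)) & mask     # round 3
    return left, right
```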

    Cryptographic Hash Functions in Groups and Provable Properties

    We consider several "provably secure" hash functions that compute simple sums in a well-chosen group (G,*). Security properties of such functions provably translate in a natural way to computational problems in G that are simple to define and possibly also hard to solve. Given k disjoint lists L_i of group elements, the k-sum problem asks for g_i ∊ L_i such that g_1 * g_2 * ... * g_k = 1_G. Hardness of the problem in the respective groups follows from some "standard" assumptions used in public-key cryptology such as hardness of integer factoring, discrete logarithms, lattice reduction and syndrome decoding. We point out evidence that the k-sum problem may even be harder than the above problems. Two hash functions based on the group k-sum problem, SWIFFTX and FSB, were submitted to NIST as candidates for the future SHA-3 standard. Both submissions were supported by some sort of a security proof. We show that the assessment of security levels provided in the proposals is not related to the proofs included. The main claims on security are supported exclusively by considerations about available attacks. By introducing "second-order" bounds on bounds on security, we expose the limits of such an approach to provable security. A problem with the way security is quantified does not necessarily mean a problem with security itself. Although FSB does have a history of failures, recent versions of the two above functions have resisted cryptanalytic efforts well. This evidence, as well as the several connections to more standard problems, suggests that the k-sum problem in some groups may be considered hard on its own, and possibly lead to provable bounds on security. The complexity of the non-trivial tree algorithm is becoming a standard tool for measuring the associated hardness. We propose modifications to the multiplicative Very Smooth Hash and derive security from multiplicative k-sums, in contrast to the original reductions that related to factoring or discrete logarithms. Although the original reductions remain valid, we measure security in a new, more aggressive way. This allows us to relax the parameters and hash faster. We obtain a function that is only three times slower compared to SHA-256 and is estimated to offer at least equivalent collision resistance. The speed can be doubled by the use of a special modulus; such a modified function is supported exclusively by the hardness of multiplicative k-sums modulo a power of two. Our efforts culminate in a new multiplicative k-sum function in finite fields that further generalizes the design of Very Smooth Hash. In contrast to the previous variants, the memory requirements of the new function are negligible. The fastest instance of the function expected to offer 128-bit collision resistance runs at 24 cycles per byte on an Intel Core i7 processor and approaches the 17.4 figure of SHA-256. The new functions proposed in this thesis do not provably achieve a usual security property such as preimage or collision resistance from a well-established assumption. They do however enjoy unconditional provable separation of inputs that collide. Changes in input that are small with respect to a well-defined measure never lead to identical output in the compression function.
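
    To make the underlying object concrete, the following toy Python sketch shows a hash of the "group k-sum" kind discussed above: each message chunk selects an element from a fixed random list, and the digest is the product of the selections modulo a prime. All parameters are illustrative and far too small for real security; a collision for this toy function directly yields a solution to the corresponding k-sum problem.

```python
# Toy "group sum" hash: chunk i of the message indexes into a fixed random
# list LISTS[i], and the digest is the product of the chosen elements modulo
# a prime.  Modulus, chunk size, and number of chunks are illustrative only.
import random

P = (1 << 127) - 1           # Mersenne prime used only as a toy modulus
CHUNK_BITS = 8               # each chunk is one byte, indexing 256 elements
K = 16                       # number of chunks / lists

_rng = random.Random(2024)   # fixed seed so the function is deterministic
LISTS = [[_rng.randrange(2, P) for _ in range(1 << CHUNK_BITS)]
         for _ in range(K)]

def group_sum_hash(message: bytes) -> int:
    """Hash exactly K one-byte chunks by multiplying selected group elements."""
    assert len(message) == K
    digest = 1
    for i, byte in enumerate(message):
        digest = (digest * LISTS[i][byte]) % P
    return digest
```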

    Design and Analysis of Multi-Block-Length Hash Functions

    Cryptographic hash functions are used in many cryptographic applications, and the design of provably secure hash functions (relative to various security notions) is an active area of research. Most of the currently existing hash functions use the Merkle-Damgård paradigm, where by appropriate iteration the hash function inherits its collision and preimage resistance from the underlying compression function. Compression functions can either be constructed from scratch or be built using well-known cryptographic primitives such as a blockcipher. One classic type of primitive-based compression function is single-block-length: it contains designs that have an output size matching the output length n of the underlying primitive. The single-block-length setting is well understood. Yet even for the optimally secure constructions, the (time) complexity of collision- and preimage-finding attacks is at most 2^{n/2} and 2^n, respectively; when n = 128 (e.g., Advanced Encryption Standard) the resulting bounds have been deemed unacceptable for current practice. As a remedy, multi-block-length primitive-based compression functions, which output more than n bits, have been proposed. This output expansion is typically achieved by calling the primitive multiple times and then combining the resulting primitive outputs in some clever way. In this thesis, we study the collision and preimage resistance of certain types of multi-call multi-block-length primitive-based compression (and the corresponding Merkle-Damgård iterated hash) functions. Our contribution is three-fold. First, we provide a novel framework for blockcipher-based compression functions that compress 3n bits to 2n bits and that use two calls to a 2n-bit key blockcipher with block-length n. We restrict ourselves to two parallel calls and analyze the sufficient conditions to obtain close-to-optimal collision resistance, either in the compression function or in the Merkle-Damgård iteration. Second, we present a new compression function h: {0,1}^{3n} → {0,1}^{2n}; it uses two parallel calls to an ideal primitive (public random function) from 2n to n bits. This is similar to MDC-2 or the recently proposed MJH by Lee and Stam (CT-RSA'11). However, unlike these constructions, already in the compression function we achieve that an adversary limited (asymptotically in n) to O(2^{2n(1-δ)/3}) queries (for any δ > 0) has a vanishing advantage in finding collisions. This is the first construction of this type offering collision resistance beyond 2^{n/2} queries. Our final contribution is the (re)analysis of the preimage and collision resistance of the Knudsen-Preneel compression functions in the setting of public random functions. Knudsen-Preneel compression functions utilize an [r,k,d] linear error-correcting code over 𝔽_{2^e} (for e > 1) to build a compression function from underlying blockciphers operating in the Davies-Meyer mode. Knudsen and Preneel show, in the complexity-theoretic setting, that finding collisions takes time at least 2^{(d-1)n/2}. Preimage resistance, however, is conjectured to be the square of the collision resistance. Our results show that both the collision resistance proof and the preimage resistance conjecture of Knudsen and Preneel are incorrect: with the exception of two of the proposed parameters, the Knudsen-Preneel compression functions do not achieve the security level they were designed for.
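
    The generic two-call shape studied in the first two contributions can be sketched as follows. The pre- and post-processing maps below are placeholders chosen only to show the data flow of a 3n-to-2n-bit compression function making two parallel calls to a 2n-to-n-bit primitive; they are not one of the secure instances analysed in the thesis.

```python
# Generic shape only: a 3n-to-2n-bit compression function built from two
# parallel calls to an ideal primitive f: {0,1}^{2n} -> {0,1}^n.  The way
# the three input words feed the calls and the feed-forward are arbitrary
# placeholders, not a construction from the thesis.
from typing import Callable

def compress_two_call(x: bytes, f: Callable[[bytes], bytes],
                      n_bytes: int) -> bytes:
    assert len(x) == 3 * n_bytes
    a, b, c = x[:n_bytes], x[n_bytes:2 * n_bytes], x[2 * n_bytes:]
    y1 = f(a + b)                                  # first parallel call
    y2 = f(b + c)                                  # second parallel call
    left = bytes(u ^ v for u, v in zip(y1, c))     # placeholder feed-forward
    right = bytes(u ^ v for u, v in zip(y2, a))
    return left + right                            # 2n-bit output
```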

    Indifferentiable Authenticated Encryption

    We study Authenticated Encryption with Associated Data (AEAD) from the viewpoint of composition in arbitrary (single-stage) environments. We use the indifferentiability framework to formalize the intuition that a “good” AEAD scheme should have random ciphertexts subject to decryptability. Within this framework, we can then apply the indifferentiability composition theorem to show that such schemes offer extra safeguards wherever the relevant security properties are not known, or cannot be predicted in advance, as in general-purpose crypto libraries and standards. We show, on the negative side, that generic composition (in many of its configurations) and well-known classical and recent schemes fail to achieve indifferentiability. On the positive side, we give a provably indifferentiable Feistel-based construction, which reduces the round complexity from at least 6, needed for blockciphers, to only 3 for encryption. This result is not too far off the theoretical optimum, as we give a lower bound that rules out the indifferentiability of any construction with fewer than 2 rounds.

    An Analysis of the Blockcipher-Based Hash Functions from PGV

    Preneel, Govaerts, and Vandewalle (1993) considered the 64 most basic ways to construct a hash function H: {0,1}* → {0,1}^n from a blockcipher E: {0,1}^n × {0,1}^n → {0,1}^n. They regarded 12 of these 64 schemes as secure, though no proofs or formal claims were given. Here we provide a proof-based treatment of the PGV schemes. We show that, in the ideal-cipher model, the 12 schemes considered secure by PGV really are secure: we give tight upper and lower bounds on their collision resistance. Furthermore, by stepping outside of the Merkle-Damgård approach to analysis, we show that an additional 8 of the PGV schemes are just as collision resistant (up to a constant). Nonetheless, we are able to differentiate among the 20 collision-resistant schemes by considering their preimage resistance: only the 12 initial schemes enjoy optimal preimage resistance. Our work demonstrates that proving ideal-cipher-model bounds is a feasible and useful step for understanding the security of blockcipher-based hash-function constructions.
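
    One of the twelve schemes regarded as secure by PGV is Davies-Meyer, H_i = E(m_i, H_{i-1}) ⊕ H_{i-1}. Below is a minimal Python sketch of that scheme in a Merkle-Damgård iteration; the blockcipher E, the IV, and message padding are placeholders supplied by the caller.

```python
# Davies-Meyer compression iterated in Merkle-Damgard fashion:
# H_i = E(m_i, H_{i-1}) XOR H_{i-1}.  E is a placeholder for an n-bit
# blockcipher keyed by the message block; padding is assumed done already.
from typing import Callable, Iterable

def davies_meyer_md(blocks: Iterable[bytes], iv: bytes,
                    E: Callable[[bytes, bytes], bytes]) -> bytes:
    h = iv
    for m in blocks:                                        # MD iteration
        h = bytes(a ^ b for a, b in zip(E(m, h), h))        # Davies-Meyer
    return h
```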

    Forkcipher: A New Primitive for Authenticated Encryption of Very Short Messages

    This is an extended version of the article with the same title accepted at Asiacrypt 2019. Highly efficient encryption and authentication of short messages is an essential requirement for enabling security in constrained scenarios such as the CAN FD in automotive systems (max. message size 64 bytes), massive IoT, critical communication domains of 5G, and Narrowband IoT, to mention a few. In addition, one of the NIST lightweight cryptography project requirements is that AEAD schemes shall be “optimized to be efficient for short messages (e.g., as short as 8 bytes)”. In this work we introduce and formalize a novel primitive in symmetric cryptography called a forkcipher. A forkcipher is a keyed function expanding a fixed-length input to a fixed-length output. We define its security as indistinguishability under chosen ciphertext attack. We give a generic construction and validate it via the new iterate-fork-iterate design paradigm. We then propose ForkSkinny as a concrete forkcipher instance with a public tweak and based on SKINNY: a tweakable lightweight block cipher constructed using the TWEAKEY framework. We conduct extensive cryptanalysis of ForkSkinny against classical and structure-specific attacks. We demonstrate the applicability of forkciphers by designing three new provably-secure, nonce-based AEAD modes which offer performance and security tradeoffs and are optimized for efficiency of very short messages. Considering a reference block size of 16 bytes, and ignoring possible hardware optimizations, our new AEAD schemes beat the best SKINNY-based AEAD modes. More generally, we show forkciphers are suited for lightweight applications dealing with predominantly short messages, while at the same time allowing handling of arbitrary message sizes. Furthermore, our hardware implementation results show that when we exploit the inherent parallelism of ForkSkinny we achieve the best performance when directly compared with the most efficient mode instantiated with the SKINNY block cipher.
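
    The iterate-fork-iterate idea can be pictured with a short hedged sketch: some initial rounds process the input, the internal state is then forked, and two independent sets of rounds produce the two output blocks. The round functions below are opaque placeholders; this shows the general shape only, not ForkSkinny.

```python
# Shape of iterate-fork-iterate: iterate, fork the state, iterate twice.
# rounds_init, rounds_left, rounds_right are placeholder round functions.
from typing import Callable

def iterate_fork_iterate(x: bytes,
                         rounds_init: Callable[[bytes], bytes],
                         rounds_left: Callable[[bytes], bytes],
                         rounds_right: Callable[[bytes], bytes]) -> tuple:
    state = rounds_init(x)         # initial iteration on the input block
    c0 = rounds_left(state)        # branch 0 after the fork
    c1 = rounds_right(state)       # branch 1 after the fork
    return c0, c1                  # fixed-length input -> two output blocks
```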

    Integrated-Key Cryptographic Hash Functions

    Cryptographic hash functions have always played a major role in most cryptographic applications. Traditionally, hash functions were designed in the keyless setting, where a hash function accepts a variable-length message and returns a fixed-length fingerprint. Unfortunately, over the years, significant weaknesses were reported on instances of some popular "keyless" hash functions. This has motivated the research community to start considering the dedicated-key setting, where a hash function is publicly keyed. In this approach, families of hash functions are constructed such that the individual members are indexed by different publicly-known keys. This has, evidently, also allowed for more rigorous security arguments. However, it turns out that converting an existing keyless hash function into a dedicated-key one is usually non-trivial since the underlying keyless compression function of the keyless hash function does not normally accommodate the extra key input. In this thesis we define and formalise a flexible approach to solve this problem. Hash functions adopting our approach are said to be constructed in the integrated-key setting, where keyless hash functions are seamlessly and transparently transformed into keyed variants by introducing an extra component accompanying the (still keyless) compression function to handle the key input separately outside the compression function. We also propose several integrated-key constructions and prove that they are collision resistant, pre-image resistant, 2nd pre-image resistant, indifferentiable from a Random Oracle (RO), indistinguishable from Pseudorandom Functions (PRFs), and unforgeable when instantiated as Message Authentication Codes (MACs) in the private-key setting. We further prove that hash functions constructed in the integrated-key setting are indistinguishable from their variants in the conventional dedicated-key setting, which implies that proofs from the dedicated-key setting can be naturally reduced to the integrated-key setting.
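
    The integrated-key idea of keeping the compression function keyless while a separate component handles the key can be sketched roughly as follows; both the mixing component g and the keyless compression function used here are arbitrary stand-ins, not the constructions proposed in the thesis.

```python
# Illustrative shape only: the compression function f stays keyless, and a
# separate component g mixes the key with each message block outside of f.
# Both f and g below are stand-ins (built from hashlib/hmac for brevity).
import hashlib
import hmac
from typing import Iterable

def g(key: bytes, block: bytes, index: int) -> bytes:
    """Placeholder key-handling component, kept outside the compression function."""
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

def f(chaining: bytes, message: bytes) -> bytes:
    """Stand-in for a keyless compression function."""
    return hashlib.sha256(chaining + message).digest()

def integrated_key_hash(key: bytes, blocks: Iterable[bytes], iv: bytes) -> bytes:
    h = iv
    for i, m in enumerate(blocks):
        h = f(h, g(key, m, i))      # the key enters only through g, never f
    return h
```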

    The elastic use of 'some': a comparative study between L1 and L2 speakers in educational settings

    This study explored 'some' using a refreshing approach: focusing on its elasticity. It was a comparative study of L1 (American) and L2 (Chinese and Vietnamese) speakers and found that L2 speakers are vaguer than L1 speakers, and that the elasticity of 'some' is manifested through the fluid, stretchable and strategic features of its pragmatic meanings and functions. The implication is that an understanding of its elastic nature may be integrated into the curriculum of English language teaching.