21 research outputs found

    Fiat-Shamir for highly sound protocols is instantiable

    Get PDF
    The Fiat–Shamir (FS) transformation (Fiat and Shamir, Crypto '86) is a popular paradigm for constructing very efficient non-interactive zero-knowledge (NIZK) arguments and signature schemes from a hash function and any three-move interactive protocol satisfying certain properties. Despite its widespread applicability both in theory and in practice, the known positive results for proving security of the FS paradigm hold in the random oracle model only, i.e., they assume that the hash function is modeled as an external random function accessible to all parties. On the other hand, a sequence of negative results shows that for certain classes of interactive protocols, the FS transform cannot be instantiated in the standard model. We initiate the study of complementary positive results, namely of classes of interactive protocols where the FS transform does have standard-model instantiations. In particular, we show that for a class of “highly sound” protocols that we define, instantiating the FS transform via a q-wise independent hash function yields NIZK arguments and secure signature schemes. In the case of NIZK, we obtain a weaker “q-bounded” zero-knowledge flavor where the simulator works for all adversaries asking an a-priori bounded number of queries q; in the case of signatures, we obtain the weaker notion of random-message unforgeability against q-bounded random-message attacks. Our main idea is that when the protocol is highly sound, then instead of using random-oracle programming one can use complexity leveraging. The question is whether such highly sound protocols exist and, if so, which protocols lie in this class. We answer this question in the affirmative in the common reference string (CRS) model and under strong assumptions. Namely, assuming indistinguishability obfuscation and puncturable pseudorandom functions, we construct a compiler that transforms any 3-move interactive protocol with instance-independent commitments and simulators (a property satisfied by the Lapidot–Shamir protocol, Crypto '90) into a compiled protocol in the CRS model that is highly sound. We also present a second compiler that only requires instance-independent commitments (a property satisfied, for example, by the classical protocol for quadratic residuosity due to Blum, Crypto '81), allowing us to start from a larger class of protocols; for this second compiler we require dual-mode commitments. We hope that our work inspires more research on classes of (efficient) 3-move protocols where Fiat–Shamir is (efficiently) instantiable.
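    As a rough, hedged illustration of the transform itself (not of the paper's highly sound compilers), the sketch below collapses a generic three-move protocol into a non-interactive proof by deriving the verifier's challenge from a hash of the transcript. The sigma-protocol interface (`commit`, `respond`, `verify`) and the use of SHA-256 as the hash are illustrative assumptions.

```python
import hashlib

# Minimal sketch of the Fiat-Shamir transform for a generic three-move
# (sigma) protocol. The sigma-protocol interface and the choice of SHA-256
# are assumptions for illustration; the paper instead studies instantiating
# the hash with a q-wise independent function for "highly sound" protocols.

def fs_prove(sigma, statement: bytes, witness: bytes):
    commitment, state = sigma.commit(statement, witness)
    # The verifier's random challenge is replaced by a hash of the transcript.
    challenge = hashlib.sha256(statement + commitment).digest()
    response = sigma.respond(state, challenge)
    return commitment, response

def fs_verify(sigma, statement: bytes, proof) -> bool:
    commitment, response = proof
    challenge = hashlib.sha256(statement + commitment).digest()
    return sigma.verify(statement, commitment, challenge, response)
```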

    Universal Computational Extractors and the Superfluous Padding Assumption for Indistinguishability Obfuscation

    Get PDF
    Universal Computational Extractors (UCEs), introduced by Bellare, Hoang and Keelveedhi (CRYPTO 2013), are a framework of assumptions on hash functions that allow one to instantiate random oracles in a large variety of settings. Brzuska, Farshim and Mittelbach (CRYPTO 2014) showed that a large class of UCE assumptions with computationally unpredictable sources cannot be achieved if indistinguishability obfuscation exists. In the process of circumventing obfuscation-based attacks, new UCE notions emerged, most notably UCEs with respect to statistically unpredictable sources, which suffice for a large class of applications. However, the only standard-model constructions of UCEs are for a small subclass considering only q-query sources which are strongly statistically unpredictable (Brzuska, Mittelbach; Asiacrypt 2014). The contributions of this paper are threefold: 1) We show a surprising equivalence for the notions of strong unpredictability and (plain) unpredictability, thereby lifting the construction of Brzuska and Mittelbach to achieve q-query UCEs for statistically unpredictable sources. This yields standard-model instantiations for various (q-query) primitives including deterministic public-key encryption, message-locked encryption, multi-bit point obfuscation, CCA-secure encryption, and more. For some of these, our construction yields the first standard-model candidate. 2) We study the blow-up that occurs in indistinguishability obfuscation proof techniques due to puncturing and state the Superfluous Padding Assumption for indistinguishability obfuscation, which allows us to lift the q-query restriction of our construction. We validate the assumption by showing that it holds for virtual black-box obfuscation. 3) Brzuska and Mittelbach require a strong form of point obfuscation secure in the presence of auxiliary input for their construction of UCEs. We show that this assumption is indeed necessary for the construction of injective UCEs.
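    To make the notion more concrete, the sketch below spells out the basic UCE distinguishing experiment from the Bellare–Hoang–Keelveedhi framework: a source queries an oracle that is either the keyed hash or a lazily sampled random function and hands some leakage to a distinguisher, who additionally sees the hash key. The callables `H`, `S`, and `D` are assumed interfaces and all parameter names are hypothetical.

```python
import os
import secrets

# Hedged sketch of the UCE distinguishing game (Bellare, Hoang, Keelveedhi;
# CRYPTO 2013). The source S only sees the oracle, the distinguisher D only
# sees the hash key and the leakage; UCE security for a class of sources
# asks that no efficient D can tell the two oracles apart.

def uce_game(H, S, D, key_len: int = 32, out_len: int = 32) -> bool:
    b = secrets.randbits(1)              # challenge bit
    hk = os.urandom(key_len)             # hash key
    table = {}                           # lazily sampled random function

    def hash_oracle(x: bytes) -> bytes:
        if b == 1:
            return H(hk, x)              # real: keyed hash
        if x not in table:
            table[x] = os.urandom(out_len)
        return table[x]                  # ideal: fresh randomness per input

    leakage = S(hash_oracle)             # the source never sees hk
    guess = D(hk, leakage)               # the distinguisher never queries the oracle
    return guess == b                    # True iff D wins this run
```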

    An Overview of the Hybrid Argument

    Get PDF
    The hybrid argument is a fundamental and well-established proof technique of modern cryptography for showing the indistinguishability of distributions. As such, its details are often glossed over, and phrases along the lines of “this can be proven via a standard hybrid argument” are common in the cryptographic literature. Yet, the hybrid argument is not always as straightforward as we make it out to be, but instead comes with its share of intricacies. For example, a commonly stated variant says that if one has a sequence of hybrids H_0, ..., H_t, and each pair H_i, H_{i+1} is computationally indistinguishable, then so are the extreme hybrids H_0 and H_t. We reiterate the fact that, in this form, the statement is only true for constant t, and we translate the common approach for general t into a rigorous statement. The present paper is not a research paper in the traditional sense. It mainly consists of an excerpt from the book The Theory of Hash Functions and Random Oracles - An Approach to Modern Cryptography (Information Security and Cryptography, Springer, 2021), providing a detailed discussion of the intricacies of the hybrid argument that we believe is of interest to the broader cryptographic community. The excerpt is reproduced with permission of Springer.
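    For reference, the counting step behind the argument is the triangle inequality over the hybrid chain; the notation below is ours and only sketches the standard telescoping bound, not the paper's precise formalization.

```latex
% If a distinguisher D has advantage at most \epsilon on every adjacent
% pair of hybrids, the telescoping/triangle-inequality step gives
\[
  \bigl|\Pr[D(H_0)=1] - \Pr[D(H_t)=1]\bigr|
  \;\le\; \sum_{i=0}^{t-1} \bigl|\Pr[D(H_i)=1] - \Pr[D(H_{i+1})=1]\bigr|
  \;\le\; t \cdot \epsilon .
\]
```

    Roughly, the subtlety is that for t growing with the security parameter one needs a single bound epsilon that holds uniformly across all adjacent pairs (typically obtained via a reduction that picks a random hybrid index), rather than a separate negligible bound for each pair.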

    Reset Indifferentiability and its Consequences

    Get PDF
    The equivalence of the random-oracle model and the ideal-cipher model has been studied in a long series of results. Holenstein, Künzler, and Tessaro (STOC, 2011) have recently completed the picture positively, where, roughly speaking, equivalence means that each model is indifferentiable from the other. However, under the stronger notion of reset indifferentiability this picture changes significantly, as Demay et al. (EUROCRYPT, 2013) and Luykx et al. (ePrint, 2012) demonstrate. We complement these latter works in several ways. First, we show that any simulator satisfying the reset indifferentiability notion must be stateless and pseudo-deterministic. Using this characterization we show that, with respect to reset indifferentiability, two ideal models are either equivalent or incomparable; that is, one model cannot be strictly stronger than the other. In the case of the random-oracle model and the ideal-cipher model, this implies that the two are incomparable. Finally, we examine weaker notions of reset indifferentiability that, while not allowing composition in general, allow composition for a large class of multi-stage games. Here we show that the seemingly much weaker notion of 1-reset indifferentiability proposed by Luykx et al. is equivalent to reset indifferentiability. Hence, the impossibility of coming up with a reset-indifferentiable construction transfers to the setting where only one reset is permitted, thereby re-opening the quest for an achievable and meaningful notion in between the two variants.

    A Cryptographic Analysis of OPACITY

    Get PDF
    We take a closer look at the Open Protocol for Access Control, Identification, and Ticketing with privacY (OPACITY). This Diffie–Hellman-based protocol is supposed to provide secure and privacy-friendly key establishment for contactless environments. It is promoted by the US Department of Defense and is by now available in several standards such as ISO/IEC 24727-6 and ANSI 504-1. To the best of our knowledge, no detailed cryptographic analysis has been publicly available so far. Thus, we investigate to what extent the common security properties for authenticated key exchange and impersonation resistance, as well as privacy-related properties like untraceability and deniability, are met. OPACITY is not a single protocol but, in fact, a suite consisting of two protocols, one called Zero-Key Management (ZKM) and the other one named Full Secrecy (FS). Our results indicate that the ZKM version does not achieve even very basic security guarantees. The FS protocol, on the other hand, provides a decent level of security for key establishment. Yet, our results show that the persistent-binding steps, for re-establishing previous connections, conflict with fundamental privacy properties.

    Hash Combiners for Second Pre-Image Resistance, Target Collision Resistance and Pre-Image Resistance have Long Output

    Get PDF
    A (k, l) hash-function combiner for property P is a construction that, given access to l hash functions, yields a single cryptographic hash function which has property P as long as at least k out of the l hash functions have that property. Hash-function combiners are used to hedge against the failure of one or more of the individual components. One example of the application of hash-function combiners is their use in previous versions of the TLS and SSL protocols [10, 8]. The concatenation combiner, which simply concatenates the outputs of all hash functions, is an example of a robust combiner for collision resistance. However, its output length is, naturally, significantly longer than each individual hash-function output, while the security bounds are not necessarily stronger than those of the strongest input hash function. In 2006 Boneh and Boyen asked whether a robust black-box combiner for collision resistance can exist that has an output length which is significantly less than that of the concatenation combiner [4]. Regrettably, this question has since been answered in the negative for fully black-box constructions (where hash-function and adversary access is treated as black-box); that is, combiners (in this setting) for collision resistance roughly need at least the output length of the concatenation combiner to be robust [4, 5, 16, 17]. In this paper we examine weaker notions of collision resistance, namely second pre-image resistance, target collision resistance, and pre-image resistance.
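    For concreteness, the concatenation combiner discussed above can be sketched in a few lines; the choice of SHA-256 and SHA3-256 as the two component functions is purely illustrative, and the final assert only demonstrates the output-length cost the paper is concerned with.

```python
import hashlib

# Sketch of the concatenation combiner C(x) = H1(x) || H2(x). A collision for
# C is simultaneously a collision for H1 and for H2, so C stays collision
# resistant as long as at least one component does; the price is that the
# output is as long as both component outputs combined.

def concat_combiner(msg: bytes) -> bytes:
    h1 = hashlib.sha256(msg).digest()     # stands in for the first hash function
    h2 = hashlib.sha3_256(msg).digest()   # stands in for the second hash function
    return h1 + h2

digest = concat_combiner(b"example input")
assert len(digest) == 32 + 32             # 64-byte output from 32-byte components
```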

    Cryptophia's Short Combiner for Collision-Resistant Hash Functions

    No full text
    A combiner for collision-resistant hash functions takes two functions as input and implements a hash function with the guarantee that it is collision-resistant if one of the functions is. It has been shown that such a combiner cannot have short output (Pietrzak, Crypto 2008); that is, its output length is lower bounded by roughly 2n if the ingoing functions output n-bit hash values. In this paper, we present two novel definitions for hash-function combiners that allow one to bypass the lower bound: the first is an extended semi-black-box definition. The second is a new game-based, fully black-box definition which allows one to better analyze combiners in idealized settings such as the random-oracle model or the indifferentiability framework (Maurer, Renner, and Holenstein, TCC 2004). We then present a new combiner which is robust for pseudorandom functions (in the traditional sense), which does not increase the output length of its underlying functions, and which is collision-resistant in the indifferentiability setting. Our combiner is particularly relevant in practical scenarios, where security proofs are often given in idealized models, and our combiner, in the same idealized model, yields strong security guarantees while remaining short.

    Iowa DNR Wastewater News, April 12, 2021

    Get PDF
    E-newsletter providing information about Iowa natural resources activities across the state. Produced by the Iowa Department of Natural Resources

    Random Oracles in the Standard Model

    Get PDF
    Provable security is a fundamental concept of modern cryptography (see, e.g., Katz and Lindell; Introduction to Modern Cryptography, Chapter 1, 2007). In order to argue about security, we first require a precise and rigorous definition of what security means (e.g., a definition of secure encryption). Such a security definition, in particular, contains a description of the capabilities of an adversary, i.e., a model of the adversary which should come as close to reality as possible. Given a security model, the provable security approach is to provide a mathematical proof that a given construction achieves the desired level of security. In other words, we prove that no (efficient) adversary can break the security with good probability given that it behaves as defined within the security model. Typically, proofs reduce the security of a construction to an unproven cryptographic hardness assumption---we show that the existence of an adversary violates an assumption---which, preferably, is as simple as possible. Furthermore, any assumption needs to be stated precisely. Examples of cryptographic hardness assumptions include complexity-theoretic assumptions such as P vs. NP, the assumed existence of objects such as one-way functions, or long-standing open problems from number theory such as factoring large integers or computing discrete logarithms in certain groups. A widely used technique for the construction of cryptographic schemes is the so-called random oracle methodology, introduced in 1993 by Bellare and Rogaway (CCS, 1993). As before, we start with a precise model of security, which is extended to include a uniformly random function that may be evaluated by any party (including the construction and the adversary). As a random function has a huge, if not infinite, description, parties cannot be given its code but, instead, are provided with black-box access to an oracle which evaluates the function for them. This oracle is called the random oracle. Then, as before, a mathematical proof is given that a construction is secure, as per the definition, relative to the random oracle. Finally, to implement the scheme in practice, the random oracle is replaced by a cryptographic hash function (such as SHA-3). No mathematical model can fully capture reality and, thus, cast in the framework of provable security, we may consider a random-oracle security model as being somewhat further away from reality than a standard security model: in addition to the abstractions of the standard security model, it is assumed that adversaries do not make any use of the code of a hash function; indeed, it is assumed that they do not even evaluate the hash function on their own but use an external device for the evaluation (i.e., black-box access). Now, if we trust schemes that are designed according to the provable security approach, should we then also trust schemes devised via the random oracle methodology? This question has led to a huge debate within the cryptographic community and has been discussed for more than two decades. The discussion is fueled by results showing that the extension of security models to include a random oracle may produce provably secure schemes that cannot be securely implemented. In 1998, Canetti, Goldreich, and Halevi (STOC, 1998) showed that schemes exist that are inherently insecure but which should be secure according to the random oracle methodology.
In more detail, they present a public-key encryption scheme which an adversary can trivially attack when given the code of the hash function that was used to replace the random oracle. Note that this differs from, e.g., side-channel attacks: while there, too, the attack succeeds because it lies outside the model, implementations can potentially protect against such attacks and, thus, secure implementations may exist. With the scheme presented by Canetti et al., on the other hand, there is, provably, no secure implementation although the scheme is secure in the random oracle model. Despite these negative results, many of the schemes that we trust on a daily basis---examples include the standardized public-key encryption scheme RSA-OAEP as well as the standardized signature schemes RSA-PSS and DSA---only have proofs in the random oracle model. Similarly, for many advanced cryptographic concepts, including IND-secure deterministic public-key encryption, correlated-input secure hash functions, universal hardcore functions, and many others, we (so far) only have constructions in the random oracle model. One reason for the success of random oracles is that they allow the design of very efficient and natural schemes. Furthermore, the power of random oracles enables us to realize concepts which we would not know how to implement without random oracles. A third, and very compelling, argument in favor of the random oracle methodology is that the random oracle heuristic seems to be a good one: no random-oracle scheme which was not designed to exhibit inconsistencies of the random oracle model has been attacked due to the use of random oracles. However, if a scheme is, indeed, secure, should we then not be able to understand and pinpoint the underlying source of hardness? In this thesis we study random oracles with the help of program obfuscation (in particular, indistinguishability obfuscation and point-function obfuscation). The study of obfuscation has a long tradition in computer science, and specifically in cryptography, but only recent advances gave rise to the first candidate constructions of provably secure general-purpose indistinguishability obfuscators (Garg et al.; FOCS, 2013). Intuitively, a program obfuscator takes as input a program and produces a functionally equivalent but unintelligible program, i.e., the obfuscated program hides how it operates. Conceptually, this is very close to one of the fundamental abstractions made within the random oracle model, where hash functions do not have an explicit and efficient description but can be evaluated only via black-box access to the random oracle. While an obfuscated hash function still has an explicit and efficient description, the description should hide the way the function works and, thus, intuitively, should be of no help to any adversary. Using obfuscation-based techniques we show how to instantiate the random oracle in various cryptographic constructions. Among others, we obtain the first standard-model (i.e., without random oracles) candidate construction of a universal hardcore function with long output, a q-query correlated-input secure hash function, and q-query IND-secure deterministic public-key encryption. We obtain our positive results by instantiating various forms of universal computational extractors.
The universal computational extractor (UCE) framework was introduced by Bellare, Hoang, and Keelveedhi (CRYPTO, 2013) to provide (very strong) standard-model notions of hash functions that allow instantiating random oracles in a wide range of applications. Intriguingly, even though obfuscation allows us to show how to replace random oracles in certain situations, it also allows us to show limitations of the random oracle methodology as well as of UCEs. Using obfuscation-based techniques, we prove that several concrete UCE assumptions (including all originally proposed assumptions) cannot hold if indistinguishability obfuscation exists. (We note that these negative results inspired the weaker UCE notions that lie at the core of our positive constructions.) Assuming the existence of indistinguishability obfuscation also allows us to extend the uninstantiability techniques of Canetti et al. (STOC, 1998) and show that a large class of random-oracle transformations is not sound. This affects the Encrypt-with-Hash transformation (Bellare et al.; CRYPTO, 2007) for constructing deterministic public-key encryption, as well as the widely used Fujisaki–Okamoto transformation (Fujisaki, Okamoto; CRYPTO, 1999), which transforms weak public-key encryption schemes into strong public-key encryption schemes. An often repeated criticism of random-oracle uninstantiability results is that the schemes only fail to be secure because they are designed to do so and, furthermore, that their artificial design conflicts with good cryptographic practice (see, for example, Koblitz and Menezes; Journal of Cryptology, 2007). Similar criticism can also be voiced for our counterexamples to the general applicability of the above-mentioned random-oracle transformations. While this does not refute the mathematical validity of such uninstantiability results, we do, however, also present a very different counterexample to the soundness of the random oracle methodology: we show that if indistinguishability obfuscation exists, then a strong variant of point-function obfuscation (which can similarly be interpreted as a strong form of symmetric encryption) cannot be achieved without the help of random oracles, while at the same time there are simple and elegant constructions in the random oracle model. We note that the same holds for our negative results for UCEs. In summary, we develop techniques for working with obfuscation which allow us to show that the existence of indistinguishability obfuscation implies that various random-oracle techniques may lead to insecure schemes. Our results suggest, once again, that we should be careful with random-oracle proofs, and we hope that they spark further research to overcome the need for random oracles in the first place. Concerning the latter, we make first steps by proposing new and widely applicable UCE notions together with standard-model candidate constructions, showing that UCEs may indeed be a viable alternative to the use of random oracles.
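The “simple and elegant” random-oracle construction of point-function obfuscation alluded to above is, in folklore form, just a salted hash of the hidden point. The sketch below uses SHA-256 in place of the random oracle, which is exactly the heuristic replacement step the thesis scrutinizes; all names are illustrative and this is not the thesis's strong variant of the notion.

```python
import os
import hashlib
import hmac

# Folklore random-oracle sketch of point-function obfuscation: to hide the
# point x, publish a random salt r together with RO(r || x); evaluating on a
# candidate y just recomputes and compares the hash. SHA-256 stands in for
# the random oracle here.

def obfuscate_point(x: bytes) -> tuple[bytes, bytes]:
    r = os.urandom(32)
    return r, hashlib.sha256(r + x).digest()

def evaluate(obf: tuple[bytes, bytes], y: bytes) -> bool:
    r, h = obf
    return hmac.compare_digest(hashlib.sha256(r + y).digest(), h)

obf = obfuscate_point(b"secret point")
assert evaluate(obf, b"secret point") and not evaluate(obf, b"wrong guess")
```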