
    On the Complexity of Collision Resistant Hash Functions: New and Old Black-Box Separations

    The complexity of collision-resistant hash functions has long been studied in the theory of cryptography. While we often think of them as a Minicrypt primitive, black-box separations demonstrate that constructions from one-way functions are unlikely. Indeed, theoretical constructions of collision-resistant hash functions are based on rather structured assumptions. We make two contributions to this study: 1. A New Separation: We show that collision-resistant hashing does not imply hard problems in the class Statistical Zero Knowledge in a black-box way. 2. New Proofs: We give new proofs of the results of Simon, ruling out black-box reductions of collision-resistant hashing to one-way permutations, and of Asharov and Segev, ruling out black-box reductions to indistinguishability obfuscation. The new proofs are quite different from the previous ones and are based on simple coupling arguments.
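
    As a reading aid (standard textbook notation, not taken from this abstract), a compressing family $H = \{H_k : \{0,1\}^{2n} \to \{0,1\}^n\}$ is collision-resistant if no efficient adversary $A$ can produce a colliding pair, except with negligible probability over the key:
    \[
      \Pr_{k \leftarrow \{0,1\}^n}\Bigl[(x, x') \leftarrow A(k) \;:\; x \neq x' \,\wedge\, H_k(x) = H_k(x')\Bigr] \leq \mathrm{negl}(n).
    \]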

    Black-Box Uselessness: Composing Separations in Cryptography

    Black-box separations have been successfully used to identify the limits of a powerful set of tools in cryptography, namely those of black-box reductions. They allow proving that a large set of techniques are not capable of basing one primitive $P$ on another primitive $Q$. Such separations, however, do not say anything about the power of the combination of primitives $Q_1, Q_2$ for constructing $P$, even if $P$ cannot be based on $Q_1$ or $Q_2$ alone. By introducing and formalizing the notion of black-box uselessness, we develop a framework that allows us to make such conclusions. At an informal level, we call a primitive $Q$ black-box useless (BBU) for $P$ if $Q$ cannot help construct $P$ in a black-box way, even in the presence of another primitive $Z$. This is formalized by saying that $Q$ is BBU for $P$ if for any auxiliary primitive $Z$, whenever there exists a black-box construction of $P$ from $(Q, Z)$, then there must already also exist a black-box construction of $P$ from $Z$ alone. We also formalize various other notions of black-box uselessness, and consider in particular the setting of efficient black-box constructions in which the number of queries to $Q$ is below a certain threshold. Impagliazzo and Rudich (STOC '89) initiated the study of black-box separations by separating key agreement from one-way functions. We prove a number of initial results in this direction, which indicate that one-way functions are perhaps also black-box useless for key agreement. In particular, we show that OWFs are black-box useless in any construction of key agreement in either of the following settings: (1) the key agreement has perfect correctness and one of the parties calls the OWF a constant number of times; (2) the key agreement consists of a single round of interaction (as in Merkle-type protocols). We conjecture that OWFs are indeed black-box useless for general key agreement. We also show that certain techniques for proving black-box separations can be lifted to the uselessness regime. In particular, we show that the lower bounds of Canetti, Kalai, and Paneth (TCC '15) as well as Garg, Mahmoody, and Mohammed (Crypto '17 & TCC '17) for assumptions behind indistinguishability obfuscation (IO) can be extended to derive black-box uselessness of a variety of primitives for obtaining (approximately correct) IO. These results follow the so-called "compiling out" technique, which we prove to imply black-box uselessness. Finally, we study the complementary landscape of black-box uselessness, namely black-box helpfulness. We put forth the conjecture that one-way functions are black-box helpful for building collision-resistant hash functions. We define two natural relaxations of this conjecture, and prove that both of these conjectures are implied by a natural conjecture regarding random permutations equipped with a collision-finder oracle, as defined by Simon (Eurocrypt '98). This conjecture may also be of interest in other contexts, such as the amplification of hardness.
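
    Schematically (our notation, not the paper's), the BBU definition above has the quantifier structure
    \[
      Q \in \mathrm{BBU}(P) \iff \forall Z : \bigl(P \preceq_{\mathrm{bb}} (Q, Z)\bigr) \implies \bigl(P \preceq_{\mathrm{bb}} Z\bigr),
    \]
    where $A \preceq_{\mathrm{bb}} B$ abbreviates "there is a black-box construction of $A$ from $B$".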

    Separating Two-Round Secure Computation From Oblivious Transfer

    We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing a round-preserving reduction to the task of secure 2-party computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by only making black-box use of 2-round OT. This is of theoretical and potentially also practical value, as black-box use of primitives tends to lead to more efficient constructions. Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT. As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed "non-compact" variant of 2-party homomorphic secret sharing from 2-round OT.
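
    For reference, the oblivious transfer functionality at the center of this separation is the standard one: the sender inputs two messages $m_0, m_1$, the receiver inputs a choice bit $b$, the receiver learns only the chosen message, and the sender learns nothing:
    \[
      \mathrm{OT}\bigl((m_0, m_1),\, b\bigr) = (\bot,\, m_b), \qquad b \in \{0,1\}.
    \]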

    On Basing Auxiliary-Input Cryptography on NP-Hardness via Nonadaptive Black-Box Reductions

    Constructing one-way functions based on NP-hardness is a central challenge in theoretical computer science. Unfortunately, Akavia et al. [Akavia et al., 2006] presented strong evidence that nonadaptive black-box (BB) reductions are insufficient to solve this challenge. However, should we give up such a central proof technique even for an intermediate step? In this paper, we turn our attention from standard cryptographic primitives to weaker cryptographic primitives that are allowed to take auxiliary input, and continue to explore the capability of nonadaptive BB reductions to base auxiliary-input primitives on NP-hardness. Specifically, we prove the following: - if we base an auxiliary-input pseudorandom generator (AIPRG) on NP-hardness via a nonadaptive BB reduction, then the polynomial hierarchy collapses; - if we base an auxiliary-input one-way function (AIOWF) or auxiliary-input hitting set generator (AIHSG) on NP-hardness via a nonadaptive BB reduction, then an (i.o.-)one-way function also exists based on NP-hardness (via an adaptive BB reduction). These theorems extend our knowledge of nonadaptive BB reductions beyond the current worst-to-average-case framework. The first result provides new evidence that nonadaptive BB reductions are insufficient to base AIPRG on NP-hardness. The second result also yields a weaker but still surprising consequence of nonadaptive BB reductions, i.e., a one-way function based on NP-hardness. In fact, the second result can be interpreted in the following two opposite ways. Pessimistically, it shows that basing AIOWF or AIHSG on NP-hardness via nonadaptive BB reductions is harder than constructing a one-way function based on NP-hardness, which can be regarded as a negative result. Note that AIHSG is a weak primitive implied even by the hardness of learning; thus, this pessimistic view provides conceptually stronger limitations than the currently known limitations on nonadaptive BB reductions. Optimistically, it offers new hope: a breakthrough construction of auxiliary-input primitives might also yield constructions of standard cryptographic primitives. This optimistic view enhances the significance of further investigation into constructing auxiliary-input or other intermediate cryptographic primitives instead of standard cryptographic primitives.
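
    One common formulation of auxiliary-input one-wayness, following Ostrovsky and Wigderson (the exact quantifiers vary slightly across papers, so treat this as a sketch), asks that every efficient adversary fail to invert the polynomial-time computable family $f = \{f_z\}$ on infinitely many auxiliary inputs:
    \[
      \forall \text{ PPT } A \;\; \exists \text{ infinitely many } z : \;\; \Pr_{x}\bigl[A(z, f_z(x)) \in f_z^{-1}(f_z(x))\bigr] \leq \mathrm{negl}(|z|).
    \]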

    A Study of Separations in Cryptography: New Results and New Models

    For more than 20 years, black-box impossibility results have been used to argue the infeasibility of constructing certain cryptographic primitives (e.g., key agreement) from others (e.g., one-way functions). In this dissertation we further extend the frontier of this field by demonstrating several new impossibility results as well as a new framework for studying a more general class of constructions. Our first two results demonstrate the impossibility of black-box constructions of two commonly used cryptographic primitives. In our first result we study the feasibility of black-box constructions of predicate encryption schemes from standard assumptions and demonstrate strong limitations on the types of schemes that can be constructed. In our second result we study black-box constructions of constant-round zero-knowledge proofs from one-way permutations and show that, under commonly believed complexity assumptions, no such constructions exist. A widely recognized limitation of black-box impossibility results, however, is that they say nothing about the usefulness of (known) non-black-box techniques. This state of affairs is unsatisfying, as we would at least like to rule out constructions using the set of techniques we have at our disposal. With this motivation in mind, in the final result of this dissertation we propose a new framework for black-box constructions with a non-black-box flavor, specifically, those that rely on zero-knowledge proofs relative to some oracle. Our framework is powerful enough to capture a large class of known constructions; however, we show that the original black-box separation of key agreement from one-way functions still holds even in this non-black-box setting that allows for zero-knowledge proofs.

    On the Efficiency of Classical and Quantum Secure Function Evaluation

    We provide bounds on the efficiency of secure one-sided output two-party computation of arbitrary finite functions from trusted distributed randomness in the statistical case. From these results we derive bounds on the efficiency of protocols that use different variants of OT as a black-box. When applied to implementations of OT, these bounds generalize most known results to the statistical case. Our results hold in particular for transformations between a finite number of primitives and for any error. In the second part we study the efficiency of quantum protocols implementing OT. While most classical lower bounds for perfectly secure reductions of OT to distributed randomness still hold in the quantum setting, we present a statistically secure protocol that violates these bounds by an arbitrarily large factor. We then prove a weaker lower bound that does hold in the statistical quantum setting and implies that even quantum protocols cannot extend OT. Finally, we present two lower bounds for reductions of OT to commitments, and a protocol based on string commitments that is optimal with respect to both of these bounds.

    Towards Non-Black-Box Separations of Public Key Encryption and One Way Function

    Separating public key encryption from one-way functions is one of the fundamental goals of complexity-based cryptography. Beginning with the seminal work of Impagliazzo and Rudich (STOC, 1989), a sequence of works have ruled out certain classes of reductions from public key encryption (PKE)---or even key agreement---to one-way functions. Unfortunately, known results---so-called black-box separations---do not apply to settings where the construction and/or reduction are allowed to directly access the code, or circuit, of the one-way function. In this work, we present a meaningful, non-black-box separation between public key encryption (PKE) and one-way functions. Specifically, we introduce the notion of $\textsf{BBN}^-$ reductions (similar to the $\textsf{BBN}\text{p}$ reductions of Baecher et al. (ASIACRYPT, 2013)), in which the construction $E$ accesses the underlying primitive in a black-box way, but wherein the universal reduction $R$ receives the efficient code/circuit of the underlying primitive as input and is allowed oracle access to the adversary $\textsf{Adv}$. We additionally require that the number of oracle queries made to $\textsf{Adv}$, and the success probability of $R$, are independent of the run-time/circuit size of the underlying primitive. We prove that there is no non-adaptive $\textsf{BBN}^-$ reduction from PKE to one-way functions, under the assumption that certain types of strong one-way functions exist. Specifically, we assume that there exists a regular one-way function $f$ such that there is no Arthur-Merlin protocol proving that ``$z \not\in \textsf{Range}(f)$'', where soundness holds with high probability over ``no instances,'' $y \sim f(U_n)$, and Arthur may receive polynomial-sized, non-uniform advice. This assumption is related to the average-case analogue of the widely believed assumption $\textbf{coNP} \not\subseteq \textbf{NP}/\textbf{poly}$.
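
    Schematically (our notation), a $\textsf{BBN}^-$ reduction splits the two kinds of access as follows: the construction $E^f$ uses the one-way function $f$ only as an oracle, while the universal reduction $R$ receives a description $\langle f \rangle$ of its code and gets oracle access to the adversary, with query count and success probability independent of the size of $\langle f \rangle$:
    \[
      \exists R \;\; \forall f, \textsf{Adv} : \quad \textsf{Adv} \text{ breaks } E^{f} \implies R^{\textsf{Adv}}(\langle f \rangle) \text{ inverts } f.
    \]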

    Hardness vs. (Very Little) Structure in Cryptography: A Multi-Prover Interactive Proofs Perspective

    The hardness of highly-structured computational problems gives rise to a variety of public-key primitives. On one hand, the structure exhibited by such problems underlies the basic functionality of public-key primitives, but on the other hand it may endanger public-key cryptography in its entirety via potential algorithmic advances. This subtle interplay initiated a fundamental line of research on whether structure is inherently necessary for cryptography, starting with Rudich's early work (PhD Thesis '88) and recently leading to that of Bitansky, Degwekar and Vaikuntanathan (CRYPTO '17). Identifying the structure of computational problems with their corresponding complexity classes, Bitansky et al. proved that a variety of public-key primitives (e.g., public-key encryption, oblivious transfer and even functional encryption) cannot be used in a black-box manner to construct either any hard language that has $\mathsf{NP}$-verifiers both for the language itself and for its complement, or any hard language (and even promise problem) that has a statistical zero-knowledge proof system -- corresponding to hardness in the structured classes $\mathsf{NP} \cap \mathsf{coNP}$ or $\mathsf{SZK}$, respectively, from a black-box perspective. In this work we prove that the same variety of public-key primitives do not inherently require even very little structure in a black-box manner: We prove that they do not imply any hard language that has multi-prover interactive proof systems both for the language and for its complement -- corresponding to hardness in the class $\mathsf{MIP} \cap \mathsf{coMIP}$ from a black-box perspective. Conceptually, given that $\mathsf{MIP} = \mathsf{NEXP}$, our result rules out languages with very little structure. Additionally, we prove a similar result for collision-resistant hash functions, and more generally for any cryptographic primitive that exists relative to a random oracle. Already the cases of languages that have $\mathsf{IP}$ or $\mathsf{AM}$ proof systems both for the language itself and for its complement, which we rule out as immediate corollaries, lead to intriguing insights. For the case of $\mathsf{IP}$, where our result can be circumvented using non-black-box techniques, we reveal a gap between black-box and non-black-box techniques. For the case of $\mathsf{AM}$, where circumventing our result via non-black-box techniques would be a major development, we both strengthen and unify the proofs of Bitansky et al. for languages that have $\mathsf{NP}$-verifiers both for the language itself and for its complement and for languages that have a statistical zero-knowledge proof system.
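
    Since $\mathsf{MIP} = \mathsf{NEXP}$ (Babai, Fortnow, and Lund) and consequently $\mathsf{coMIP} = \mathsf{coNEXP}$, the class of languages ruled out above can equivalently be written as
    \[
      \mathsf{MIP} \cap \mathsf{coMIP} = \mathsf{NEXP} \cap \mathsf{coNEXP},
    \]
    which is why membership in it is regarded as imposing only very little structure.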

    Structure vs Hardness through the Obfuscation Lens

    Much of modern cryptography, starting from public-key encryption and going beyond, is based on the hardness of structured (mostly algebraic) problems like factoring, discrete log, or finding short lattice vectors. While structure is perhaps what enables advanced applications, it also puts the hardness of these problems in question. In particular, this structure often places them in low (so-called structured) complexity classes such as NP $\cap$ coNP or statistical zero-knowledge (SZK). Is this structure really necessary? For some cryptographic primitives, such as one-way permutations and homomorphic encryption, we know that the answer is yes: they imply hard problems in NP $\cap$ coNP and SZK, respectively. In contrast, one-way functions do not imply such hard problems, at least not by black-box reductions. Yet, for many basic primitives such as public-key encryption, oblivious transfer, and functional encryption, we do not have any answer. We show that the above primitives, and many others, do not imply hard problems in NP $\cap$ coNP or SZK via black-box reductions. In fact, we first show that even the very powerful notion of Indistinguishability Obfuscation (IO) does not imply such hard problems, and then deduce the same for a large class of primitives that can be constructed from IO.
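
    The positive direction cited above for one-way permutations is the classical observation (a textbook fact, not specific to this paper) that inverting a permutation has both yes- and no-certificates: for a one-way permutation $f$, the language
    \[
      L = \{ (y, i) : \text{the } i\text{-th bit of } f^{-1}(y) \text{ is } 1 \} \;\in\; \mathsf{NP} \cap \mathsf{coNP},
    \]
    since the unique preimage $x = f^{-1}(y)$ certifies both membership and non-membership (verify $f(x) = y$ and inspect bit $i$), while deciding $L$ bit by bit inverts $f$, so $L$ must be hard.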