    Limits to Non-Malleability

    There have been many successes in constructing explicit non-malleable codes for various classes of tampering functions in recent years, and strong existential results are also known. In this work we ask the following question: When can we rule out the existence of a non-malleable code for a tampering class $\mathcal{F}$? First, we start with some classes where positive results are well-known, and show that when these classes are extended in a natural way, non-malleable codes are no longer possible. Specifically, we show that no non-malleable codes exist for any of the following tampering classes:
    - Functions that change d/2 symbols, where d is the distance of the code;
    - Functions where each input symbol affects only a single output symbol;
    - Functions where each of the n output bits is a function of n - log n input bits.
    Furthermore, we rule out constructions of non-malleable codes for certain classes $\mathcal{F}$ via reductions to the assumption that a distributional problem is hard for $\mathcal{F}$, whenever the reduction makes black-box use of the tampering functions in the proof. In particular, this yields concrete obstacles for the construction of efficient codes for NC, even assuming average-case variants of P ≠ NC.
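    For reference, the abstracts in this listing use the standard notion of non-malleability due to Dziembowski, Pietrzak and Wichs; the rendering below is a paraphrase of that textbook definition, not text from the abstract above. A coding scheme $(\mathsf{Enc},\mathsf{Dec})$ is non-malleable against a tampering class $\mathcal{F}$ if
    \[
    \forall f \in \mathcal{F}\ \exists\, D_f \text{ over } \{0,1\}^k \cup \{\mathsf{same}^*,\bot\}\ \text{ s.t. } \forall m:\quad
    \mathsf{Dec}\bigl(f(\mathsf{Enc}(m))\bigr) \;\approx\; \bigl\{\, d \leftarrow D_f:\ \text{output } m \text{ if } d=\mathsf{same}^*,\ \text{else output } d \,\bigr\}.
    \]
    In words, tampering with a codeword either preserves the encoded message or yields something whose decoding is independent of the message.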

    Non-Malleable Codes for Small-Depth Circuits

    We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by small-depth circuits. For constant-depth circuits of polynomial size (i.e. $\mathsf{AC}^0$ tampering functions), our codes have codeword length $n = k^{1+o(1)}$ for a $k$-bit message. This is an exponential improvement of the previous best construction due to Chattopadhyay and Li (STOC 2017), which had codeword length $2^{O(\sqrt{k})}$. Our construction remains efficient for circuit depths as large as $\Theta(\log(n)/\log\log(n))$ (indeed, our codeword length remains $n \leq k^{1+\epsilon}$), and extending our result beyond this would require separating $\mathsf{P}$ from $\mathsf{NC}^1$. We obtain our codes via a new efficient non-malleable reduction from small-depth tampering to split-state tampering. A novel aspect of our work is the incorporation of techniques from unconditional derandomization into the framework of non-malleable reductions. In particular, a key ingredient in our analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC 2013), a derandomization of the influential switching lemma from circuit complexity; the randomness-efficiency of this switching lemma translates into the rate-efficiency of our codes via our non-malleable reduction. Comment: 26 pages, 4 figures.
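    The "non-malleable reduction" mentioned in the abstract is presumably the Aggarwal-Dodis-Kazana-Obremski notion; the sketch below paraphrases that notion and is not quoted from the paper. A class $\mathcal{F}$ reduces to a class $\mathcal{G}$ (written $\mathcal{F} \Rightarrow \mathcal{G}$) if there are maps $(E,D)$ with $D(E(x)) = x$ such that
    \[
    \forall f \in \mathcal{F}\ \exists\ \text{a distribution } \mathcal{D}_f \text{ over } \mathcal{G}\ \text{ s.t. } \forall x:\quad
    D\bigl(f(E(x))\bigr) \;\approx_{\varepsilon}\; g(x), \quad g \leftarrow \mathcal{D}_f .
    \]
    Composing such a reduction with any non-malleable code for split-state tampering then yields a non-malleable code for small-depth tampering, which is how the abstract's codes are obtained.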

    Black-Box Uselessness: Composing Separations in Cryptography

    Black-box separations have been successfully used to identify the limits of a powerful set of tools in cryptography, namely those of black-box reductions. They allow proving that a large set of techniques are not capable of basing one primitive P on another primitive Q. Such separations, however, do not say anything about the power of the combination of primitives Q1, Q2 for constructing P, even if P cannot be based on Q1 or Q2 alone. By introducing and formalizing the notion of black-box uselessness, we develop a framework that allows us to make such conclusions. At an informal level, we call a primitive Q black-box useless (BBU) for P if Q cannot help constructing P in a black-box way, even in the presence of another primitive Z. This is formalized by saying that Q is BBU for P if for any auxiliary primitive Z, whenever there exists a black-box construction of P from (Q, Z), then there must already also exist a black-box construction of P from Z alone. We also formalize various other notions of black-box uselessness, and consider in particular the setting of efficient black-box constructions when the number of queries to Q is below a threshold.
    Impagliazzo and Rudich (STOC ’89) initiated the study of black-box separations by separating key agreement from one-way functions. We prove a number of initial results in this direction, which indicate that one-way functions are perhaps also black-box useless for key agreement. In particular, we show that OWFs are black-box useless in any construction of key agreement in either of the following settings: (1) the key agreement has perfect correctness and one of the parties calls the OWF a constant number of times; (2) the key agreement consists of a single round of interaction (as in Merkle-type protocols). We conjecture that OWFs are indeed black-box useless for general key agreement.
    We also show that certain techniques for proving black-box separations can be lifted to the uselessness regime. In particular, we show that the lower bounds of Canetti, Kalai, and Paneth (TCC ’15) as well as Garg, Mahmoody, and Mohammed (Crypto ’17 & TCC ’17) for assumptions behind indistinguishability obfuscation (IO) can be extended to derive black-box uselessness of a variety of primitives for obtaining (approximately correct) IO. These results follow the so-called "compiling out" technique, which we prove to imply black-box uselessness.
    Finally, we study the complementary landscape of black-box uselessness, namely black-box helpfulness. We put forth the conjecture that one-way functions are black-box helpful for building collision-resistant hash functions. We define two natural relaxations of this conjecture, and prove that both of these conjectures are implied by a natural conjecture regarding random permutations equipped with a collision-finder oracle, as defined by Simon (Eurocrypt ’98). This conjecture may also be of interest in other contexts, such as amplification of hardness.
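    Rewriting the abstract's verbal definition as a single implication (the notation is chosen here for brevity and is not necessarily the paper's):
    \[
    Q \text{ is BBU for } P \quad\Longleftrightarrow\quad \forall Z:\ \Bigl(P \text{ has a black-box construction from } (Q,Z)\Bigr) \;\Longrightarrow\; \Bigl(P \text{ has a black-box construction from } Z \text{ alone}\Bigr).
    \]
    Black-box uselessness therefore strengthens a plain black-box separation of P from Q, which only rules out constructions from Q by itself.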

    A Study of Separations in Cryptography: New Results and New Models

    For more than 20 years, black-box impossibility results have been used to argue the infeasibility of constructing certain cryptographic primitives (e.g., key agreement) from others (e.g., one-way functions). In this dissertation we further extend the frontier of this field by demonstrating several new impossibility results as well as a new framework for studying a more general class of constructions. Our first two results demonstrate the impossibility of black-box constructions of two commonly used cryptographic primitives. In our first result we study the feasibility of black-box constructions of predicate encryption schemes from standard assumptions and demonstrate strong limitations on the types of schemes that can be constructed. In our second result we study black-box constructions of constant-round zero-knowledge proofs from one-way permutations and show that, under commonly believed complexity assumptions, no such constructions exist. A widely recognized limitation of black-box impossibility results, however, is that they say nothing about the usefulness of (known) non-black-box techniques. This state of affairs is unsatisfying, as we would at least like to rule out constructions using the set of techniques we have at our disposal. With this motivation in mind, in the final result of this dissertation we propose a new framework for black-box constructions with a non-black-box flavor, specifically, those that rely on zero-knowledge proofs relative to some oracle. Our framework is powerful enough to capture a large class of known constructions; however, we show that the original black-box separation of key agreement from one-way functions still holds even in this non-black-box setting that allows for zero-knowledge proofs.

    On the Impossibility of Basing Identity Based Encryption on Trapdoor Permutations

    We ask whether an Identity Based Encryption (IBE) system can be built from simpler public-key primitives. We show that there is no black-box construction of IBE from Trapdoor Permutations (TDP) or even from Chosen Ciphertext Secure Public Key Encryption (CCA-PKE). These black-box separation results are based on an essential property of IBE, namely that an IBE system is able to compress exponentially many public keys into a short public parameters string.
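    To make the "compression" property concrete, recall the standard IBE interface (this is the textbook syntax, not text from the paper): a single short public parameter string implicitly defines a public key for each of exponentially many identities.
    \[
    (\mathsf{pp},\mathsf{msk}) \leftarrow \mathsf{Setup}(1^\lambda),\quad
    \mathsf{sk}_{id} \leftarrow \mathsf{KeyGen}(\mathsf{msk},id),\quad
    c \leftarrow \mathsf{Enc}(\mathsf{pp},id,m),\quad
    m = \mathsf{Dec}(\mathsf{sk}_{id},c),
    \]
    \[
    \text{for every } id \in \{0,1\}^n, \ \text{ while } |\mathsf{pp}| = \mathrm{poly}(\lambda) \ll 2^n .
    \]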

    On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization

    The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G:{0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP = P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B, and functions computable in time 2^{B·n} that cannot be computed by circuits of size 2^{β·n}. Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) construct "extreme high-end PRGs" with seed length r = (1+o(1))·log m, under qualitatively stronger assumptions. We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1-o(1) and B = 1+o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:
    - The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor). To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and this may explain why it is difficult to design PRG constructions.
    - The construction of Chen and Tell composes two PRGs: G₁:{0,1}^{(1+o(1))·log m} → {0,1}^{r₂ = m^{Ω(1)}} and G₂:{0,1}^{r₂} → {0,1}^m. The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)} and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to 1/2 + 1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G₂ from the extreme high-end hardness assumption.
    The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
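    The two parameter regimes contrasted above, restated side by side (the constants are those given in the abstract; the notation is chosen here for compactness):
    \[
    \textbf{High-end:}\ \exists\, 0<\beta<1<B,\ f \in \mathrm{DTIME}(2^{B\cdot n})\setminus\mathrm{SIZE}(2^{\beta\cdot n}) \ \Longrightarrow\ \text{PRG } G:\{0,1\}^{O(\log m)} \to \{0,1\}^m .
    \]
    \[
    \textbf{Extreme high-end:}\ \beta = 1-o(1),\ B = 1+o(1) \ \stackrel{?}{\Longrightarrow}\ \text{PRG } G:\{0,1\}^{(1+o(1))\cdot\log m} \to \{0,1\}^m ,
    \]
    with the question mark marking the implication whose black-box feasibility the paper studies.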

    IST Austria Thesis

    Many security definitions come in two flavors: a stronger “adaptive” flavor, where the adversary can arbitrarily make various choices during the course of the attack, and a weaker “selective” flavor where the adversary must commit to some or all of their choices a priori. For example, in the context of identity-based encryption, selective security requires the adversary to decide on the identity of the attacked party at the very beginning of the game, whereas adaptive security allows the attacker to first see the master public key and some secret keys before making this choice. Often, it appears to be much easier to achieve selective security than it is to achieve adaptive security. A series of recent works shows how to cleverly achieve adaptive security in several such scenarios, including generalized selective decryption [Pan07][FJP15], constrained PRFs [FKPR14], and Yao’s garbled circuits [JW16]. Although the above works expressed vague intuition that they share a common technique, the connection was never made precise. In this work we present a new framework (published at Crypto ’17 [JKK+17a]) that connects all of these works and allows us to present them in a unified and simplified fashion. Having the framework in place, we show how to achieve adaptive security for proxy re-encryption schemes (published at PKC ’19 [FKKP19]) and provide the first adaptive security proofs for continuous group key agreement protocols (published at S&P ’21 [KPW+21]). Questioning the optimality of our framework, we then show that currently used proof techniques cannot lead to significantly better security guarantees for "graph-building" games (published at TCC ’21 [KKPW21a]). These games cover generalized selective decryption, as well as the security of prominent constructions for constrained PRFs, continuous group key agreement, and proxy re-encryption. Finally, we revisit the adaptive security of Yao’s garbled circuits and extend the analysis of Jafargholi and Wichs in two directions: While they prove adaptive security only for a modified construction with increased online complexity, we provide the first positive results for the original construction by Yao (published at TCC ’21 [KKP21a]). On the negative side, we prove that the results of Jafargholi and Wichs are essentially optimal by showing that no black-box reduction can provide a significantly better security bound (published at Crypto ’21 [KKPW21c]).
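    As background for why adaptive security is the harder target: the generic way to lift a selective proof to the adaptive setting is to guess the adversary's eventual choice in advance, which loses a factor equal to the number of possible choices. This standard "complexity leveraging" bound (a folklore fact, not a claim of the thesis) is
    \[
    \mathsf{Adv}^{\mathrm{adaptive}}(\lambda) \;\le\; N \cdot \mathsf{Adv}^{\mathrm{selective}}(\lambda),
    \qquad N = \#\{\text{possible adaptive choices}\},
    \]
    which is vacuous whenever $N$ is super-polynomial (e.g. $N = 2^{|id|}$ identities in IBE); the frameworks discussed above aim for much smaller losses.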

    Random Oracles in a Quantum World

    The interest in post-quantum cryptography - classical systems that remain secure in the presence of a quantum adversary - has generated elegant proposals for new cryptosystems. Some of these systems are set in the random oracle model and are proven secure relative to adversaries that have classical access to the random oracle. We argue that to prove post-quantum security one needs to prove security in the quantum-accessible random oracle model, where the adversary can query the random oracle with quantum states. We begin by separating the classical and quantum-accessible random oracle models by presenting a scheme that is secure when the adversary is given classical access to the random oracle, but is insecure when the adversary can make quantum oracle queries. We then set out to develop generic conditions under which a classical random oracle proof implies security in the quantum-accessible random oracle model. We introduce the concept of a history-free reduction, a category of classical random oracle reductions that determine oracle answers independently of the history of previous queries, and we prove that such reductions imply security in the quantum model. We then show that certain post-quantum proposals, including ones based on lattices, can be proven secure using history-free reductions and are therefore post-quantum secure. We conclude with a rich set of open problems in this area. Comment: 38 pages; v2: many substantial changes and extensions, merged with a related paper by Boneh and Zhandry.
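    For concreteness, "querying the random oracle with quantum states" refers to the standard quantum-accessible oracle unitary (the usual QROM formalization, not text from the abstract):
    \[
    O_H:\ \sum_{x,y}\alpha_{x,y}\,|x\rangle|y\rangle \;\longmapsto\; \sum_{x,y}\alpha_{x,y}\,|x\rangle|y\oplus H(x)\rangle ,
    \]
    so a single query can be made in superposition over all inputs, which is what defeats many classical proof techniques that record or program individual queries.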