
    Lower Bounds on Assumptions behind Indistinguishability Obfuscation

    Since the seminal work of Garg et al. (FOCS'13), in which they proposed the first candidate construction for indistinguishability obfuscation (iO for short), iO has become a central cryptographic primitive with numerous applications. The security of the proposed construction of Garg et al. and its variants is proved based on multilinear maps (Garg et al., Eurocrypt'13) and their idealized model called the graded encoding model (Brakerski and Rothblum, TCC'14, and Barak et al., Eurocrypt'14). Whether or not iO can be based on standard and well-studied hardness assumptions has remained an elusive open question. In this work we prove lower bounds on the assumptions that imply iO in a black-box way, based on computational assumptions. Note that any lower bound for iO needs to somehow rely on computational assumptions, because if P = NP then statistically secure iO does exist. Our results are twofold: 1. There is no fully black-box construction of iO from (exponentially secure) collision-resistant hash functions unless the polynomial hierarchy collapses. Our lower bound extends to separate iO from any primitive implied by a random oracle in a black-box way. 2. Let P be any primitive that exists relative to random trapdoor permutations, the generic group model for any finite abelian group, or the degree-$O(1)$ graded encoding model for any finite ring. We show that achieving a black-box construction of iO from P is as hard as basing public-key cryptography on one-way functions. In particular, for any such primitive P we present a constructive procedure that takes any black-box construction of iO from P and turns it into a construction of semantically secure public-key encryption from any one-way function. Our separations hold even if the construction of iO from P is semi-black-box (Reingold, Trevisan, and Vadhan, TCC'04) and the security reduction could access the adversary in a non-black-box way.
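    For context, the standard indistinguishability obfuscation guarantee referenced throughout this abstract can be stated as follows (generic notation, not the paper's own): for any two equal-size circuits $C_0, C_1$ computing the same function,
    \[ \{\mathcal{O}(C_0)\} \approx_c \{\mathcal{O}(C_1)\}, \]
    i.e., their obfuscations are computationally indistinguishable, while $\mathcal{O}(C)$ must preserve the functionality of $C$.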

    Towards Separating Computational and Statistical Differential Privacy

    Computational differential privacy (CDP) is a natural relaxation of the standard notion of (statistical) differential privacy (SDP) proposed by Beimel, Nissim, and Omri (CRYPTO 2008) and Mironov, Pandey, Reingold, and Vadhan (CRYPTO 2009). In contrast to SDP, CDP only requires privacy guarantees to hold against computationally bounded adversaries rather than computationally unbounded statistical adversaries. Despite the question being raised explicitly in several works (e.g., Bun, Chen, and Vadhan, TCC 2016), it has remained tantalizingly open whether there is any task achievable with the CDP notion but not the SDP notion. Even a candidate for such a task is unknown. Indeed, it is even unclear what the truth could be! In this work, we give the first construction of a task achievable with the CDP notion but not the SDP notion, under the following strong but plausible cryptographic assumptions: (1) Non-Interactive Witness-Indistinguishable Proofs, (2) Laconic Collision-Resistant Keyless Hash Functions, (3) Differing-Inputs Obfuscation for Public-Coin Samplers. In particular, we construct a task for which there exists an $\varepsilon$-CDP mechanism with $\varepsilon = O(1)$ achieving $1-o(1)$ utility, but any $(\varepsilon, \delta)$-SDP mechanism, including computationally unbounded ones, that achieves a constant utility must use either a super-constant $\varepsilon$ or an inverse-polynomially large $\delta$. To prove this, we introduce a new approach for showing that a mechanism satisfies CDP: first we show that a mechanism is "private" against a certain class of decision tree adversaries, and then we use cryptographic constructions to "lift" this into privacy against computationally bounded adversaries. We believe this approach could be useful for devising further tasks separating CDP from SDP. Comment: To appear at Foundations of Computer Science (FOCS) 2023. Changes compared to the previous version: (1) the lower bound for SDP is now stronger in that it holds also for a certain inverse-polynomially large delta, as opposed to only non-negligible delta, and (2) the presentation is cleaned up.
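    For reference, the statistical notion above is the standard $(\varepsilon,\delta)$-differential privacy requirement (generic notation, not taken from the paper): for all neighboring datasets $D \sim D'$ and all output sets $S$,
    \[ \Pr[M(D) \in S] \ \le\ e^{\varepsilon} \Pr[M(D') \in S] + \delta. \]
    One common formalization of the computational relaxation (IND-CDP) instead quantifies only over polynomial-time distinguishers $A$, requiring $\Pr[A(M(D))=1] \le e^{\varepsilon}\Pr[A(M(D'))=1] + \mathrm{negl}(\lambda)$.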

    Privacy Games: Optimal User-Centric Data Obfuscation

    In this paper, we design user-centric obfuscation mechanisms that impose the minimum utility loss for guaranteeing the user's privacy. We optimize utility subject to a joint guarantee of differential privacy (indistinguishability) and distortion privacy (inference error). This double shield of protection limits the information leakage through the obfuscation mechanism as well as the posterior inference. We show that the privacy achieved through joint differential-distortion mechanisms against optimal attacks is as large as the maximum privacy that can be achieved by either of these mechanisms separately. Their utility cost is also not larger than what either the differential or the distortion mechanism imposes. We model the optimization problem as a leader-follower game between the designer of the obfuscation mechanism and the potential adversary, and design adaptive mechanisms that anticipate and protect against optimal inference algorithms. Thus, the obfuscation mechanism is optimal against any inference algorithm.
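    Schematically, and with hypothetical notation rather than the paper's own, the designer's problem can be viewed as an optimization over the obfuscation probabilities $p(o \mid s)$ of reporting $o$ when the secret is $s$, with prior $\pi$:
    \[ \min_{p} \ \sum_{s,o} \pi(s)\, p(o \mid s)\, d_{\mathrm{util}}(s,o) \quad \text{s.t.} \quad p(o \mid s) \le e^{\varepsilon}\, p(o \mid s') \ \ \forall o,\ s \sim s', \qquad \sum_{o,\hat{s}} p(o \mid s)\, q^*(\hat{s} \mid o)\, d_{\mathrm{priv}}(\hat{s}, s) \ \ge\ D \ \ \forall s, \]
    where $q^*$ is the adversary's optimal inference given $p$ and $\pi$ (the follower's best response in the leader-follower game); the first constraint is the differential-privacy (indistinguishability) shield and the second the distortion-privacy (inference-error) shield.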

    Lower Bounds on Obfuscation from All-or-Nothing Encryption Primitives

    Indistinguishability obfuscation (IO) enables many heretofore out-of-reach applications in cryptography. However, currently all known constructions of IO are based on multilinear maps, which are poorly understood. Hence, tremendous research effort has been put towards basing obfuscation on better-understood computational assumptions. Recently, another path to IO has emerged through functional encryption [Ananth and Jain, CRYPTO 2015; Bitansky and Vaikuntanathan, FOCS 2015], but such FE schemes currently are still based on multilinear maps. In this work, we study whether IO could be based on other powerful encryption primitives. Separations for IO: We show that (assuming that the polynomial hierarchy does not collapse and one-way functions exist) IO cannot be constructed in a black-box manner from powerful all-or-nothing encryption primitives, such as witness encryption (WE), predicate encryption, and fully homomorphic encryption. What unifies these primitives is that they are of the "all-or-nothing" form, meaning either someone has the "right key", in which case they can decrypt the message fully, or they are not supposed to learn anything. Stronger Model for Separations: One might argue that fully black-box uses of the considered encryption primitives limit their power too much, because these primitives can easily lead to non-black-box constructions if the primitive is used in a self-feeding fashion; namely, code of the subroutines of the considered primitive could easily be fed as input to the subroutines of the primitive itself. In fact, several important results (e.g., the construction of IO from functional encryption) follow this very recipe. In light of this, we prove our impossibility results with respect to a stronger model than the fully black-box framework of Impagliazzo and Rudich (STOC'89) and Reingold, Trevisan, and Vadhan (TCC'04), where the non-black-box technique of self-feeding is actually allowed.
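    For context, witness encryption, one of the all-or-nothing primitives listed above, can be summarized as follows (standard formulation, generic notation): for an NP language $L$ with relation $R$,
    \[ R(x,w)=1 \ \Rightarrow\ \mathsf{Dec}(\mathsf{Enc}(x,m), w) = m, \qquad x \notin L \ \Rightarrow\ \mathsf{Enc}(x, m_0) \approx_c \mathsf{Enc}(x, m_1), \]
    so a decryptor either holds a valid witness and recovers the message in full, or, when no witness exists, learns nothing about it.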

    Indistinguishability Obfuscation: From Approximate to Exact

    We show general transformations from subexponentially-secure approximate indistinguishability obfuscation (IO), where the obfuscated circuit agrees with the original circuit on a $1/2+\epsilon$ fraction of inputs under a certain samplable distribution, into exact indistinguishability obfuscation, where the obfuscated circuit and the original circuit agree on all inputs. As a step towards our results, which is of independent interest, we also obtain an approximate-to-exact transformation for functional encryption. At the core of our techniques is a method for "fooling" the obfuscator into giving us the correct answer, while preserving the indistinguishability-based security. This is achieved based on various types of secure computation protocols that can be obtained from different standard assumptions. Put together with the recent results of Canetti, Kalai and Paneth (TCC 2015), Pass and Shelat (TCC 2016), and Mahmoody, Mohammed and Nematihaji (TCC 2016), we show how to convert indistinguishability obfuscation schemes in various ideal models into exact obfuscation schemes in the plain model. National Science Foundation (U.S.) (Grant CNS-1350619); National Science Foundation (U.S.) (Grant CNS-1414119).
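    The approximate-correctness notion referenced above can be stated roughly as follows (generic notation, not the paper's own): for an obfuscator $\mathcal{O}$ and samplable input distribution $D$,
    \[ \Pr_{x \leftarrow D}\big[\, \widetilde{C}(x) = C(x) \,\big] \ \ge\ \tfrac{1}{2} + \epsilon, \quad \text{where } \widetilde{C} \leftarrow \mathcal{O}(C), \]
    whereas exact IO requires $\widetilde{C}(x) = C(x)$ for every input $x$.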

    Foundations and applications of program obfuscation

    Code is said to be obfuscated if it is intentionally difficult for humans to understand. Obfuscating a program conceals its sensitive implementation details and protects it from reverse engineering and hacking. Beyond software protection, obfuscation is also a powerful cryptographic tool, enabling a variety of advanced applications. Ideally, an obfuscated program would hide any information about the original program that cannot be obtained by simply executing it. However, Barak et al. [CRYPTO 01] proved that for some programs, such ideal obfuscation is impossible. Nevertheless, Garg et al. [FOCS 13] recently suggested a candidate general-purpose obfuscator which is conjectured to satisfy a weaker notion of security called indistinguishability obfuscation. In this thesis, we study the feasibility and applicability of secure obfuscation: (1) What notions of secure obfuscation are possible and under what assumptions? (2) How useful are weak notions like indistinguishability obfuscation? Our first result shows that the applications of indistinguishability obfuscation go well beyond cryptography. We study the tractability of computing a Nash equilibrium of a game, a central problem in algorithmic game theory and complexity theory. Based on indistinguishability obfuscation, we construct explicit games where a Nash equilibrium cannot be found efficiently. We also prove the following results on the feasibility of obfuscation. Our starting point is the Garg et al. obfuscator, which is based on a new algebraic encoding scheme known as multilinear maps [Garg et al. EUROCRYPT 13]. 1. Building on the work of Brakerski and Rothblum [TCC 14], we provide the first rigorous security analysis for obfuscation. We give a variant of the Garg et al. obfuscator and reduce its security to that of the multilinear maps. Specifically, modeling the multilinear encodings as ideal boxes with perfect security, we prove ideal security for our obfuscator. Our reduction shows that the obfuscator resists all generic attacks that only use the encodings' permitted interface and do not exploit their algebraic representation. 2. Going beyond generic attacks, we study the notion of virtual-gray-box obfuscation [Bitansky et al. CRYPTO 10]. This relaxation of ideal security is stronger than indistinguishability obfuscation and has several important applications, such as obfuscating password-protected programs. We formulate a security requirement for multilinear maps which is sufficient, as well as necessary, for virtual-gray-box obfuscation. 3. Motivated by the question of basing obfuscation on ideal objects that are simpler than multilinear maps, we give a negative result showing that ideal obfuscation is impossible, even in the random oracle model, where the obfuscator is given access to an ideal random function. This is the first negative result for obfuscation in a non-trivial idealized model.

    Black-Box Uselessness: Composing Separations in Cryptography

    Black-box separations have been successfully used to identify the limits of a powerful set of tools in cryptography, namely those of black-box reductions. They allow proving that a large set of techniques are not capable of basing one primitive $P$ on another primitive $Q$. Such separations, however, do not say anything about the power of the combination of primitives $Q_1, Q_2$ for constructing $P$, even if $P$ cannot be based on $Q_1$ or $Q_2$ alone. By introducing and formalizing the notion of black-box uselessness, we develop a framework that allows us to make such conclusions. At an informal level, we call a primitive $Q$ black-box useless (BBU) for $P$ if $Q$ cannot help constructing $P$ in a black-box way, even in the presence of another primitive $Z$. This is formalized by saying that $Q$ is BBU for $P$ if, for any auxiliary primitive $Z$, whenever there exists a black-box construction of $P$ from $(Q, Z)$, then there must already also exist a black-box construction of $P$ from $Z$ alone. We also formalize various other notions of black-box uselessness, and consider in particular the setting of efficient black-box constructions where the number of queries to $Q$ is below a threshold. Impagliazzo and Rudich (STOC'89) initiated the study of black-box separations by separating key agreement from one-way functions. We prove a number of initial results in this direction, which indicate that one-way functions are perhaps also black-box useless for key agreement. In particular, we show that OWFs are black-box useless in any construction of key agreement in either of the following settings: (1) the key agreement has perfect correctness and one of the parties calls the OWF a constant number of times; (2) the key agreement consists of a single round of interaction (as in Merkle-type protocols). We conjecture that OWFs are indeed black-box useless for general key agreement. We also show that certain techniques for proving black-box separations can be lifted to the uselessness regime. In particular, we show that the lower bounds of Canetti, Kalai, and Paneth (TCC'15) as well as Garg, Mahmoody, and Mohammed (Crypto'17 & TCC'17) for assumptions behind indistinguishability obfuscation (IO) can be extended to derive black-box uselessness of a variety of primitives for obtaining (approximately correct) IO. These results follow the so-called "compiling out" technique, which we prove to imply black-box uselessness. Eventually, we study the complementary landscape of black-box uselessness, namely black-box helpfulness. We put forth the conjecture that one-way functions are black-box helpful for building collision-resistant hash functions. We define two natural relaxations of this conjecture, and prove that both of these conjectures are implied by a natural conjecture regarding random permutations equipped with a collision-finder oracle, as defined by Simon (Eurocrypt'98). This conjecture may also be of interest in other contexts, such as amplification of hardness.
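    The formalization given in this abstract can be written compactly as (generic notation):
    \[ Q \text{ is BBU for } P \ \iff\ \forall Z:\ \big(\exists\ \text{black-box construction of } P \text{ from } (Q,Z)\big) \ \Rightarrow\ \big(\exists\ \text{black-box construction of } P \text{ from } Z \text{ alone}\big). \]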

    Indistinguishability Obfuscation from Well-Founded Assumptions

    In this work, we show how to construct indistinguishability obfuscation from subexponential hardness of four well-founded assumptions. We prove: Let $\tau \in (0,\infty)$, $\delta \in (0,1)$, $\epsilon \in (0,1)$ be arbitrary constants. Assume sub-exponential security of the following assumptions, where $\lambda$ is a security parameter, and the parameters $\ell, k, n$ below are large enough polynomials in $\lambda$: - The SXDH assumption on asymmetric bilinear groups of a prime order $p = O(2^\lambda)$, - The LWE assumption over $\mathbb{Z}_{p}$ with subexponential modulus-to-noise ratio $2^{k^\epsilon}$, where $k$ is the dimension of the LWE secret, - The LPN assumption over $\mathbb{Z}_p$ with polynomially many LPN samples and error rate $1/\ell^\delta$, where $\ell$ is the dimension of the LPN secret, - The existence of a Boolean PRG in $\mathsf{NC}^0$ with stretch $n^{1+\tau}$. Then, (subexponentially secure) indistinguishability obfuscation for all polynomial-size circuits exists.
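    For reference, the last assumption asks for a pseudorandom generator of constant locality (standard formulation, generic notation):
    \[ G: \{0,1\}^n \to \{0,1\}^{n^{1+\tau}}, \qquad G(U_n) \approx_c U_{n^{1+\tau}}, \]
    where each output bit of $G$ depends on only $O(1)$ input bits, which is what membership in $\mathsf{NC}^0$ amounts to.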

    When does Functional Encryption Imply Obfuscation?

    Realizing indistinguishability obfuscation (IO) based on well-understood computational assumptions is an important open problem. Recently, realizing functional encryption (FE) has emerged as a promising direction towards that goal. This is because: (1) compact single-key FE (where the functional secret key is of length double the ciphertext length) is known to imply IO [Ananth and Jain, CRYPTO 2015; Bitansky and Vaikuntanathan, FOCS 2015], and (2) several strong variants of single-key FE are known based on various standard computational assumptions. In this work, we study when FE can be used for obtaining IO. We show that any single-key FE for function families with "short" enough outputs (specifically, the output is less than the ciphertext length by a value of at least $\omega(n + k)$, where $n$ is the message length and $k$ is the security parameter) is insufficient for IO, even when non-black-box use of the underlying FE is allowed to some degree. Namely, our impossibility result holds even if we are allowed to plant FE sub-routines as gates inside the circuits for which functional secret keys are issued (which is exactly how the known FE-to-IO constructions work). Complementing our negative result, we show that our condition of "short" enough is almost tight. More specifically, we show that any compact single-key FE with functional secret-key output length strictly larger than the ciphertext length is sufficient for IO. Furthermore, we show that non-black-box use of the underlying FE is necessary for such a construction, by ruling out any fully black-box construction of IO from FE even with arbitrarily long output.
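    In terms of parameters, the dividing line described in this abstract is roughly (using the abstract's $n$ and $k$):
    \[ \text{output length} \ \le\ \text{ciphertext length} - \omega(n+k) \ \Rightarrow\ \text{insufficient for IO (even with the allowed non-black-box use)}, \]
    \[ \text{output length} \ >\ \text{ciphertext length} \ \Rightarrow\ \text{sufficient for IO.} \]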