    On Statistically Secure Obfuscation with Approximate Correctness

    Goldwasser and Rothblum (TCC '07) prove that statistical indistinguishability obfuscation (iO) cannot exist if the obfuscator must maintain perfect correctness (under a widely believed complexity-theoretic assumption: $\mathcal{NP} \not\subseteq \mathcal{SZK} \subseteq \mathcal{AM} \cap \mathbf{co}\mathcal{AM}$). However, for many applications of iO, such as constructing public-key encryption from one-way functions (one of the main open problems in theoretical cryptography), approximate correctness is sufficient. It had been unknown thus far whether statistical approximate iO (saiO) can exist. We show that saiO does not exist, even for a minimal correctness requirement, if $\mathcal{NP} \not\subseteq \mathcal{AM} \cap \mathbf{co}\mathcal{AM}$ and if one-way functions exist. A simple complementary observation shows that if one-way functions do not exist, then average-case saiO exists. Technically, previous approaches utilized the behavior of the obfuscator on evasive functions, for which saiO always exists. We overcome this barrier by using a PRF as a baseline for the obfuscated program. We broaden our study and consider relaxed notions of security for iO. We introduce the notion of correlation obfuscation, where the obfuscations of equivalent circuits only need to be mildly correlated (rather than statistically indistinguishable). Perhaps surprisingly, we show that correlation obfuscators exist via a trivial construction for some parameter regimes, whereas our impossibility result extends to other regimes. Interestingly, within the gap between the parameter regimes that we show possible and impossible lies a small range of parameters that would still allow building public-key encryption from one-way functions and thus deserves further investigation.
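
    For reference, a minimal sketch of the two requirements discussed above; the $1/2 + \epsilon$ correctness threshold and the notation $\mathcal{O}$, $\Delta$ are illustrative rather than the paper's exact formalization.

    ```latex
    % Approximate correctness: on a random input (and over the obfuscator's coins),
    % the obfuscated circuit agrees with the original noticeably better than guessing
    % (the 1/2 + epsilon threshold is illustrative).
    \Pr_{x \leftarrow \{0,1\}^n}\!\left[ \mathcal{O}(C)(x) = C(x) \right] \;\ge\; \tfrac{1}{2} + \epsilon
    % Statistical security: for functionally equivalent circuits C_0 \equiv C_1 of equal size,
    % the distributions of their obfuscations are statistically close.
    \Delta\!\left( \mathcal{O}(C_0),\, \mathcal{O}(C_1) \right) \;\le\; \mathrm{negl}(n)
    ```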

    Order-Revealing Encryption and the Hardness of Private Learning

    An order-revealing encryption scheme gives a public procedure by which two ciphertexts can be compared to reveal the ordering of their underlying plaintexts. We show how to use order-revealing encryption to separate computationally efficient PAC learning from efficient $(\epsilon, \delta)$-differentially private PAC learning. That is, we construct a concept class that is efficiently PAC learnable, but for which every efficient learner fails to be differentially private. This answers a question of Kasiviswanathan et al. (FOCS '08, SIAM J. Comput. '11). To prove our result, we give a generic transformation from an order-revealing encryption scheme into one with strongly correct comparison, which enables the consistent comparison of ciphertexts that are not obtained as the valid encryption of any message. We believe this construction may be of independent interest. Comment: 28 pages.
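
    To make the primitive concrete, the following is a toy, insecure sketch of the order-revealing encryption interface described above (a secret-key encryption plus a public comparison procedure); the function names and the additive-shift "encryption" are illustrative assumptions, not the construction from the paper.

    ```python
    # Toy, INSECURE sketch of the order-revealing encryption (ORE) interface:
    # a secret key encrypts plaintexts, and a public compare procedure reveals
    # only the ordering of the underlying plaintexts.
    import secrets

    def keygen(bits: int = 64) -> int:
        """Sample a secret key (here: a random additive shift)."""
        return secrets.randbits(bits)

    def encrypt(key: int, plaintext: int) -> int:
        """Toy encryption that happens to preserve order (hence order-revealing)."""
        return plaintext + key

    def compare(ct1: int, ct2: int) -> int:
        """Public comparison: -1, 0, or 1 for <, =, > of the plaintexts."""
        return (ct1 > ct2) - (ct1 < ct2)

    if __name__ == "__main__":
        k = keygen()
        a, b = encrypt(k, 17), encrypt(k, 42)
        assert compare(a, b) == -1  # 17 < 42, learned without decrypting
    ```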

    Synergy Between Circuit Obfuscation and Circuit Minimization

    We study close connections between Indistinguishability Obfuscation (IO) and the Minimum Circuit Size Problem (MCSP), and argue that efficient algorithms/constructions for MCSP and IO create a synergy. Some of our main results are:
    - If there exists a perfect (imperfect) IO that is computationally secure against nonuniform polynomial-size circuits, then for all k ∈ ℕ: NP ∩ ZPP^{MCSP} ⊄ SIZE[n^k] (MA ∩ ZPP^{MCSP} ⊄ SIZE[n^k]).
    - In addition, if there exists a perfect IO that is computationally secure against nonuniform polynomial-size circuits, then NEXP ∩ ZPEXP^{MCSP} ⊄ P/poly.
    - If MCSP ∈ BPP, then statistical security and computational security for IO are equivalent.
    - If computationally-secure perfect IO exists, then MCSP ∈ BPP iff NP = ZPP.
    - If computationally-secure perfect IO exists, then ZPEXP ≠ BPP. To the best of our knowledge, this is the first derivation of strong circuit lower bounds from the existence of IO.
    The results are obtained via a construction of an optimal universal distinguisher, computable in randomized polynomial time with access to an MCSP oracle, that distinguishes any two circuit-samplable distributions with advantage equal to the statistical distance between the two distributions minus a negligible error term. This is our main technical contribution. As another immediate application, we get a simple proof of the result by Allender and Das (Inf. Comput., 2017) that SZK ⊆ BPP^{MCSP}.
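
    A compact restatement of the universal distinguisher guarantee described above; the symbols $D$, $D_1$, $D_2$ and the negligible-error form are notation chosen here for illustration.

    ```latex
    % D is a randomized polynomial-time algorithm with oracle access to MCSP;
    % D_1, D_2 are any two circuit-samplable distributions over \{0,1\}^n.
    \left| \Pr_{x \leftarrow D_1}\!\left[ D^{\mathrm{MCSP}}(x) = 1 \right]
         - \Pr_{x \leftarrow D_2}\!\left[ D^{\mathrm{MCSP}}(x) = 1 \right] \right|
    \;\ge\; \Delta(D_1, D_2) - \mathrm{negl}(n)
    ```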

    Foundations and applications of program obfuscation

    Code is said to be obfuscated if it is intentionally difficult for humans to understand. Obfuscating a program conceals its sensitive implementation details and protects it from reverse engineering and hacking. Beyond software protection, obfuscation is also a powerful cryptographic tool, enabling a variety of advanced applications. Ideally, an obfuscated program would hide any information about the original program that cannot be obtained by simply executing it. However, Barak et al. [CRYPTO 01] proved that for some programs, such ideal obfuscation is impossible. Nevertheless, Garg et al. [FOCS 13] recently suggested a candidate general-purpose obfuscator which is conjectured to satisfy a weaker notion of security called indistinguishability obfuscation. In this thesis, we study the feasibility and applicability of secure obfuscation:
    - What notions of secure obfuscation are possible and under what assumptions?
    - How useful are weak notions like indistinguishability obfuscation?
    Our first result shows that the applications of indistinguishability obfuscation go well beyond cryptography. We study the tractability of computing a Nash equilibrium of a game, a central problem in algorithmic game theory and complexity theory. Based on indistinguishability obfuscation, we construct explicit games where a Nash equilibrium cannot be found efficiently. We also prove the following results on the feasibility of obfuscation. Our starting point is the Garg et al. obfuscator, which is based on a new algebraic encoding scheme known as multilinear maps [Garg et al. EUROCRYPT 13].
    1. Building on the work of Brakerski and Rothblum [TCC 14], we provide the first rigorous security analysis for obfuscation. We give a variant of the Garg et al. obfuscator and reduce its security to that of the multilinear maps. Specifically, modeling the multilinear encodings as ideal boxes with perfect security, we prove ideal security for our obfuscator. Our reduction shows that the obfuscator resists all generic attacks that only use the encodings' permitted interface and do not exploit their algebraic representation.
    2. Going beyond generic attacks, we study the notion of virtual-gray-box obfuscation [Bitansky et al. CRYPTO 10]. This relaxation of ideal security is stronger than indistinguishability obfuscation and has several important applications, such as obfuscating password-protected programs. We formulate a security requirement for multilinear maps which is sufficient, as well as necessary, for virtual-gray-box obfuscation.
    3. Motivated by the question of basing obfuscation on ideal objects that are simpler than multilinear maps, we give a negative result showing that ideal obfuscation is impossible, even in the random oracle model, where the obfuscator is given access to an ideal random function. This is the first negative result for obfuscation in a non-trivial idealized model.
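
    For reference, the security notion named above can be stated as follows; this is the common formulation from the literature, with $\lambda$ the security parameter, and is not quoted from the thesis itself.

    ```latex
    % Indistinguishability obfuscation: for every pair of same-size circuits C_0, C_1
    % computing the same function, and every probabilistic polynomial-time distinguisher D:
    \left| \Pr\!\left[ D\!\left( i\mathcal{O}(C_0) \right) = 1 \right]
         - \Pr\!\left[ D\!\left( i\mathcal{O}(C_1) \right) = 1 \right] \right|
    \;\le\; \mathrm{negl}(\lambda)
    ```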

    Chaotic Compilation for Encrypted Computing: Obfuscation but Not in Name

    An `obfuscation' for encrypted computing is quantified exactly here, leading to an argument that security against polynomial-time attacks has been achieved for user data via the deliberately `chaotic' compilation required for security properties in that environment. Encrypted computing is the emerging science and technology of processors that take encrypted inputs to encrypted outputs via encrypted intermediate values (at nearly conventional speeds). The aim is to make user data in general-purpose computing secure against the operator and operating system as potential adversaries. A stumbling block has always been that memory addresses are data, and good encryption means the encrypted value varies randomly; that makes hitting any target in memory problematic without address decryption, yet decryption anywhere on the memory path would open up many easily exploitable vulnerabilities. This paper `solves (chaotic) compilation' for processors without address decryption, covering all of ANSI C while satisfying the required security properties and opening up the field for the standard software tool-chain and infrastructure. That produces the argument referred to above, which may also hold without encryption. Comment: 31 pages. Version update adds "Chaotic" in title and throughout paper, and recasts abstract and Intro and other sections of the text for better access by cryptologists. To the same end it introduces the polynomial-time defense argument explicitly in the final section, having now set that denouement out in the abstract and intro.
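
    As a rough illustration of how a compiler can make runtime data vary from one compilation to the next while preserving program behaviour, the sketch below plants a fresh random additive offset on each variable and folds the correction into the emitted arithmetic; this is an assumption-laden toy in Python, not the paper's compiler or its machine-level scheme.

    ```python
    # Illustrative sketch (not the paper's actual compiler): stored values are
    # displaced by per-variable random offsets chosen at compile time, and the
    # emitted code carries correction constants so results stay consistent.
    import secrets

    WORD = 2**32  # toy 32-bit machine words

    def compile_add(delta_a: int, delta_b: int, delta_x: int):
        """'Compile' x = a + b where each variable is stored offset by its delta.

        Stored values are a' = a + delta_a and b' = b + delta_b (mod 2^32); the
        emitted code must produce x' = x + delta_x, so the correction constant
        k = delta_x - delta_a - delta_b is folded into the instruction stream.
        """
        k = (delta_x - delta_a - delta_b) % WORD
        return lambda a_enc, b_enc: (a_enc + b_enc + k) % WORD

    if __name__ == "__main__":
        # Fresh random offsets on each "recompilation": the stored data looks
        # different every time, yet the offset-corrected result is unchanged.
        da, db, dx = (secrets.randbelow(WORD) for _ in range(3))
        add = compile_add(da, db, dx)
        a, b = 7, 35
        x_enc = add((a + da) % WORD, (b + db) % WORD)
        assert (x_enc - dx) % WORD == (a + b) % WORD  # recovers 42
    ```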

    Black-Box Uselessness: Composing Separations in Cryptography

    Black-box separations have been successfully used to identify the limits of a powerful set of tools in cryptography, namely those of black-box reductions. They allow proving that a large set of techniques are not capable of basing one primitive P on another primitive Q. Such separations, however, do not say anything about the power of the combination of primitives Q1, Q2 for constructing P, even if P cannot be based on Q1 or Q2 alone. By introducing and formalizing the notion of black-box uselessness, we develop a framework that allows us to make such conclusions. At an informal level, we call a primitive Q black-box useless (BBU) for P if Q cannot help constructing P in a black-box way, even in the presence of another primitive Z. This is formalized by saying that Q is BBU for P if for any auxiliary primitive Z, whenever there exists a black-box construction of P from (Q, Z), then there must already also exist a black-box construction of P from Z alone. We also formalize various other notions of black-box uselessness, and consider in particular the setting of efficient black-box constructions when the number of queries to Q is below a threshold. Impagliazzo and Rudich (STOC '89) initiated the study of black-box separations by separating key agreement from one-way functions. We prove a number of initial results in this direction, which indicate that one-way functions are perhaps also black-box useless for key agreement. In particular, we show that OWFs are black-box useless in any construction of key agreement in either of the following settings: (1) the key agreement has perfect correctness and one of the parties calls the OWF a constant number of times; (2) the key agreement consists of a single round of interaction (as in Merkle-type protocols). We conjecture that OWFs are indeed black-box useless for general key agreement. We also show that certain techniques for proving black-box separations can be lifted to the uselessness regime. In particular, we show that the lower bounds of Canetti, Kalai, and Paneth (TCC '15) as well as Garg, Mahmoody, and Mohammed (Crypto '17 & TCC '17) for assumptions behind indistinguishability obfuscation (IO) can be extended to derive black-box uselessness of a variety of primitives for obtaining (approximately correct) IO. These results follow the so-called "compiling out" technique, which we prove to imply black-box uselessness. Eventually, we study the complementary landscape of black-box uselessness, namely black-box helpfulness. We put forth the conjecture that one-way functions are black-box helpful for building collision-resistant hash functions. We define two natural relaxations of this conjecture, and prove that both of these conjectures are implied by a natural conjecture regarding random permutations equipped with a collision-finder oracle, as defined by Simon (Eurocrypt '98). This conjecture may also be of interest in other contexts, such as amplification of hardness.
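
    The formalization in the abstract can be stated compactly as follows; the arrow notation for "there is a black-box construction" is introduced here only for illustration.

    ```latex
    % Black-box uselessness (BBU): Q is BBU for P if adding Q to any auxiliary
    % primitive Z never enables a black-box construction of P that Z alone did
    % not already give.
    Q \text{ is BBU for } P
    \;\iff\;
    \forall Z:\;
    \Bigl[ (Q, Z) \xrightarrow{\,\mathrm{BB}\,} P \Bigr]
    \;\Longrightarrow\;
    \Bigl[ Z \xrightarrow{\,\mathrm{BB}\,} P \Bigr]
    ```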

    ODIN: Obfuscation-based privacy-preserving consensus algorithm for Decentralized Information fusion in smart device Networks

    The large spread of sensors and smart devices in urban infrastructures is motivating research in the area of the Internet of Things (IoT) to develop new services and improve citizens’ quality of life. Sensors and smart devices generate large amounts of measurement data from sensing the environment, which is used to enable services such as control of power consumption or traffic density. To deal with such a large amount of information and provide accurate measurements, service providers can adopt information fusion, which, given the decentralized nature of urban deployments, can be performed by means of consensus algorithms. These algorithms allow distributed agents to (iteratively) compute linear functions on the exchanged data and take decisions based on the outcome, without the need for the support of a central entity. However, the use of consensus algorithms raises several security concerns, especially when private or security-critical information is involved in the computation. In this article we propose ODIN, a novel algorithm allowing information fusion over encrypted data. ODIN is a privacy-preserving extension of the popular consensus gossip algorithm, which prevents distributed agents from having direct access to the data while they iteratively reach consensus; agents cannot access even the final consensus value but can only retrieve partial information (e.g., a binary decision). ODIN uses efficient additive obfuscation and proxy re-encryption during the update steps and garbled circuits to make final decisions on the obfuscated consensus. We discuss the security of our proposal and show its practicability and efficiency on real-world resource-constrained devices, developing a prototype implementation for Raspberry Pi devices.
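
    To make the consensus primitive concrete, the sketch below shows plain randomized gossip averaging, the kind of iterative linear update the abstract refers to; ODIN's actual protocol additionally protects the exchanged values (additive obfuscation, proxy re-encryption, garbled circuits), which this toy code does not attempt.

    ```python
    # Minimal sketch of randomized gossip averaging: at each step a random pair
    # of agents averages its values, so every agent converges to the global mean
    # without a central coordinator. ODIN's contribution (not shown) is to run
    # such updates over obfuscated values so agents never see the raw data or
    # even the final consensus value in the clear.
    import random

    def gossip_average(values, rounds=2000, seed=0):
        rng = random.Random(seed)
        v = list(values)
        for _ in range(rounds):
            i, j = rng.sample(range(len(v)), 2)  # random pair of neighbours
            v[i] = v[j] = (v[i] + v[j]) / 2      # pairwise averaging step
        return v

    if __name__ == "__main__":
        readings = [21.0, 23.5, 19.8, 22.7]      # e.g., smart-device measurements
        print(gossip_average(readings))          # each entry approaches ~21.75
    ```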
