
    Incremental Program Obfuscation

    Recent advances in program obfuscation suggest that it is possible to create software that can provably safeguard secret information. However, software systems usually contain large executable code that is updated multiple times, sometimes very frequently. Freshly obfuscating the program after every small update would lead to a considerable efficiency loss. Thus, an extremely desirable property for obfuscation algorithms is incrementality: small changes to the underlying program translate into small changes to the corresponding obfuscated program. We initiate a thorough investigation of incremental program obfuscation. We show that the strong simulation-based notions of program obfuscation, such as "virtual black-box" and "virtual grey-box" obfuscation, cannot be incremental (according to our efficiency requirements) even for very simple functions such as point functions. We then turn to the indistinguishability-based notions and present two security definitions of varying strength, namely a weak one and a strong one. To understand the overall strength of our definitions, we formulate the notion of incremental best-possible obfuscation and show that it is equivalent to our strong indistinguishability-based notion. Finally, we present constructions for incremental program obfuscation satisfying both our security notions. We first give a construction achieving the weaker security notion based on the existence of general-purpose indistinguishability obfuscation. Next, we present a generic transformation using oblivious RAM to amplify security from weaker to stronger, while maintaining the incrementality property.
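
    The incrementality requirement above is easiest to see as an API contract: a full obfuscation pass touches the whole program, while an update should touch only the edited part. Below is a toy sketch of that contract in Python; the per-block keyed mask stands in for real obfuscation and provides no security whatsoever, and all names (mask, obfuscate, update) are ours, not the paper's.

```python
# Toy sketch of the incrementality property (hypothetical API, NOT the
# paper's construction): "obfuscation" here is a per-block keyed mask,
# which provides no real security -- it only illustrates that a small
# edit to the program should yield a small edit to the obfuscated code.
import hashlib
import secrets

def mask(key: bytes, index: int, block: bytes) -> bytes:
    # Blocks of up to 32 bytes for simplicity (one SHA-256 pad each).
    pad = hashlib.sha256(key + index.to_bytes(8, "big")).digest()[: len(block)]
    return bytes(a ^ b for a, b in zip(block, pad))

def obfuscate(key: bytes, program: list[bytes]) -> list[bytes]:
    # Fresh obfuscation touches every block: cost O(|program|).
    return [mask(key, i, blk) for i, blk in enumerate(program)]

def update(key: bytes, obf: list[bytes], i: int, new_block: bytes) -> None:
    # Incremental update touches a single block: cost O(1).
    obf[i] = mask(key, i, new_block)

key = secrets.token_bytes(32)
prog = [b"block-0", b"block-1", b"block-2"]
obf = obfuscate(key, prog)
update(key, obf, 1, b"patched")  # small change -> small change
```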

    On the Orthogonal Vector Problem and the Feasibility of Unconditionally Secure Leakage-Resilient Computation

    We consider unconditionally secure leakage-resilient two-party computation, where security means that the leakage obtained by an adversary can be simulated using a similar amount of leakage from the private inputs or outputs. A related problem is known as circuit compilation, where there is only one device doing a computation on public input and output. Here the goal is to ensure that the adversary learns only the input/output behaviour of the computation, even given leakage from the internal state of the device. We study these problems in an enhanced version of the "only computation leaks" model, where the adversary is additionally allowed a bounded amount of global leakage from the state of the entity under attack. In this model, we show the first unconditionally secure leakage-resilient two-party computation protocol. The protocol assumes access to correlated randomness in the form of a functionality $\mathcal{F}_{ORT}$ that outputs pairs of orthogonal vectors $(\vec{u}, \vec{v})$ over some finite field, where the adversary can leak independently from $\vec{u}$ and from $\vec{v}$. We also construct a general circuit compiler secure in the same leakage model. Our constructions work even if the adversary is allowed to corrupt a constant fraction of the calls to $\mathcal{F}_{ORT}$ and decide which vectors should be output. On the negative side, we show that unconditionally secure two-party computation and circuit compilation are in general impossible in the plain version of our model. For circuit compilation, we need a computational assumption to exhibit a function that cannot be securely computed; on the other hand, the impossibility holds even if global leakage is not allowed. It follows that even a somewhat unreliable version of $\mathcal{F}_{ORT}$ cannot be implemented with unconditional security in the plain leakage model using classical communication. However, we show that an implementation using quantum communication does exist. In particular, we propose a simple "prepare-and-measure" type protocol which we show secure using a new result on sampling from a quantum population. Although the protocol may produce a small number of incorrect pairs, this is sufficient for leakage-resilient computation by our other results.
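
    The correlated randomness itself is simple to generate for a trusted party. The sketch below samples a pair of orthogonal vectors over a prime field GF(p); the exact distribution used by the paper's functionality is not specified here, so this is only one plausible sampler.

```python
# A minimal sketch of what the ideal functionality F_ORT hands out:
# a pair of vectors over GF(p) with inner product zero. The precise
# distribution in the paper is an assumption here; this only shows
# that such pairs are easy for a trusted party to generate.
import secrets

def sample_orthogonal_pair(n: int, p: int) -> tuple[list[int], list[int]]:
    while True:
        u = [secrets.randbelow(p) for _ in range(n)]
        if any(u):  # need a nonzero coordinate to solve for
            break
    j = next(i for i, x in enumerate(u) if x != 0)
    v = [secrets.randbelow(p) for _ in range(n)]
    # Choose v[j] so that <u, v> = 0 (mod p).
    partial = sum(u[i] * v[i] for i in range(n) if i != j) % p
    v[j] = (-partial * pow(u[j], -1, p)) % p
    return u, v

P = 2**31 - 1  # a Mersenne prime
u, v = sample_orthogonal_pair(8, P)
assert sum(a * b for a, b in zip(u, v)) % P == 0
```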

    On the Complexity of Compressing Obfuscation

    Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct. In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple-to-describe primitives known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is, roughly, an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters. We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation. First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives. Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions. Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
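
    To make the compression requirement concrete: every function on n-bit inputs has a trivial "obfuscation", its full truth table, which reveals nothing beyond the functionality but has output size 2^n. A compressing obfuscator must produce a slightly shorter encoding. The toy code below shows only this trivial baseline, not any construction from the paper.

```python
# The trivial "obfuscator": output the full truth table of a function
# on n-bit inputs. Equal functions yield identical outputs, so this is
# a perfect indistinguishability obfuscator -- but its output has size
# 2^n, i.e., it does not compress at all. A compressing obfuscator must
# beat this bound, if only slightly. (Toy baseline, not the paper's.)
from typing import Callable

def truth_table_obfuscate(f: Callable[[int], int], n: int) -> list[int]:
    # Canonical encoding: evaluate f on every one of the 2^n inputs.
    return [f(x) for x in range(2 ** n)]

def evaluate(table: list[int], x: int) -> int:
    return table[x]

table = truth_table_obfuscate(lambda x: int(x % 3 == 0), 4)  # 16 entries
assert evaluate(table, 9) == 1
```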

    Weakly Extractable One-Way Functions

    A family of one-way functions is extractable if, given a random function in the family, an efficient adversary can only output an element in the image of the function if it knows a corresponding preimage. This knowledge-extraction guarantee is particularly powerful since it does not require interaction. However, extractable one-way functions (EFs) are subject to a strong barrier: assuming indistinguishability obfuscation, no EF can have a knowledge extractor that works against all polynomial-size non-uniform adversaries. This holds even for non-black-box extractors that use the adversary’s code. Accordingly, the literature considers either EFs based on non-falsifiable knowledge assumptions, where the extractor is not explicitly given but only assumed to exist, or EFs against a restricted class of adversaries with bounded non-uniform advice. This falls short of cryptography’s gold standard of security, which requires an explicit reduction against non-uniform adversaries of arbitrary polynomial size. Motivated by this gap, we put forward a new notion of weakly extractable one-way functions (WEFs) that circumvents the known barrier. We then prove that WEFs are inextricably connected to the long-standing question of three-message zero-knowledge protocols. We show that different flavors of WEFs are sufficient and necessary for three-message zero knowledge to exist. The exact flavor depends on whether the protocol is computational or statistical zero knowledge and whether it is publicly or privately verifiable. Combined with recent progress on constructing three-message zero knowledge, we derive a new connection between keyless multi-collision resistance, the notion of incompressibility, and the feasibility of non-interactive knowledge extraction. Another interesting corollary of our result is that in order to construct three-message zero-knowledge arguments, it suffices to construct such arguments where the honest prover strategy is unbounded.
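
    Schematically, the extraction guarantee discussed above can be written as follows. This is a sketch only: conventions for auxiliary input and the distribution over the family vary across formulations, and the extractor is non-black-box (it receives the adversary's code).

```latex
% Schematic knowledge-extraction guarantee for a family \{f_k\}.
% Conventions (auxiliary input, distribution of k) vary by formulation.
\forall \text{ PPT } \mathcal{A} \;\; \exists \text{ PPT } \mathcal{E} :
\quad
\Pr_{k}\Big[\, y \leftarrow \mathcal{A}(k), \;
  y \in \mathrm{Img}(f_k) \;\wedge\;
  f_k\big(\mathcal{E}(k, \langle \mathcal{A} \rangle)\big) \neq y \,\Big]
\le \mathrm{negl}(\lambda)
```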

    Fully Deniable Interactive Encryption

    Deniable encryption (Canetti et al., Crypto 1996) enhances secret communication over public channels, providing the additional guarantee that the secrecy of communication is protected even if the parties are later coerced (or willingly bribed) to expose their entire internal states: plaintexts, keys, and randomness. To date, constructions of deniable encryption, and more generally of interactive deniable communication, only address restricted cases where only one party is compromised (Sahai and Waters, STOC 2014). The main question, whether deniable communication is at all possible if both parties are coerced at once, has remained open. We resolve this question in the affirmative, presenting a communication protocol that is fully deniable under coercion of both parties. Our scheme has three rounds, assumes subexponentially secure indistinguishability obfuscation and one-way functions, and uses a short global reference string that is generated once at system setup and suffices for an unbounded number of encryptions and decryptions. Of independent interest, we introduce a new notion called off-the-record deniability, which protects parties even when their claimed internal states are inconsistent (a case not covered by prior definitions). Our scheme satisfies both standard deniability and off-the-record deniability.
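
    For orientation, here is a hypothetical interface for such a fully deniable scheme, abstracted from the properties stated above; the names and signatures are ours and do not come from the paper.

```python
# Hypothetical interface for fully (bi-)deniable communication, sketched
# from the abstract above; names and signatures are assumptions, not the
# paper's API. Each party can later produce fake randomness that makes
# the observed transcript look consistent with any claimed plaintext.
from abc import ABC, abstractmethod

class DeniableScheme(ABC):
    @abstractmethod
    def run_protocol(self, crs: bytes, msg: bytes,
                     rand_s: bytes, rand_r: bytes) -> list[bytes]:
        """Run the 3-round protocol; return the public transcript."""

    @abstractmethod
    def fake_sender_coins(self, crs: bytes, transcript: list[bytes],
                          real_rand: bytes, claimed_msg: bytes) -> bytes:
        """Fake sender randomness consistent with claimed_msg."""

    @abstractmethod
    def fake_receiver_coins(self, crs: bytes, transcript: list[bytes],
                            real_rand: bytes, claimed_msg: bytes) -> bytes:
        """Fake receiver randomness consistent with claimed_msg."""
```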

    Virtual Grey-Boxes Beyond Obfuscation: A Statistical Security Notion for Cryptographic Agents

    We extend the simulation-based definition of Virtual Grey Box (VGB) security, originally proposed for obfuscation (Bitansky and Canetti, 2010), to a broad class of cryptographic primitives. These include functional encryption, graded encoding schemes, bilinear maps (with uber assumptions), as well as unexplored ones like homomorphic functional encryption. Our main result is a characterization of VGB security, in all these cases, in terms of an indistinguishability-preserving notion of security, called $\Gamma^*$-s-IND-PRE security, formulated using an extension of the recently proposed Cryptographic Agents framework (Agrawal et al., 2015). We further show that this definition is equivalent to an indistinguishability-based security definition that is restricted to 'concentrated' distributions (wherein the outcome of any computation on encrypted data is essentially known ahead of the computation). A result of Bitansky et al. (2014), who showed that VGB obfuscation is equivalent to strong indistinguishability obfuscation (SIO), is obtained by specializing our result to obfuscation. Our proof, while sharing various elements with the proof of Bitansky et al., is simpler and significantly more general, as it uses $\Gamma^*$-s-IND-PRE security as an intermediate notion. Our characterization also shows that semantic security for graded encoding schemes (Pass et al., 2014) is in fact an instance of this same definition. We also present a composition theorem for s-IND-PRE security with respect to a general test family. We can then recover the result of Bitansky et al. (2014) regarding the existence of VGB obfuscation for all $NC^1$ circuits, simply by instantiating this composition theorem with a reduction from obfuscation of $NC^1$ circuits to graded encoding schemes (Barak et al., 2014) and the assumption that there exists a $\Gamma^*$-s-IND-PRE secure scheme for graded encodings (Pass et al., 2014).

    Leakage Resilient Secret Sharing and Applications

    A secret sharing scheme allows a dealer to share a secret among a set of $n$ parties such that any authorized subset of the parties can recover the secret, while any unauthorized subset learns no information about the secret. A local leakage-resilient secret sharing scheme (introduced in independent works by (Goyal and Kumar, STOC 18) and (Benhamouda, Degwekar, Ishai and Rabin, Crypto 18)) additionally requires secrecy to hold against every unauthorized set of parties even if they obtain some bounded local leakage from every other share. The leakage is said to be local if it is computed independently for each share. So far, the only known constructions of local leakage-resilient secret sharing schemes are for threshold access structures with very low ($O(1)$) or very high ($n - o(\log n)$) thresholds. In this work, we give a compiler that takes a secret sharing scheme for any monotone access structure and produces a local leakage-resilient secret sharing scheme for the same access structure, with only a constant-factor blow-up in the sizes of the shares. Furthermore, the resulting secret sharing scheme has optimal leakage-resilience rate, i.e., the ratio between the leakage tolerated and the size of each share can be made arbitrarily close to 1. Using this secret sharing scheme as the main building block, we obtain the following results:
    1. Rate-Preserving Non-Malleable Secret Sharing: We give a compiler that takes any secret sharing scheme for a 4-monotone access structure with rate $R$ and converts it into a non-malleable secret sharing scheme for the same access structure with rate $\Omega(R)$. The prior such non-zero-rate construction (Badrinarayanan and Srinivasan, 18) only achieves a rate of $\Theta(R/(t_{\max}\log^2 n))$, where $t_{\max}$ is the maximum size of any minimal set in the access structure. As a special case, for any threshold $t \geq 4$ and an arbitrary $n \geq t$, we get the first constant-rate construction of $t$-out-of-$n$ non-malleable secret sharing.
    2. Leakage-Tolerant Multiparty Computation for General Interaction Patterns: For any function, we give a reduction from constructing leakage-tolerant secure multi-party computation protocols obeying any interaction pattern to constructing a secure (and not necessarily leakage-tolerant) protocol for a related function obeying the star interaction pattern. This improves upon the result of (Halevi et al., ITCS 2016), who constructed a protocol that is secure in a leak-free environment.
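
    As a concrete example of the kind of input scheme the compiler accepts, here is standard Shamir $t$-out-of-$n$ secret sharing over a prime field; the leakage-resilient transformation itself is not shown.

```python
# Baseline building block: standard Shamir t-out-of-n secret sharing
# over GF(P). This is the kind of scheme the paper's compiler takes as
# input; the leakage-resilient compiler itself is not sketched here.
import secrets

P = 2**127 - 1  # a Mersenne prime, large enough for 16-byte secrets

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    # Random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345  # any 3 of 5 shares suffice
```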

    Incrementally Verifiable Computation via Incremental PCPs

    If I commission a long computation, how can I check that the result is correct without redoing the computation myself? This is the question that efficient verifiable computation deals with. In this work, we address the issue of verifying the computation as it unfolds: at any intermediate point in the computation, I would like to see a proof that the current state is correct. Ideally, these proofs should be short, non-interactive, and easy to verify. In addition, the proof at each step should be generated efficiently by updating the previous proof, without recomputing the entire proof from scratch. This notion, known as incrementally verifiable computation, was introduced by Valiant [TCC 08] about a decade ago. Existing solutions follow the approach of recursive proof composition and can be based on strong and non-falsifiable cryptographic assumptions (so-called "knowledge assumptions"). In this work, we present a new framework for constructing incrementally verifiable computation schemes in both the publicly verifiable and designated-verifier settings. Our designated-verifier scheme is based on somewhat homomorphic encryption (which can be based on Learning with Errors), and our publicly verifiable scheme is based on the notion of zero-testable homomorphic encryption, which can be constructed from ideal multilinear maps [Paneth and Rothblum, TCC 17]. Our framework is built around the new notion of a probabilistically checkable proof (PCP) with incremental local updates. An incrementally updatable PCP proves the correctness of an ongoing computation, where after each computation step the value of every symbol can be updated locally without reading any other symbol. This update results in a new PCP for the correctness of the next step in the computation. Our primary technical contribution is constructing such an incrementally updatable PCP. We show how to combine updatable PCPs with recently suggested (ordinary) verifiable computation to obtain our results.
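
    The functionality described above can be summarized as a small interface; the sketch below is our own abstraction of it, not the paper's formalism. The essential constraint is that the per-step proving cost is independent of how many steps have already been proven.

```python
# Hypothetical shape of an incrementally verifiable computation (IVC)
# scheme, abstracted from the description above; names are assumptions.
from abc import ABC, abstractmethod

class IVC(ABC):
    @abstractmethod
    def prove_step(self, state: bytes, proof: bytes) -> tuple[bytes, bytes]:
        """Apply one computation step; return (new_state, updated_proof).
        Key requirement: the previous proof is updated in place of
        re-proving, so cost is independent of the steps taken so far."""

    @abstractmethod
    def verify(self, initial_state: bytes, state: bytes,
               step_count: int, proof: bytes) -> bool:
        """Check that `state` is reached from `initial_state` after
        `step_count` steps of the agreed computation."""
```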

    Predictable Arguments of Knowledge

    We initiate a formal investigation of the power of predictability for argument-of-knowledge systems for NP. Specifically, we consider private-coin argument systems where the answer of the prover can be predicted given the private randomness of the verifier; we call such protocols Predictable Arguments of Knowledge (PAoK). Our study encompasses a full characterization of PAoK, showing that such arguments can be made extremely laconic, with the prover sending a single bit, and can without loss of generality be assumed to have only one round (i.e., two messages) of communication. We additionally explore PAoK satisfying additional properties (including zero-knowledge and the possibility of re-using the same challenge across multiple executions with the prover), present several constructions of PAoK relying on different cryptographic tools, and discuss applications to cryptography.
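
    The structure just described fits in a few lines: the verifier's challenge generator also outputs the predicted one-bit answer, and verification is a simple equality check. In the skeleton below, the challenge and prover algorithms are unspecified placeholders, not a secure instantiation.

```python
# Skeleton of a one-round Predictable Argument of Knowledge, following
# the structure stated in the abstract: the verifier predicts the
# prover's single-bit answer from its own private coins. The algorithms
# gen_challenge and prover are placeholders (assumptions), not a secure
# instantiation from the paper.
import secrets
from typing import Callable

def paok_run(gen_challenge: Callable[[bytes], tuple[bytes, int]],
             prover: Callable[[bytes, bytes, bytes], int],
             statement: bytes, witness: bytes) -> bool:
    coins = secrets.token_bytes(32)                  # verifier's private coins
    challenge, predicted_bit = gen_challenge(coins)  # challenge + prediction
    answer = prover(statement, witness, challenge)   # a single bit
    return answer == predicted_bit                   # accept iff predicted
```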

    Combiners for Functional Encryption, Unconditionally

    Functional encryption (FE) combiners allow one to combine many candidates for a functional encryption scheme, possibly based on different computational assumptions, into another functional encryption candidate, with the guarantee that the resulting candidate is secure as long as at least one of the original candidates is secure. The fundamental question in this area is whether FE combiners exist. There has been a series of works (Ananth et al. (CRYPTO '16), Ananth-Jain-Sahai (EUROCRYPT '17), Ananth et al. (TCC '19)) on constructing FE combiners from various assumptions. We give the first unconditional construction of combiners for functional encryption, resolving this question completely. Our construction immediately implies an unconditional universal functional encryption scheme, i.e., an FE scheme that is secure as long as some secure FE scheme exists. Previously, such results either relied on algebraic assumptions or required subexponential security assumptions.
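
    The "secure if any one candidate is secure" principle is easiest to see in a much simpler setting than FE: for plain symmetric encryption, one can XOR-share the message and encrypt one share per candidate. The toy code below illustrates only that principle; the paper's unconditional FE combiner is a far more involved construction.

```python
# The combiner principle in its simplest setting -- plain symmetric
# encryption rather than FE: XOR-share the message and encrypt one
# share under each candidate. The combined ciphertext hides the message
# as long as at least one candidate does. This only conveys the idea;
# it is NOT the paper's FE combiner.
import secrets
from typing import Callable

Enc = Callable[[bytes, bytes], bytes]  # (key, plaintext) -> ciphertext

def combine_encrypt(candidates: list[Enc], keys: list[bytes],
                    msg: bytes) -> list[bytes]:
    # Random shares for all but the last candidate...
    shares = [secrets.token_bytes(len(msg)) for _ in candidates[:-1]]
    last = msg
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)  # ...so that all shares XOR back to msg
    return [enc(k, s) for enc, k, s in zip(candidates, keys, shares)]
```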