    Ad Hoc Multi-Input Functional Encryption

    Consider sources that supply sensitive data to an aggregator. Standard encryption only hides the data from eavesdroppers, but using specialized encryption one can hope to hide the data (to the extent possible) from the aggregator itself. For flexibility and security, we envision schemes that allow sources to supply encrypted data, such that at any point a dynamically chosen subset of sources can allow an agreed-upon joint function of their data to be computed by the aggregator. A primitive called multi-input functional encryption (MIFE), due to Goldwasser et al. (EUROCRYPT 2014), comes close, but has two main limitations:
    - it requires trust in a third party, who is able to decrypt all the data, and
    - it requires the function arity to be fixed at setup time and to be equal to the number of parties.
    To drop these limitations, we introduce a new notion of ad hoc MIFE. In our setting, each source generates its own public key and issues individual, function-specific secret keys to an aggregator. For successful decryption, an aggregator must obtain a separate key from each source whose ciphertext is being computed upon. The aggregator could obtain multiple such secret keys from a user, corresponding to functions of varying arity. For this primitive, we obtain the following results:
    - We show that standard MIFE for general functions can be bootstrapped to ad hoc MIFE for free, i.e., without making any additional assumption.
    - We provide a direct construction of ad hoc MIFE for the inner-product functionality based on the Learning with Errors (LWE) assumption. This yields the first construction of this natural primitive based on a standard assumption.
    At a technical level, our results are obtained by combining standard MIFE schemes and two-round secure multiparty computation (MPC) protocols in novel ways, highlighting an interesting interplay between MIFE and two-round MPC.
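    The setting described above suggests a small interface. The sketch below is a hedged illustration only: the class and method names are hypothetical and not taken from the paper; it merely records the shape of the primitive (per-source key generation, per-source encryption, function-specific key issuance by each source, and aggregator-side decryption over a chosen subset).

        # Hypothetical interface sketch for ad hoc MIFE (names are illustrative, not from the paper).
        from dataclasses import dataclass
        from typing import Any, Callable, Protocol, Sequence


        @dataclass
        class SourceKeys:
            public_key: Any   # published by the source
            secret_key: Any   # kept by the source


        class AdHocMIFE(Protocol):
            def source_setup(self) -> SourceKeys:
                """Each source runs its own setup; there is no trusted central authority."""
                ...

            def encrypt(self, keys: SourceKeys, x: Any) -> Any:
                """A source encrypts its input under its own keys."""
                ...

            def issue_key(self, keys: SourceKeys, group_pks: Sequence[Any],
                          f: Callable[..., Any]) -> Any:
                """A source issues a function-specific partial key for a chosen group of
                sources (identified by their public keys); the arity is len(group_pks)."""
                ...

            def decrypt(self, partial_keys: Sequence[Any], ciphertexts: Sequence[Any]) -> Any:
                """The aggregator needs one partial key per participating source to learn
                f(x_1, ..., x_n) and, ideally, nothing more."""
                ...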

    Separating Two-Round Secure Computation From Oblivious Transfer

    We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing a round-preserving reduction to the task of secure 2-party computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by only making black-box use of 2-round OT. This is of theoretical and potentially also practical value, as black-box use of primitives tends to lead to more efficient constructions. Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT. As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed "non-compact" variant of 2-party homomorphic secret sharing from 2-round OT.
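    For reference, the 2-party functionality at the center of this separation is oblivious transfer, in its standard 1-out-of-2 variant. The snippet below only spells out the ideal functionality, i.e. what the parties are supposed to learn; it is our own illustration and is not a secure protocol.

        # Ideal 1-out-of-2 oblivious transfer functionality (illustrative only, not a protocol).
        # The sender holds two messages (m0, m1); the receiver holds a choice bit b.
        # The receiver learns m_b; the sender learns nothing about b, and the receiver
        # learns nothing about the other message.

        def ideal_ot(m0: bytes, m1: bytes, b: int) -> bytes:
            """Output delivered to the receiver by an imaginary trusted party."""
            assert b in (0, 1)
            return m1 if b else m0


        if __name__ == "__main__":
            print(ideal_ot(b"secret-0", b"secret-1", 1))  # receiver with b=1 obtains m1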

    MPC for MPC: Secure Computation on a Massively Parallel Computing Architecture

    Massively Parallel Computation (MPC) is a model of computation widely believed to best capture realistic parallel computing architectures such as large-scale MapReduce and Hadoop clusters. Motivated by the fact that many data analytics tasks performed on these platforms involve sensitive user data, we initiate the theoretical exploration of how to leverage MPC architectures to enable efficient, privacy-preserving computation over massive data. Clearly, if a computation task does not lend itself to an efficient implementation on MPC even without security, then we cannot hope to compute it efficiently on MPC with security. We show, on the other hand, that any task that can be efficiently computed on MPC can also be securely computed with comparable efficiency. Specifically, we show the following results:
    - any MPC algorithm can be compiled to a communication-oblivious counterpart while asymptotically preserving its round and space complexity, where communication-obliviousness ensures that any network intermediary observing the communication patterns learns no information about the secret inputs;
    - assuming the existence of Fully Homomorphic Encryption with a suitable notion of compactness and other standard cryptographic assumptions, any MPC algorithm can be compiled to a secure counterpart that defends against an adversary who controls not only intermediate network routers but additionally up to a 1/3 − ε fraction of machines (for an arbitrarily small constant ε); moreover, this compilation preserves the round complexity tightly, and preserves the space complexity up to a multiplicative blowup related to the security parameter.
    As an initial exploration of this important direction, our work suggests new definitions and proposes novel protocols that blend algorithmic and cryptographic techniques.
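    The communication-obliviousness notion in the first bullet can be illustrated with a toy padding transformation. The sketch below is our own illustration, not the paper's compiler: every machine sends a fixed-size message to every other machine in every round, so the traffic pattern and message lengths observed on the network are independent of the inputs.

        # Toy illustration of a communication-oblivious round: a fixed all-to-all pattern
        # with fixed-size messages, regardless of what each machine actually needs to send.
        from typing import Dict, List, Tuple

        MSG_BYTES = 64  # fixed wire size per (sender, receiver) pair; an assumption of this toy


        def pad(payload: bytes, size: int = MSG_BYTES) -> bytes:
            """Pad (or reject) a payload so every transmitted message has the same length."""
            if len(payload) > size:
                raise ValueError("payload exceeds the fixed message size")
            return payload + b"\x00" * (size - len(payload))


        def oblivious_round(outboxes: Dict[int, Dict[int, bytes]],
                            num_machines: int) -> List[Tuple[int, int, bytes]]:
            """Emit one message per ordered pair of machines; empty slots become dummy
            (all-zero) messages, so an observer sees the same traffic every round."""
            traffic = []
            for sender in range(num_machines):
                for receiver in range(num_machines):
                    if sender == receiver:
                        continue
                    payload = outboxes.get(sender, {}).get(receiver, b"")
                    traffic.append((sender, receiver, pad(payload)))
            return traffic


        if __name__ == "__main__":
            # Machine 0 really only wants to talk to machine 2, but the pattern hides that.
            wire = oblivious_round({0: {2: b"hello"}}, num_machines=3)
            print(len(wire), "messages, each", MSG_BYTES, "bytes")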

    On Foundations of Protecting Computations

    Information technology systems have become indispensable to uphold our way of living, our economy and our safety. Failure of these systems can have devastating effects. Consequently, securing these systems against malicious intentions deserves our utmost attention. Cryptography provides the necessary foundations for that purpose. In particular, it provides a set of building blocks which allow us to secure larger information systems. Furthermore, cryptography develops concepts and techniques towards realizing these building blocks. The protection of computations is one invaluable concept for cryptography which paves the way towards realizing a multitude of cryptographic tools. In this thesis, we contribute to this concept of protecting computations in several ways.
    Protecting computations of probabilistic programs. An indistinguishability obfuscator (IO) compiles (deterministic) code such that it becomes provably unintelligible. This can be viewed as the ultimate way to protect (deterministic) computations. Due to very recent research, such obfuscators enjoy plausible candidate constructions. In certain settings, however, it is necessary to protect probabilistic computations. The only known construction of an obfuscator for probabilistic programs is due to Canetti, Lin, Tessaro, and Vaikuntanathan (TCC 2015) and requires an indistinguishability obfuscator which satisfies extreme security guarantees. We improve this construction and thereby reduce the requirements on the security of the underlying indistinguishability obfuscator. (Agrikola, Couteau, and Hofheinz, PKC 2020)
    Protecting computations in cryptographic groups. To facilitate the analysis of building blocks which are based on cryptographic groups, these groups are often overidealized such that computations in the group are protected from the outside. Using such overidealizations makes it possible to prove building blocks secure which are sometimes beyond the reach of standard-model techniques. However, these overidealizations are subject to certain impossibility results. Recently, Fuchsbauer, Kiltz, and Loss (CRYPTO 2018) introduced the algebraic group model (AGM) as a relaxation which is closer to the standard model but in several aspects preserves the power of said overidealizations. However, their model still suffers from implausibilities. We develop a framework which allows us to transport several security proofs from the AGM into the standard model, thereby evading the above implausibility results, and instantiate this framework using an indistinguishability obfuscator. (Agrikola, Hofheinz, and Kastner, EUROCRYPT 2020)
    Protecting computations using compression. Perfect compression algorithms have the property that the compressed distribution is truly random, leaving no room for any further compression. This property is invaluable for several cryptographic applications such as “honey encryption” or password-authenticated key exchange. However, perfect compression algorithms only exist for a very small number of distributions. We relax the notion of compression and rigorously study the resulting notion, which we call “pseudorandom encodings”. As a result, we identify various surprising connections between seemingly unrelated areas of cryptography. In particular, we derive novel results for adaptively secure multi-party computation, which allows for protecting computations in distributed settings. Furthermore, we instantiate the weakest version of pseudorandom encodings which suffices for adaptively secure multi-party computation using an indistinguishability obfuscator. (Agrikola, Couteau, Ishai, Jarecki, and Sahai, TCC 2020)

    On Pseudorandom Encodings

    We initiate a study of pseudorandom encodings: efficiently computable and decodable encoding functions that map messages from a given distribution to a random-looking distribution. For instance, every distribution that can be perfectly and efficiently compressed admits such a pseudorandom encoding. Pseudorandom encodings are motivated by a variety of cryptographic applications, including password-authenticated key exchange, “honey encryption” and steganography. The main question we ask is whether every efficiently samplable distribution admits a pseudorandom encoding. Under different cryptographic assumptions, we obtain positive and negative answers for different flavors of pseudorandom encodings, and relate this question to problems in other areas of cryptography. In particular, by establishing a two-way relation between pseudorandom encoding schemes and efficient invertible sampling algorithms, we reveal a connection between adaptively secure multiparty computation for randomized functionalities and questions in the domain of steganography.
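    The compression example in the abstract can be made concrete for a toy distribution. The sketch below is our own illustration under simplifying assumptions: the sampler outputs a uniformly random even number in {0, ..., 510}, which compresses perfectly to one uniform byte; since that compressed byte is already truly random, compression itself serves as the pseudorandom encoding.

        # Toy pseudorandom encoding for a perfectly compressible distribution
        # (illustration only; the distribution and names are ours, not from the paper).
        import os


        def sample() -> int:
            """Sampler for the distribution: uniform over the even numbers 0, 2, ..., 510."""
            return 2 * os.urandom(1)[0]


        def encode(x: int) -> bytes:
            """Perfect compression of a sample: one byte that is itself uniformly random."""
            assert x % 2 == 0 and 0 <= x <= 510
            return bytes([x // 2])


        def decode(e: bytes) -> int:
            """Decoding recovers the original sample exactly."""
            return 2 * e[0]


        if __name__ == "__main__":
            x = sample()
            assert decode(encode(x)) == x
            print(x, encode(x).hex())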

    Output Compression, MPC, and iO for Turing Machines

    In this work, we study the fascinating notion of output-compressing randomized encodings for Turing Machines in a shared randomness model. In this model, the encoder and decoder have access to a shared random string, and the efficiency requirement is that the size of the encoding must be independent of the running time and output length of the Turing Machine on the given input, while the length of the shared random string is allowed to grow with the length of the output. We show how to construct output-compressing randomized encodings for Turing machines in the shared randomness model, assuming iO for circuits and any assumption in the set {LWE, DDH, Nth Residuosity}. We then show interesting implications of the above result to basic feasibility questions in the areas of secure multiparty computation (MPC) and indistinguishability obfuscation (iO):
    1. Compact MPC for Turing Machines in the Random Oracle Model: In the context of MPC, we consider the following basic feasibility question: does there exist a malicious-secure MPC protocol for Turing Machines whose communication complexity is independent of the running time and output length of the Turing Machine when executed on the combined inputs of all parties? We call such a protocol a compact MPC protocol. Hubacek and Wichs [HW15] showed, via an incompressibility argument, that even for the restricted setting of circuits it is impossible to construct a malicious-secure two-party computation protocol in the plain model where the communication complexity is independent of the output length. In this work, we show how to evade this impossibility by compiling any (non-compact) MPC protocol in the plain model to a compact MPC protocol for Turing Machines in the Random Oracle Model, assuming output-compressing randomized encodings in the shared randomness model.
    2. Succinct iO for Turing Machines in the Shared Randomness Model: In all existing constructions of iO for Turing Machines, the size of the obfuscated program grows with a bound on the input length. In this work, we show how to construct an iO scheme for Turing Machines in the shared randomness model where the size of the obfuscated program is independent of a bound on the input length, assuming iO for circuits and any assumption in the set {LWE, DDH, Nth Residuosity}.
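    The syntax of the central object can be summarized as a small interface. The sketch below is a hedged illustration; the class and method names are hypothetical and only record the constraints the abstract states (a shared random string available to both sides, an encoding whose size is independent of running time and output length, and shared randomness that may grow with the output length).

        # Hypothetical syntax sketch for an output-compressing randomized encoding of a
        # Turing machine M on input x, in the shared randomness model (names are ours).
        from typing import Any, Protocol


        class OutputCompressingRE(Protocol):
            def sample_shared_randomness(self, output_length_bound: int) -> bytes:
                """Shared random string; its length may grow with the output length."""
                ...

            def encode(self, crs: bytes, machine: Any, x: bytes) -> bytes:
                """Encoding of (machine, x); its size should be independent of the
                machine's running time and output length on x."""
                ...

            def decode(self, crs: bytes, encoding: bytes) -> bytes:
                """Recovers M(x) using the same shared random string."""
                ...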

    Concurrent-Secure Two-Party Computation in Two Rounds from Subexponential LWE

    Very recently, two works were able to construct two-round secure multi-party computation (MPC) protocols in the plain model, without setup, relying on the superpolynomial simulation framework of Pass [Pas03]. The first work [ABG+21] achieves this by relying on subexponential non-interactive witness-indistinguishable arguments, the subexponential SXDH assumption, and the existence of a special type of non-interactive non-malleable commitment. The second work [FJK21] additionally achieves concurrent security, and relies on subexponential quantum hardness of the learning-with-errors (LWE) problem, subexponential classical hardness of SXDH, the existence of a subexponentially-secure (classically-hard) indistinguishability obfuscation (iO) scheme, and time-lock puzzles. This paper focuses on the assumptions necessary to construct secure computation protocols in two rounds without setup, restricting attention to the case of two-party functionalities. In this particular case, we show how to build a two-round, concurrent-secure, two-party computation (2PC) protocol based on a single, standard, post-quantum assumption, namely subexponential hardness of the learning-with-errors (LWE) problem. We note that our protocol is the first two-round concurrent-secure 2PC protocol that does not require the existence of a one-round non-malleable commitment (NMC). Instead, we are able to use the two-round NMCs of [KS17a], which are instantiable from subexponential LWE.

    Better Two-Round Adaptive Multi-Party Computation

    The only known two-round multi-party computation protocol that withstands adaptive corruption of all parties is the ingenious protocol of Garg and Polychroniadou [TCC 15]. We present protocols that improve on the GP protocol in a number of ways. First, concentrating on the semi-honest case and taking a different approach than GP, we show a two-round, adaptively secure protocol where:
    - Only a global (i.e., non-programmable) reference string is needed. In contrast, in GP the reference string is programmable, even in the semi-honest case.
    - Only polynomially secure indistinguishability obfuscation for circuits and injective one-way functions are assumed. In GP, sub-exponentially secure IO is assumed.
    Second, we show how to make the GP protocol have only RAM complexity, even for Byzantine corruptions. For this, we construct the first statistically sound non-interactive zero-knowledge scheme with RAM complexity.

    From FE Combiners to Secure MPC and Back

    Functional encryption (FE) has incredible applications towards computing on encrypted data. However, constructing the most general form of this primitive has remained elusive. Although some candidate constructions exist, they rely on nonstandard assumptions, and thus, their security has been questioned. An FE combiner attempts to make use of these candidates while minimizing the trust placed on any individual FE candidate. Informally, an FE combiner takes in a set of FE candidates and outputs a secure FE scheme if at least one of the candidates is secure. Another fundamental area in cryptography is secure multi-party computation (MPC), which has been extensively studied for several decades. In this work, we initiate a formal study of the relationship between functional encryption (FE) combiners and secure multi-party computation (MPC). In particular, we show implications in both directions between these primitives. As a consequence of these implications, we obtain the following main results.
    1) A two-round semi-honest MPC protocol in the plain model secure against up to (n-1) corruptions, with communication complexity proportional only to the depth of the circuit being computed, assuming LWE. Prior two-round protocols that achieved this communication complexity required a common reference string.
    2) A functional encryption combiner based on pseudorandom generators (PRGs) in NC^1. Such PRGs can be instantiated from assumptions such as DDH and LWE. Previous constructions of FE combiners were known only from the learning with errors assumption. Using this result, we build a universal construction of functional encryption: an explicit construction of functional encryption based only on the assumptions that functional encryption exists and that PRGs in NC^1 exist.
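    The informal definition of an FE combiner above can be written down as a small interface. The sketch below is only an illustration of the syntax: the candidate interface and the combiner signature are hypothetical names of ours, and no actual combining technique from the paper is implemented.

        # Hypothetical syntax sketch for an FE combiner (names are illustrative only).
        from typing import Any, Callable, Protocol, Sequence, Tuple


        class FECandidate(Protocol):
            def setup(self) -> Tuple[Any, Any]:
                """Returns (master public key, master secret key)."""
                ...

            def keygen(self, msk: Any, f: Callable[[Any], Any]) -> Any:
                """Returns a functional key for f."""
                ...

            def encrypt(self, mpk: Any, x: Any) -> Any:
                ...

            def decrypt(self, fk: Any, ct: Any) -> Any:
                """Should output f(x) when fk is a key for f and ct encrypts x."""
                ...


        def combine(candidates: Sequence[FECandidate]) -> FECandidate:
            """An FE combiner: given several candidate FE schemes, it should output a
            scheme that remains secure as long as at least one candidate is secure.
            The actual construction is the subject of the paper and is not reproduced
            here; this stub only fixes the syntax."""
            raise NotImplementedError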