
    On Pseudorandom Encodings

    We initiate a study of pseudorandom encodings: efficiently computable and decodable encoding functions that map messages from a given distribution to a random-looking distribution. For instance, every distribution that can be perfectly and efficiently compressed admits such a pseudorandom encoding. Pseudorandom encodings are motivated by a variety of cryptographic applications, including password-authenticated key exchange, “honey encryption” and steganography. The main question we ask is whether every efficiently samplable distribution admits a pseudorandom encoding. Under different cryptographic assumptions, we obtain positive and negative answers for different flavors of pseudorandom encodings, and relate this question to problems in other areas of cryptography. In particular, by establishing a two-way relation between pseudorandom encoding schemes and efficient invertible sampling algorithms, we reveal a connection between adaptively secure multiparty computation for randomized functionalities and questions in the domain of steganography.
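
    As a rough formalization (our notation, not quoted from the abstract above), a pseudorandom encoding for an efficiently samplable distribution X consists of efficient algorithms (E, D), with E possibly randomized, satisfying
    \[ \Pr_{m \leftarrow X}\big[\mathsf{D}(\mathsf{E}(m)) = m\big] \ge 1 - \mathrm{negl}(\lambda) \qquad \text{(correctness)} \]
    \[ \{\, \mathsf{E}(m) : m \leftarrow X \,\} \;\approx\; \{\, u : u \leftarrow \{0,1\}^{n(\lambda)} \,\} \qquad \text{(pseudorandomness)} \]
    where the indistinguishability may be computational, statistical, or perfect depending on the flavor. In this reading, a perfect and efficient compression algorithm immediately yields such an encoding: compressing a sample of X produces essentially uniform bits, and decompression recovers the message.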

    On Foundations of Protecting Computations

    Information technology systems have become indispensable to uphold our way of living, our economy and our safety. Failure of these systems can have devastating effects. Consequently, securing these systems against malicious intentions deserves our utmost attention. Cryptography provides the necessary foundations for that purpose. In particular, it provides a set of building blocks which allow securing larger information systems. Furthermore, cryptography develops concepts and techniques towards realizing these building blocks. The protection of computations is one invaluable concept for cryptography which paves the way towards realizing a multitude of cryptographic tools. In this thesis, we contribute to this concept of protecting computations in several ways.
    Protecting computations of probabilistic programs. An indistinguishability obfuscator (IO) compiles (deterministic) code such that it becomes provably unintelligible. This can be viewed as the ultimate way to protect (deterministic) computations. Due to very recent research, such obfuscators enjoy plausible candidate constructions. In certain settings, however, it is necessary to protect probabilistic computations. The only known construction of an obfuscator for probabilistic programs is due to Canetti, Lin, Tessaro, and Vaikuntanathan (TCC, 2015) and requires an indistinguishability obfuscator which satisfies extreme security guarantees. We improve this construction and thereby reduce the requirements on the security of the underlying indistinguishability obfuscator. (Agrikola, Couteau, and Hofheinz, PKC, 2020)
    Protecting computations in cryptographic groups. To facilitate the analysis of building blocks which are based on cryptographic groups, these groups are often overidealized such that computations in the group are protected from the outside. Using such overidealizations makes it possible to prove secure building blocks that are sometimes beyond the reach of standard model techniques. However, these overidealizations are subject to certain impossibility results. Recently, Fuchsbauer, Kiltz, and Loss (CRYPTO, 2018) introduced the algebraic group model (AGM) as a relaxation which is closer to the standard model but in several aspects preserves the power of said overidealizations. However, their model still suffers from implausibilities. We develop a framework which allows transporting several security proofs from the AGM into the standard model, thereby evading the above implausibility results, and instantiate this framework using an indistinguishability obfuscator. (Agrikola, Hofheinz, and Kastner, EUROCRYPT, 2020)
    Protecting computations using compression. Perfect compression algorithms have the property that the compressed distribution is truly random, leaving no room for any further compression. This property is invaluable for several cryptographic applications such as “honey encryption” or password-authenticated key exchange. However, perfect compression algorithms only exist for a very small number of distributions. We relax the notion of compression and rigorously study the resulting notion, which we call “pseudorandom encodings”. As a result, we identify various surprising connections between seemingly unrelated areas of cryptography. In particular, we derive novel results for adaptively secure multi-party computation, which allows for protecting computations in distributed settings. Furthermore, we instantiate the weakest version of pseudorandom encodings which suffices for adaptively secure multi-party computation using an indistinguishability obfuscator. (Agrikola, Couteau, Ishai, Jarecki, and Sahai, TCC, 2020)

    Better Two-Round Adaptive Multi-Party Computation

    The only known two-round multi-party computation protocol that withstands adaptive corruption of all parties is the ingenious protocol of Garg and Polychroniadou [TCC 15]. We present protocols that improve on the GP protocol in a number of ways. First, concentrating on the semi-honest case and taking a different approach than GP, we show a two-round, adaptively secure protocol where:
    - Only a global (i.e., non-programmable) reference string is needed. In contrast, in GP the reference string is programmable, even in the semi-honest case.
    - Only polynomially secure indistinguishability obfuscation for circuits and injective one-way functions are assumed. In GP, sub-exponentially secure IO is assumed.
    Second, we show how to make the GP protocol have only RAM complexity, even for Byzantine corruptions. For this we construct the first statistically sound non-interactive zero-knowledge scheme with RAM complexity.

    Towards Multiparty Computation Withstanding Coercion of All Parties

    Incoercible multi-party computation (Canetti-Gennaro ’96) allows parties to engage in secure computation with the additional guarantee that the public transcript of the computation cannot be used by a coercive outsider to verify representations made by the parties regarding their inputs, outputs, and local random choices. That is, it is guaranteed that the only deductions regarding the truthfulness of such representations, made by an outsider who has witnessed the communication among the parties, are the ones that can be drawn just from the represented inputs and outputs alone. To date, all incoercible secure computation protocols withstand coercion of only a fraction of the parties, or else assume that all parties use an execution environment that makes some crucial parts of their local states physically inaccessible even to themselves. We consider, for the first time, the setting where all parties are coerced, and the coercer expects to see the entire history of the computation. We allow both protocol participants and external attackers to access a common reference string which is generated once and for all by an uncorruptible trusted party. In this setting we construct:
    - A general multi-party function evaluation protocol, for any number of parties, that withstands coercion of all parties, as long as all parties use the prescribed “faking algorithm” upon coercion. This holds even if the inputs and outputs represented by coerced parties are globally inconsistent with the evaluated function.
    - A general two-party function evaluation protocol that withstands even the “mixed” case where only some of the coerced parties follow the prescribed faking algorithm. (For instance, the remaining parties might collude with the coercer and disclose their true local states.) This protocol is limited to functions where the input of at least one of the parties is taken from a small (poly-size) domain. It uses fully deniable encryption with public deniability for one of the parties; when instantiated using the fully deniable encryption of Canetti, Park, and Poburinnaya (Crypto ’20), it takes 3 rounds of communication.
    Both protocols operate in the common reference string model, and use fully bideniable encryption (Canetti, Park, and Poburinnaya, Crypto ’20) and sub-exponential indistinguishability obfuscation. Finally, we show that protocols with a certain communication pattern cannot be incoercible, even in a weaker setting where only some parties are coerced.
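
    To give a sense of the kind of “faking” involved (a simplified sender-deniability condition in our own notation, not the paper's full bideniability definition): a deniable encryption scheme comes with a faking algorithm Fake that, given the randomness r actually used to encrypt a message m, produces fake randomness that explains the same ciphertext as an encryption of any claimed message m':
    \[ \big(\, pk,\; \mathsf{Enc}(pk, m; r),\; \mathsf{Fake}(pk, m, r, m') \,\big) \;\approx_c\; \big(\, pk,\; \mathsf{Enc}(pk, m'; r'),\; r' \,\big) . \]
    Fully bideniable encryption additionally lets the receiver produce a fake secret key consistent with the claimed message; as stated above, the protocols rely on the fully deniable scheme of Canetti, Park, and Poburinnaya (Crypto ’20) for this.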

    Studies in incoercible and adaptively secure computation

    Despite being a relatively young field, cryptography taught us how to perform seemingly-impossible tasks, which have now become part of our everyday life. One of them is secure multiparty computation (MPC), which allows mutually distrustful parties to jointly perform a computation on their private inputs, so that each party only learns its prescribed output, but nothing else. In this work we deal with two longstanding challenges of MPC: adaptive security and deniability (or, incoercibility). A protocol is said to be adaptively secure if it still guarantees security for the remaining honest parties, even if some parties turn dishonest during the execution of the protocol, or even after the execution. (In contrast, statically secure protocols give security guarantees only when the set of dishonest parties is fixed before the execution starts.) While the adaptive security threat model is often more realistic than the static one, there is a huge gap between the efficiency of statically and adaptively secure protocols: adaptively secure protocols often require more complicated constructions, stronger assumptions, and more rounds of interaction. We improve in efficiency over the state of the art in adaptive security for a number of settings, including the first adaptively secure MPC protocol in a constant number of rounds, under assumptions comparable to those of static protocols (previously known protocols required as many rounds of interaction as the depth of the circuit being computed). The second challenge we deal with is providing resilience in the situation where an external coercer demands that participants disclose their private inputs and all their secret keys - e.g. via threats, bribe, or court order. Deniable (or, incoercible) protocols allow coerced participants to convincingly lie about their inputs and secret keys, thereby still maintaining their privacy. While the concept was proposed more than twenty years ago, to date secure protocols withstanding coercion of all participants were not known, even for the simple case of encryption. We present the first construction of such an encryption scheme, and then show how to combine it with adaptively secure protocols to obtain the first incoercible MPC which withstands coercion of all parties.

    Garbling Schemes and Applications

    The topic of this thesis is garbling schemes and their applications. A garbling scheme is a set of algorithms for realizing secure two-party computation. A party called a client possesses a private algorithm as well as a private input and would like to compute the algorithm with this input. However, the client might not have enough computational resources to evaluate the function with the input on his own. The client outsources the computation to another party, called an evaluator. Since the client wants to protect the algorithm and the input, he cannot just send the algorithm and the input to the evaluator. With a garbling scheme, the client can protect the privacy of the algorithm, the input and possibly also the privacy of the output. The increase in network-based applications has raised concerns about the privacy of user data. Therefore, privacy-preserving or privacy-enhancing techniques have gained interest in recent research. Garbling schemes seem to be an ideal solution for privacy-preserving applications. First of all, secure garbling schemes hide the algorithm and its input. Secondly, garbling schemes are known to have efficient implementations.
    In this thesis, we propose two applications utilizing garbling schemes. The first application provides privacy-preserving electronic surveillance. The second application extends electronic surveillance to more versatile monitoring, including also health telemetry. This kind of application would be ideal for assisted living services.
    In this work, we also present theoretical results related to garbling schemes. We present several new security definitions for garbling schemes which are of practical use. Traditionally, the same garbled algorithm can be evaluated once with a garbled input. In applications, the same function is often evaluated several times with different inputs. Recently, a solution based on fully homomorphic encryption was proposed that provides arbitrarily reusable garbling schemes. The disadvantage of this approach is that the arbitrary reuse cannot be efficiently implemented due to the inefficiency of fully homomorphic encryption. We propose an alternative approach. Instead of arbitrary reusability, the same garbled algorithm could be used a limited number of times. This gives us a set of new security classes for garbling schemes. We prove several relations between new and established security definitions. As a result, we obtain a complex hierarchy which can be represented as a product of three directed graphs. The three graphs in turn represent the different flavors of security: the security notion, the security model and the level of reusability.
    In addition to defining new security classes, we improve the definition of the side-information function, which has a central role in defining the security of a garbling scheme. The information allowed to be leaked by the garbled algorithm and the garbled input depends on the representation of the algorithm. The established definition of side-information models the side-information of circuits perfectly but does not model the side-information of Turing machines as well. The established model requires that the length of the argument, the length of the final result and the length of the function can be efficiently computed from the side-information function. Moreover, the side-information depends only on the function. In other words, the length of the argument, the length of the final result and the length of the function should depend only on the function.
    For circuits this is a natural requirement, since the number of input wires tells the size of the argument, the number of output wires tells the size of the final result and the number of gates and wires tells the size of the function. On the other hand, the description of a Turing machine does not set any limitation on the size of the argument. Therefore, side-information that depends only on the function cannot provide information about the length of the argument. To tackle this problem, we extend the model of side-information so that the side-information depends on both the function and the argument. The new model of side-information allows us to define new security classes. We show that the old security classes are compatible with the new model of side-information. We also prove relations between the new security classes.
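
    For orientation (this is the standard Bellare-Hoang-Rogaway-style syntax commonly used in this line of work, written in our own notation rather than quoted from the thesis): a garbling scheme consists of algorithms (Gb, En, Ev, De), where garbling a function f yields a garbled function F together with encoding information e and decoding information d, and correctness requires
    \[ (F, e, d) \leftarrow \mathsf{Gb}(1^\lambda, f), \qquad \mathsf{De}\big(d,\; \mathsf{Ev}(F,\; \mathsf{En}(e, x))\big) = f(x) . \]
    The side-information function discussed above specifies what the pair (F, En(e, x)) is allowed to reveal about f and x; for circuits this is typically the topology and the input and output lengths, which is why a side-information function depending only on f suffices for circuits but not for Turing machines.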

    On Adaptively Secure Multiparty Computation with a Short CRS

    In the setting of multiparty computation, a set of mutually distrusting parties wish to securely compute a joint function of their private inputs. A protocol is adaptively secure if honest parties might get corrupted after the protocol has started. Recently (TCC 2015), three constant-round adaptively secure protocols were presented [CGP15, DKR15, GP15]. All three constructions assume that the parties have access to a common reference string (CRS) whose size depends on the function to compute, even when facing semi-honest adversaries. It is unknown whether constant-round adaptively secure protocols exist without assuming access to such a CRS. In this work, we study adaptively secure protocols which only rely on a short CRS that is independent of the function to compute. First, we raise a subtle issue relating to the usage of non-interactive non-committing encryption within security proofs in the UC framework, and explain how to overcome it. We demonstrate the problem in the security proof of the adaptively secure oblivious-transfer protocol from [CLOS02] and provide a complete proof of this protocol. Next, we consider the two-party setting where one of the parties has a polynomial-size input domain, yet the other has no constraints on its input. We show that, assuming the existence of adaptively secure oblivious transfer, every deterministic functionality can be computed with adaptive security in a constant number of rounds. Finally, we present a new primitive called non-committing indistinguishability obfuscation, and show that this primitive is complete for constructing adaptively secure protocols with round complexity independent of the function.
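
    For context (a simplified rendering of the standard non-committing encryption notion in our own notation, not a definition quoted from the paper): such a scheme allows a simulator to produce a dummy public key and ciphertext before any message is fixed, and later, once the actual message m is known (say, upon adaptive corruption), to produce key-generation and encryption randomness that make the dummy values look like an honest encryption of m:
    \[ (pk, c, \mathsf{st}) \leftarrow \mathcal{S}_1(1^\lambda), \qquad (\rho_G, \rho_E) \leftarrow \mathcal{S}_2(\mathsf{st}, m), \]
    \[ (pk, c, \rho_G, \rho_E) \;\approx_c\; \big(pk', \mathsf{Enc}(pk', m; \rho'_E), \rho'_G, \rho'_E\big) \quad \text{where } (pk', sk') = \mathsf{Gen}(1^\lambda; \rho'_G) . \]
    The subtle issue mentioned above concerns how exactly such simulated randomness may be used inside UC security proofs; the precise statement and the fix are given in the paper itself.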

    Two-Round Adaptively Secure MPC from Isogenies, LPN, or CDH

    We present a new framework for building round-optimal (two-round) adaptively secure MPC. We show that a relatively weak notion of OT that we call indistinguishability OT with receiver oblivious sampleability (r-iOT) is enough to build two-round, adaptively secure MPC against malicious adversaries in the CRS model. We then show how to construct r-iOT from CDH, LPN, or isogeny-based assumptions that can be viewed as group actions (such as CSIDH and CSI-FiSh). This yields the first constructions of two-round adaptively secure MPC against malicious adversaries from CDH, LPN, or isogeny-based assumptions. We further extend our non-isogeny results to the plain model, achieving (to our knowledge) the first construction of two-round adaptively secure MPC against semi-honest adversaries in the plain model from LPN. Our results allow us to build a two-round adaptively secure MPC against malicious adversaries from essentially all of the well-studied assumptions in cryptography. In addition, our constructions from isogenies or LPN provide the first post-quantum alternatives to LWE-based constructions for round-optimal adaptively secure MPC. Along the way, we show that r-iOT also implies non-committing encryption (NCE), thereby yielding the first constructions of NCE from isogenies or LPN.
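
    Roughly speaking (our paraphrase of the flavor of property involved, not the paper's formal definition): in a two-message OT the receiver sends a first message generated as OT1(b; r) for its choice bit b, the sender answers with a second message computed from its two inputs and that first message, and the receiver recovers the message selected by b. Receiver oblivious sampleability asks that a first message can also be sampled "obliviously", without knowing any choice bit or usable randomness, in a way that is indistinguishable from an honestly generated one:
    \[ \widetilde{\mathsf{ot}}_1 \leftarrow \mathsf{Sample}(1^\lambda), \qquad \widetilde{\mathsf{ot}}_1 \;\approx_c\; \mathsf{OT}_1(b; r) . \]
    Equivocation properties of this kind are what typically allow simulating a party's view upon adaptive corruption, which is consistent with the paper's observation that r-iOT implies non-committing encryption.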

    MPC for MPC: Secure Computation on a Massively Parallel Computing Architecture

    Massively Parallel Computation (MPC) is a model of computation widely believed to best capture realistic parallel computing architectures such as large-scale MapReduce and Hadoop clusters. Motivated by the fact that many data analytics tasks performed on these platforms involve sensitive user data, we initiate the theoretical exploration of how to leverage MPC architectures to enable efficient, privacy-preserving computation over massive data. Clearly, if a computation task does not lend itself to an efficient implementation on MPC even without security, then we cannot hope to compute it efficiently on MPC with security. We show, on the other hand, that any task that can be efficiently computed on MPC can also be securely computed with comparable efficiency. Specifically, we show the following results:
    - any MPC algorithm can be compiled to a communication-oblivious counterpart while asymptotically preserving its round and space complexity, where communication-obliviousness ensures that any network intermediary observing the communication patterns learns no information about the secret inputs;
    - assuming the existence of Fully Homomorphic Encryption with a suitable notion of compactness and other standard cryptographic assumptions, any MPC algorithm can be compiled to a secure counterpart that defends against an adversary who controls not only intermediate network routers but additionally up to a 1/3 - ε fraction of machines (for an arbitrarily small constant ε); moreover, this compilation preserves the round complexity tightly, and preserves the space complexity up to a multiplicative blowup related to the security parameter.
    As an initial exploration of this important direction, our work suggests new definitions and proposes novel protocols that blend algorithmic and cryptographic techniques.
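
    As a toy illustration of what communication-obliviousness means here (a made-up sketch, not code or notation from the paper): an observer who sees only which machine talks to which can already learn something about the inputs when the pattern is data-dependent, whereas a fixed pattern reveals nothing about them.

        # Toy sketch: a data-dependent communication pattern leaks through the
        # pattern itself, while a fixed ("oblivious") pattern does not.
        # All names here are illustrative; this is not the paper's compiler.
        from typing import List, Tuple

        def data_dependent_pattern(inputs: List[int]) -> List[Tuple[int, int]]:
            """Machine i sends to machine (inputs[i] % n): the visible
            (sender, receiver) edges depend on the secret inputs."""
            n = len(inputs)
            return [(i, inputs[i] % n) for i in range(n)]

        def oblivious_pattern(inputs: List[int]) -> List[Tuple[int, int]]:
            """Fixed round-robin: machine i always talks to machine (i + 1) % n
            with fixed-size (encrypted) messages, so the visible edges are
            independent of the inputs."""
            n = len(inputs)
            return [(i, (i + 1) % n) for i in range(n)]

        if __name__ == "__main__":
            for xs in ([3, 1, 4, 1], [0, 0, 0, 0]):
                print(xs, data_dependent_pattern(xs), oblivious_pattern(xs))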