
    How to Generate and Use Universal Samplers

    The random oracle is an idealization that allows us to model a hash function as an oracle that outputs a uniformly random string on any input. We introduce the notion of a universal sampler scheme, which extends the random oracle to a method of sampling securely from arbitrary distributions. We describe several applications that provide a natural motivation for this notion; these include generating the trusted parameters for many schemes from just a single trusted setup. We further demonstrate the versatility of universal samplers by showing how they give rise to simple constructions of identity-based encryption and multiparty key exchange. In particular, we construct adaptively secure non-interactive multiparty key exchange in the random oracle model based on indistinguishability obfuscation, obtaining the first known construction of adaptively secure NIKE without complexity leveraging. We give a solution that shows how to transform any random oracle into a universal sampler scheme, based on indistinguishability obfuscation. At the heart of our construction and proof is a new technique we call “delayed backdoor programming” that we believe will have other applications.
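
    As a concrete illustration of the interface only, the following is a minimal, insecure Python sketch. The names setup/sample and the idea of deriving sampling coins by hashing the distribution's description are assumptions for exposition; the paper's actual construction relies on indistinguishability obfuscation.

    import hashlib, os

    def setup():
        # One-time trusted setup; here just a public salt.
        return os.urandom(16)

    def sample(params, dist_circuit, dist_desc: bytes):
        # Coins are derived deterministically from the distribution's
        # description, so every party querying the same distribution
        # obtains the SAME sample -- the defining property that lets a
        # universal sampler replace per-scheme trusted setups.
        coins = hashlib.sha256(params + dist_desc).digest()
        return dist_circuit(coins)

    # Everyone reconstructs identical "trusted parameters" for a scheme.
    U = setup()
    toy_sampler = lambda coins: int.from_bytes(coins, "big") | 1
    assert sample(U, toy_sampler, b"scheme-X") == sample(U, toy_sampler, b"scheme-X")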

    A Framework for Efficient Adaptively Secure Composable Oblivious Transfer in the ROM

    Oblivious Transfer (OT) is a fundamental cryptographic protocol that finds a number of applications, in particular as an essential building block for two-party and multi-party computation. We construct a round-optimal (2 rounds) universally composable (UC) protocol for oblivious transfer secure against active adaptive adversaries from any OW-CPA secure public-key encryption scheme with certain properties in the random oracle model (ROM). In terms of computation, our protocol only requires the generation of a public/secret key pair, two encryption operations, and one decryption operation, apart from a few calls to the random oracle. In terms of communication, our protocol only requires the transfer of one public key, two ciphertexts, and three binary strings of roughly the same size as the message. Next, we show how to instantiate our construction under the low-noise LPN, McEliece, QC-MDPC, LWE, and CDH assumptions. Our instantiations based on the low-noise LPN, McEliece, and QC-MDPC assumptions are the first UC-secure OT protocols based on coding assumptions to achieve: 1) adaptive security, 2) optimal round complexity, and 3) low communication and computational complexities. Previous results in this setting only achieved static security and used costly cut-and-choose techniques. Our instantiation based on CDH achieves adaptive security at the small cost of communicating only two more group elements as compared to the gap-DH-based Simplest OT protocol of Chou and Orlandi (LatinCrypt 2015), which only achieves static security in the ROM.
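
    To make the stated cost profile concrete (one key pair, two encryptions, one decryption, a few random-oracle calls), here is a heavily simplified Python sketch of a 2-round OT message flow in the ROM. The toy hash-based "PKE" and the slot-masking step are assumptions for illustration only; this is not the paper's protocol and offers no real security.

    import hashlib, os

    def H(*parts):                       # the random oracle, modeled by a hash
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return h.digest()

    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

    def receiver_round1(choice):
        sk = os.urandom(32)
        pk = H(sk)                       # the one key generation
        slots = [os.urandom(32), os.urandom(32)]
        slots[choice] = pk               # hide which slot holds the real key
        return sk, slots

    def sender_round2(slots, m0, m1):
        c0 = xor(m0, H(b"0", slots[0]))  # encryption #1
        c1 = xor(m1, H(b"1", slots[1]))  # encryption #2
        return c0, c1

    def receiver_output(sk, choice, cts):
        label = b"1" if choice else b"0"
        return xor(cts[choice], H(label, H(sk)))   # the one decryption

    sk, slots = receiver_round1(1)
    cts = sender_round2(slots, b"m0" * 16, b"m1" * 16)
    assert receiver_output(sk, 1, cts) == b"m1" * 16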

    Guaranteeing correctness in privacy-friendly outsourcing by certificate validation

    With computation power in the cloud becoming a commodity, it is increasingly convenient to outsource computations to external computation parties. Confidentiality, even of inputs contributed by mutually distrusting parties, can be assured by distributing computations between different parties using multiparty computation. Unfortunately, this typically only guarantees correctness if a limited number of computation parties are malicious. If correctness is needed when all computation parties are malicious, then one currently needs either fully homomorphic encryption or “universally verifiable” multiparty computation; both are impractical for large computations. In this paper, we show for the first time how to achieve practical privacy-friendly outsourcing with correctness guarantees, by using normal multiparty techniques to compute the result of a computation, and then using slower verifiable techniques only to verify that this result was correct. We demonstrate the feasibility of our approach in a linear programming case study. Keywords: secret sharing, threshold cryptography, zero knowledge.
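
    The compute-then-verify pattern is easy to see in the clear, outside of MPC: for linear programming, verifying optimality via a dual certificate is far cheaper than solving. Below is a plain-Python sketch of just that verification step; the paper performs the analogous check under secret sharing with verifiable techniques, whereas this toy works on public floats.

    # maximize c.x s.t. A x <= b, x >= 0; dual: minimize b.y s.t. A^T y >= c, y >= 0
    def verify_lp(A, b, c, x, y, eps=1e-9):
        primal_ok = all(sum(A[i][j] * x[j] for j in range(len(x))) <= b[i] + eps
                        for i in range(len(b))) and min(x) >= -eps
        dual_ok = all(sum(A[i][j] * y[i] for i in range(len(y))) >= c[j] - eps
                      for j in range(len(c))) and min(y) >= -eps
        # Equal primal and dual objective values certify optimality
        # (strong duality), so no re-solving is needed.
        gap = abs(sum(ci * xi for ci, xi in zip(c, x)) -
                  sum(bi * yi for bi, yi in zip(b, y)))
        return primal_ok and dual_ok and gap <= eps

    # maximize x0 + x1 subject to x0 <= 1, x1 <= 1
    assert verify_lp(A=[[1, 0], [0, 1]], b=[1, 1], c=[1, 1], x=[1, 1], y=[1, 1])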

    On Pseudorandom Encodings

    We initiate a study of pseudorandom encodings: efficiently computable and decodable encoding functions that map messages from a given distribution to a random-looking distribution. For instance, every distribution that can be perfectly and efficiently compressed admits such a pseudorandom encoding. Pseudorandom encodings are motivated by a variety of cryptographic applications, including password-authenticated key exchange, “honey encryption” and steganography. The main question we ask is whether every efficiently samplable distribution admits a pseudorandom encoding. Under different cryptographic assumptions, we obtain positive and negative answers for different flavors of pseudorandom encodings, and relate this question to problems in other areas of cryptography. In particular, by establishing a two-way relation between pseudorandom encoding schemes and efficient invertible sampling algorithms, we reveal a connection between adaptively secure multiparty computation for randomized functionalities and questions in the domain of steganography.
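
    The compression observation can be made concrete in a few lines: when samples from a distribution compress perfectly to uniform strings, the compressor/decompressor pair is already a pseudorandom encoding. A toy Python example over the uniform distribution on even bytes (a deliberately trivial distribution, chosen here only for illustration):

    import os

    def encode(m):                     # even byte -> uniform value in 0..127
        assert m % 2 == 0 and 0 <= m < 256
        return m >> 1                  # perfect compression: drop the known bit

    def decode(c):
        return c << 1                  # exact inverse

    m = (os.urandom(1)[0] // 2) * 2    # sample an even byte
    assert decode(encode(m)) == m      # decoding recovers the message
    # encode(m) is uniform on 7 bits, hence already "random-looking".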

    Encryptor Combiners: A Unified Approach to Multiparty NIKE, (H)IBE, and Broadcast Encryption

    We define the concept of an encryptor combiner. Roughly, such a combiner takes as input n public keys for a public-key encryption scheme and produces a new combined public key. Anyone knowing a secret key for one of the input public keys can learn the secret key for the combined public key, but an outsider who just knows the input public keys (and can therefore compute the combined public key for himself) cannot decrypt ciphertexts from the combined public key. We actually think of public keys more generally as encryption procedures, which can correspond to, say, encrypting to a particular identity under an IBE scheme or encrypting to a set of attributes under an ABE scheme. We show that encryptor combiners satisfying certain natural properties give natural constructions of multi-party non-interactive key exchange, low-overhead broadcast encryption, and hierarchical identity-based encryption. We then show how to construct two different encryptor combiners. Our first is built from universal samplers (which can in turn be built from indistinguishability obfuscation) and is sufficient for each application above, in some cases improving on existing obfuscation-based constructions. Our second is built from lattices and is sufficient for hierarchical identity-based encryption. Thus, encryptor combiners serve as a new abstraction that (1) is a useful tool for designing cryptosystems, (2) unifies constructing hierarchical IBE from vastly different assumptions, and (3) provides a target for instantiating obfuscation applications from better tools.
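
    A minimal sketch of the interface, instantiated with the folklore combiner (an assumption for exposition, not either of the paper's constructions): sample a fresh combined key pair and attach to the combined public key an encryption of the combined secret key under every input public key. Hashed ElGamal over a toy prime stands in for the PKE.

    import hashlib, secrets

    P, G = 2**255 - 19, 2              # toy group parameters, not a vetted choice

    def keygen():
        sk = secrets.randbelow(P - 2) + 1
        return sk, pow(G, sk, P)

    def enc(pk, m32):                  # hashed-ElGamal encryption of 32 bytes
        r = secrets.randbelow(P - 2) + 1
        pad = hashlib.sha256(pow(pk, r, P).to_bytes(32, "big")).digest()
        return pow(G, r, P), bytes(a ^ b for a, b in zip(pad, m32))

    def dec(sk, ct):
        c1, c2 = ct
        pad = hashlib.sha256(pow(c1, sk, P).to_bytes(32, "big")).digest()
        return bytes(a ^ b for a, b in zip(pad, c2))

    def combine(pks):                  # -> combined public key plus "hints"
        sk_star, pk_star = keygen()
        return pk_star, [enc(pk, sk_star.to_bytes(32, "big")) for pk in pks]

    def derive(sk_i, i, hints):        # any one input secret key recovers sk_star
        return int.from_bytes(dec(sk_i, hints[i]), "big")

    (skA, pkA), (skB, pkB) = keygen(), keygen()
    pk_star, hints = combine([pkA, pkB])
    assert pow(G, derive(skA, 0, hints), P) == pk_star
    assert pow(G, derive(skB, 1, hints), P) == pk_star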

    Adaptively Secure Constrained Pseudorandom Functions

    A constrained pseudorandom function (PRF) behaves like a standard PRF, but with the added feature that the (master) secret key holder, having secret key K, can produce a constrained key K_f that allows evaluation of the PRF on the subset of the domain determined by a predicate function f within some family F. While previous constructions gave constrained PRFs for poly-sized circuits, all reductions for such functionality were based in the selective model of security, where an attacker declares which point he is attacking before seeing any constrained keys. In this paper we give new constrained PRF constructions for circuits that have polynomial reductions to indistinguishability obfuscation in the random oracle model. Our solution is built from two recently emerged primitives: adaptively secure attribute-based encryption (ABE) for circuits, and universal parameters as introduced by Hofheinz et al. Both primitives are constructible from indistinguishability obfuscation (iO) (and injective pseudorandom generators) with only polynomial loss.
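
    The constrained-key functionality is easiest to see in the classic GGM tree, which supports prefix constraints (a standard textbook example, not the circuit-constrained construction of the paper): the constrained key K_f for the predicate "x starts with p" is simply the subtree seed at p.

    import hashlib

    def step(k, b):                    # the two halves of the GGM length doubler
        return hashlib.sha256(b + k).digest()

    def prf(k, x):                     # x is a bit string such as "0110"
        for b in x:
            k = step(k, b.encode())
        return k

    def constrain(k, prefix):          # key for the subdomain prefix||*
        return prf(k, prefix)

    def eval_constrained(k_f, prefix, x):
        assert x.startswith(prefix)    # evaluable only under the prefix
        return prf(k_f, x[len(prefix):])

    K = b"\x00" * 32
    Kf = constrain(K, "01")
    assert eval_constrained(Kf, "01", "0110") == prf(K, "0110")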

    Born and Raised Distributively: Fully Distributed Non-Interactive Adaptively-Secure Threshold Signatures with Short Shares

    Threshold cryptography is a fundamental distributed computational paradigm for enhancing the availability and the security of cryptographic public-key schemes. It does so by dividing private keys into n shares handed out to distinct servers. In threshold signature schemes, a set of at least t+1 ≤ n servers is needed to produce a valid digital signature. Availability is assured by the fact that any subset of t+1 servers can produce a signature when authorized. At the same time, the scheme should remain robust (in the fault-tolerance sense) and unforgeable (cryptographically) against up to t corrupted servers; i.e., it adds quorum control to traditional cryptographic services and introduces redundancy. Originally, most practical threshold signatures have a number of demerits: they have been analyzed in a static corruption model (where the set of corrupted servers is fixed at the very beginning of the attack), they require interaction, they assume a trusted dealer in the key generation phase (so that the system is not fully distributed), or they suffer from certain overheads in terms of storage (large share sizes). In this paper, we construct practical fully distributed (the private key is born distributed), non-interactive schemes -- where the servers can compute their partial signatures without communication with other servers -- with adaptive security (i.e., the adversary corrupts servers dynamically based on its full view of the history of the system). Our schemes are very efficient in terms of computation, communication, and scalable storage (with private key shares of size O(1), whereas certain solutions incur O(n) storage costs at each server). Unlike other adaptively secure schemes, our schemes are erasure-free (reliable erasure is a property that is hard to assure and hard to administer in actual systems). To the best of our knowledge, such a fully distributed, highly constrained scheme has been an open problem in the area. In particular, and of special interest, is the fact that Pedersen's traditional distributed key generation (DKG) protocol can be safely employed in the initial key generation phase when the system is born, although it is well known not to ensure uniformly distributed public keys. An advantage of this is that the protocol only takes one round optimistically (in the absence of faulty players).
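
    The t+1-of-n access structure at the core of the abstract is plain Shamir secret sharing over a prime field; the sketch below shows only that sharing layer (the signing, robustness, and distributed key generation are omitted, and the field is a toy choice).

    import secrets

    Q = 2**127 - 1                     # a Mersenne prime; toy field modulus

    def share(secret, t, n):           # random degree-t polynomial, f(0) = secret
        coeffs = [secret] + [secrets.randbelow(Q) for _ in range(t)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % Q
            return acc
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):           # Lagrange interpolation at x = 0
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = (num * -xj) % Q
                    den = (den * (xi - xj)) % Q
            secret = (secret + yi * num * pow(den, -1, Q)) % Q
        return secret

    s = secrets.randbelow(Q)
    shares = share(s, t=2, n=5)
    assert reconstruct(shares[:3]) == s    # any t+1 = 3 of the 5 shares suffice
    assert reconstruct(shares[2:]) == s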