
    Bounded-Collusion Attribute-Based Encryption from Minimal Assumptions

    Attribute-based encryption (ABE) enables encryption of messages under access policies so that only users whose attributes satisfy the policy can decrypt the ciphertext. In standard ABE, an arbitrary number of colluding users, each without an authorized attribute set, cannot decrypt the ciphertext. However, all existing ABE schemes rely on concrete cryptographic assumptions, such as the hardness of certain problems over bilinear maps or integer lattices. Furthermore, it is known that ABE cannot be constructed from generic assumptions such as public-key encryption using black-box techniques. In this work, we revisit the problem of constructing ABE that tolerates collusions of arbitrary but a priori bounded size. We present an ABE scheme secure against bounded collusions that requires only semantically secure public-key encryption. Our scheme achieves significant improvements in the size of the public parameters, secret keys, and ciphertexts over the previous construction of bounded-collusion ABE from minimal assumptions by Gorbunov et al. (CRYPTO 2012). We also obtain bounded-collusion symmetric-key ABE (which requires the secret key for encryption) by replacing the public-key encryption with symmetric-key encryption, which can be built from the minimal assumption of one-way functions.
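
    For context: the naive way to build ABE from plain encryption, one key per attribute with the message secret-shared across the policy's attributes, fails exactly because of collusion, which is what makes even the bounded-collusion setting nontrivial. A minimal sketch of that strawman and its collusion break (a toy XOR pad stands in for any semantically secure encryption; all names are hypothetical):

```python
import secrets

def setup(attributes):
    """One key per attribute; a shared random pad plays the role of a
    per-attribute PKE key pair in this stand-in."""
    return {a: secrets.token_bytes(32) for a in attributes}

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def keygen(master, user_attrs):
    """A user's secret key is just the keys for the attributes they hold."""
    return {a: master[a] for a in user_attrs}

def encrypt(master, policy_attrs, msg32):
    """AND policy: secret-share the message, one share per required attribute."""
    shares = [secrets.token_bytes(32) for _ in policy_attrs[:-1]]
    last = msg32
    for s in shares:
        last = xor(last, s)
    shares.append(last)  # XOR of all shares recovers msg32
    return [(a, xor(s, master[a])) for a, s in zip(policy_attrs, shares)]

def decrypt(user_key, ciphertext):
    msg = bytes(32)
    for a, c in ciphertext:
        msg = xor(msg, xor(c, user_key[a]))  # KeyError if attribute missing
    return msg

master = setup(["doctor", "cardiology"])
ct = encrypt(master, ["doctor", "cardiology"], b"m" * 32)
alice = keygen(master, ["doctor", "cardiology"])
assert decrypt(alice, ct) == b"m" * 32
# Collusion break: Bob holds {"doctor"}, Carol holds {"cardiology"}; neither
# satisfies the policy alone, but pooling their keys decrypts. This is the
# attack that bounded-collusion ABE must resist.
bob, carol = keygen(master, ["doctor"]), keygen(master, ["cardiology"])
assert decrypt({**bob, **carol}, ct) == b"m" * 32
```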

    Black-box use of One-way Functions is Useless for Optimal Fair Coin-Tossing

    A two-party fair coin-tossing protocol guarantees output delivery to the honest party even when the other party aborts during the protocol execution. Cleve (STOC--1986) demonstrated that a computationally bounded fail-stop adversary could alter the output distribution of the honest party by (roughly) $1/r$ (in the statistical distance) in an $r$-message coin-tossing protocol. An optimal fair coin-tossing protocol ensures that no adversary can alter the output distribution beyond $1/r$. In a seminal result, Moran, Naor, and Segev (TCC--2009) constructed the first optimal fair coin-tossing protocol using (unfair) oblivious transfer protocols. Whether the existence of oblivious transfer protocols is a necessary hardness-of-computation assumption for optimal fair coin-tossing remains among the most fundamental open problems in theoretical cryptography. The results of Impagliazzo and Luby (FOCS--1989) and Cleve and Impagliazzo (1993) prove that optimal fair coin-tossing implies the existence of one-way functions, a significantly weaker hardness-of-computation assumption than the existence of secure oblivious transfer protocols. However, the sufficiency of the existence of one-way functions is not known. Towards this research endeavor, our work proves a black-box separation of optimal fair coin-tossing from the existence of one-way functions. That is, the black-box use of one-way functions cannot enable optimal fair coin-tossing. Following the standard Impagliazzo and Rudich (STOC--1989) approach to proving black-box separations, our work considers any $r$-message fair coin-tossing protocol in the random oracle model where the parties have unbounded computational power. We demonstrate a fail-stop attack strategy for one of the parties to alter the honest party's output distribution by $1/\sqrt{r}$ by making polynomially many additional queries to the random oracle. As a consequence, our result proves that the $r$-message coin-tossing protocol of Blum (COMPCON--1982) and Cleve (STOC--1986), which uses one-way functions in a black-box manner, is the best possible protocol, because an adversary cannot change the honest party's output distribution by more than $1/\sqrt{r}$. Several previous works, for example, Dachman-Soled, Lindell, Mahmoody, and Malkin (TCC--2011), Haitner, Omri, and Zarosim (TCC--2013), and Dachman-Soled, Mahmoody, and Malkin (TCC--2014), made partial progress on proving this black-box separation assuming some restrictions on the coin-tossing protocol. Our work diverges significantly from these previous approaches to prove this black-box separation in its full generality. The starting point is the recently introduced potential-based inductive proof technique for demonstrating large gaps in martingales in the information-theoretic plain model. Our technical contribution lies in identifying a global invariant of communication protocols in the random oracle model that enables the extension of this technique to the random oracle model.
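
    For context, the Blum-style protocol referenced above follows a three-message commit-reveal template. A minimal sketch, with SHA-256 standing in for the random oracle as the commitment (toy code, not the paper's construction):

```python
import hashlib, secrets

def commit(bit: int):
    r = secrets.token_bytes(32)                    # commitment randomness
    c = hashlib.sha256(bytes([bit]) + r).digest()  # hash-based commitment
    return c, r

def open_ok(c, bit, r):
    return hashlib.sha256(bytes([bit]) + r).digest() == c

# Message 1: Alice commits to her bit.
a = secrets.randbelow(2)
c, rand = commit(a)
# Message 2: Bob replies with his bit in the clear.
b = secrets.randbelow(2)
# Message 3: Alice opens; the coin is the XOR of the two bits.
assert open_ok(c, a, rand)
coin = a ^ b
# If a party aborts after seeing enough of the transcript, the other party
# must still output a coin -- that is the fail-stop setting in which the
# 1/sqrt(r) bias bound above applies.
```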

    The Bottleneck Complexity of Secure Multiparty Computation

    In this work, we initiate the study of bottleneck complexity as a new communication-efficiency measure for secure multiparty computation (MPC). Roughly, the bottleneck complexity of an MPC protocol is defined as the maximum communication required by any party within the protocol execution. We observe that, even without security, bottleneck complexity is an interesting measure of communication complexity for (distributed) functions and propose it as a fundamental area to explore. While achieving O(n) bottleneck complexity (where n is the number of parties) is straightforward, we show that: (1) achieving sublinear bottleneck complexity is not always possible, even when no security is required; (2) on the other hand, several useful classes of functions do have o(n) bottleneck complexity when no security is required. Our main positive result is a compiler that transforms any (possibly insecure) efficient protocol with a fixed communication pattern for computing any functionality into a secure MPC protocol while preserving the bottleneck complexity of the underlying protocol (up to security-parameter overhead). Given our compiler, an efficient protocol for any function f with sublinear bottleneck complexity can be transformed into an MPC protocol for f with the same bottleneck complexity. Along the way, we build cryptographic primitives - incremental fully homomorphic encryption, succinct non-interactive arguments of knowledge with an ID-based simulation-extractability property, and verifiable protocol execution - that may be of independent interest.
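
    To see why bottleneck complexity differs from total communication, consider summing n inputs: a chain gives every party constant communication, while a star concentrates linear communication at the center. An illustrative sketch (hypothetical toy, not from the paper):

```python
def chain_sum(inputs):
    """Pass a running sum down a chain: each party sends/receives O(1)."""
    acc, per_party = 0, []
    for x in inputs:
        acc += x              # receive running sum, add own input, forward
        per_party.append(2)   # one message in, one message out
    return acc, max(per_party)

def star_sum(inputs):
    """Everyone sends to party 0. Total communication is the same as the
    chain, but the bottleneck (the center's load) grows linearly with n."""
    return sum(inputs), len(inputs) - 1

total, bottleneck = chain_sum([3, 1, 4, 1, 5])
assert total == 14 and bottleneck == 2   # constant, independent of n
total, bottleneck = star_sum([3, 1, 4, 1, 5])
assert total == 14 and bottleneck == 4   # linear in n
```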

    A Tight Transformation between HILL and Metric Conditional Pseudoentropy

    HILL Entropy and Metric Entropy are generalizations of the information-theoretic notion of min-entropy to the realistic setting where adversaries are computationally bounded. The notion of HILL Entropy appeared in the breakthrough construction of a PRG from any one-way function (Håstad et al.), and has become the most important and most widely used variant of computational entropy. In turn, Metric Entropy, defined as a relaxation of HILL Entropy, has proven to be much easier to handle, in particular in the context of computational generalizations of the Green-Tao-Ziegler Dense Model Theorem, which find applications in leakage-resilient cryptography, memory delegation, and deterministic encryption. Fortunately, Metric Entropy can be converted, with some loss in quality, to HILL Entropy, as shown by Barak, Shaltiel and Wigderson. In this paper we improve their result, reducing the loss in quality of entropy. Our bound is tight and, interestingly, independent of the size of the probability space. As an application, we derive the computational dense model theorem with the best possible parameters. Our approach is based on the theory of convex approximation in $L^p$-spaces.
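
    For orientation, the standard definitions these notions build on (a sketch of the textbook formulations, not quoted from the paper): min-entropy is
\[
\mathbf{H}_\infty(X) \;=\; -\log \max_x \Pr[X = x],
\]
    $X$ has HILL entropy at least $k$ against size-$s$, advantage-$\epsilon$ distinguishers if
\[
\exists\, Y \text{ with } \mathbf{H}_\infty(Y) \ge k \ \ \forall D \in \mathcal{D}_s:\ \bigl|\mathbb{E}[D(X)] - \mathbb{E}[D(Y)]\bigr| \le \epsilon,
\]
    and Metric entropy swaps the quantifiers:
\[
\forall D \in \mathcal{D}_s\ \ \exists\, Y \text{ with } \mathbf{H}_\infty(Y) \ge k:\ \bigl|\mathbb{E}[D(X)] - \mathbb{E}[D(Y)]\bigr| \le \epsilon.
\]
    The quantifier swap is what makes Metric entropy easier to establish; the Metric-to-HILL conversion quantifies the loss incurred by swapping them back.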

    On Publicly Verifiable Delegation From Standard Assumptions

    We construct a publicly verifiable non-interactive delegation scheme for log-space uniform bounded-depth computations in the common reference string (CRS) model, where the CRS is long (as long as the time it takes to do the computation). The soundness of our scheme relies on the assumption that there exists a group with a bilinear map such that, given group elements $g, h, h^t, h^{t^2}$, it is hard to output $g^a, g^b, g^c$ and $h^a, h^b, h^c$ such that $a \cdot t^2 + b \cdot t + c = 0$ but $a, b, c$ are not all zero. Previously, such a result was only known under knowledge assumptions (or in the random oracle model), or under non-standard assumptions related to obfuscation or zero-testable homomorphic encryption. We obtain our result by converting the interactive delegation scheme of Goldwasser, Kalai and Rothblum (J. ACM 2015) into a publicly verifiable non-interactive one. As a stepping stone, we give a publicly verifiable non-interactive version of the sum-check protocol of Lund, Fortnow, Karloff and Nisan (J. ACM 1992).
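
    The sum-check protocol mentioned above lets a verifier check a claimed value of $\sum_{b \in \{0,1\}^n} g(b)$ using only one evaluation of $g$ at a random point. A compact sketch for a multilinear $g$, so each round's univariate polynomial is determined by its values at 0 and 1 (the polynomial below is a hypothetical example; the paper makes this interactive protocol non-interactive and publicly verifiable):

```python
import secrets

P = 2**61 - 1  # a Mersenne prime modulus

def g(x):  # example multilinear polynomial in 3 variables
    x1, x2, x3 = x
    return (x1 * x2 + 2 * x2 * x3 + 3 * x3 + 1) % P

def brute_sum(f, n, fixed=()):
    """Sum f over all Boolean settings of the remaining n variables."""
    if n == 0:
        return f(fixed) % P
    return (brute_sum(f, n - 1, fixed + (0,)) +
            brute_sum(f, n - 1, fixed + (1,))) % P

n = 3
claim = brute_sum(g, n)  # prover's claimed value of the full sum
prefix = ()
for _ in range(n):
    # Prover: the round polynomial g_i(X), of degree <= 1, is fixed by its
    # evaluations at X = 0 and X = 1.
    rest = n - len(prefix) - 1
    e0 = brute_sum(g, rest, prefix + (0,))
    e1 = brute_sum(g, rest, prefix + (1,))
    assert (e0 + e1) % P == claim         # Verifier: matches prior claim
    r = secrets.randbelow(P)              # Verifier: random challenge
    claim = (e0 * (1 - r) + e1 * r) % P   # g_i(r) by linear interpolation
    prefix += (r,)
assert g(prefix) % P == claim  # Verifier: one evaluation at a random point
```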

    Concurrent Non-Malleable Commitments (and More) in 3 Rounds

    The round complexity of commitment schemes secure against man-in-the-middle attacks has been the focus of extensive research for about 25 years. The recent breakthrough of Goyal et al. [22] showed that 3 rounds are sufficient for (one-left, one-right) non-malleable commitments. This result matches a lower bound of [41]. This state of affairs still leaves open the intriguing problem of constructing 3-round concurrent non-malleable commitment schemes. In this paper we solve the above open problem by showing how to transform any 3-round (one-left, one-right) non-malleable commitment scheme (with some extractability property) into a 3-round concurrent non-malleable commitment scheme. Our transform makes use of complexity leveraging and, when instantiated with the construction of [22], gives a 3-round concurrent non-malleable commitment scheme from one-way permutations secure w.r.t. subexponential-time adversaries. We also show a 3-round argument of knowledge and a 3-round identification scheme secure against concurrent man-in-the-middle attacks.
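
    As a concrete illustration of what non-malleability rules out, recall the textbook fact (not this paper's construction) that Pedersen commitments are malleable: a man-in-the-middle can shift the committed value without ever learning it. A toy sketch:

```python
import secrets

p, q = 1019, 509   # p = 2q + 1, both prime (toy parameters)
g, h = 4, 9        # elements of the order-q subgroup

def commit(m, r):
    """Pedersen commitment: hiding and binding, but malleable."""
    return (pow(g, m, p) * pow(h, r, p)) % p

m, r = 42, secrets.randbelow(q)
c = commit(m, r)
# The man-in-the-middle mauls the commitment without opening it:
c_mauled = (c * g) % p
assert c_mauled == commit(m + 1, r)   # a valid commitment to m + 1
```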

    Publicly Verifiable Proofs from Blockchains

    A proof system is publicly verifiable if anyone, by looking at the transcript of the proof, can be convinced that the corresponding theorem is true. Public verifiability is important in many applications, since it allows a proof to be computed only once while convincing an unlimited number of verifiers. Popular interactive proof systems (e.g., $\Sigma$-protocols) protect the witness through various properties (e.g., witness indistinguishability (WI) and zero knowledge (ZK)), but typically they are not publicly verifiable, since such proofs are convincing only for those verifiers who contributed to the transcripts of the proofs. The only known publicly verifiable proof systems rely on a non-interactive (NI) prover and require either trust assumptions (e.g., NIZK in the CRS model), heuristic assumptions (e.g., NIZK in the random oracle model), specific number-theoretic assumptions on bilinear groups, or obfuscation assumptions (obtaining NIWI with no setup). In this work we construct publicly verifiable witness-indistinguishable proof systems from any $\Sigma$-protocol, based only on the existence of a very generic blockchain. The novelty of our approach is in enforcing non-interactive verification (thus guaranteeing public verifiability) while allowing the prover to be interactive and talk to the blockchain (this allows us to circumvent the need for strong assumptions and setups). This opens interesting directions for the design of cryptographic protocols that leverage blockchain technology.
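
    The three-move commit-challenge-response shape of a $\Sigma$-protocol is exactly why its transcript convinces only the verifier who chose the challenge. A minimal sketch of Schnorr's protocol, a canonical $\Sigma$-protocol (toy parameters, not the paper's construction):

```python
import secrets

p, q = 1019, 509           # p = 2q + 1, both prime (toy group)
g = 4                      # generator of the order-q subgroup
x = secrets.randbelow(q)   # prover's witness
h = pow(g, x, p)           # public statement: h = g^x

# Move 1 (prover -> verifier): commitment
k = secrets.randbelow(q)
a = pow(g, k, p)
# Move 2 (verifier -> prover): random challenge
e = secrets.randbelow(q)
# Move 3 (prover -> verifier): response
z = (k + e * x) % q
# Verification: g^z == a * h^e
assert pow(g, z, p) == (a * pow(h, e, p)) % p
# The transcript (a, e, z) convinces only the verifier who chose e itself;
# making such proofs convincing to everyone is the public-verifiability
# problem the paper addresses via a blockchain.
```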

    Reusable Garbled Deterministic Finite Automata from Learning With Errors


    Verifiable Random Functions from Non-Interactive Witness-Indistinguishable Proofs

    Verifiable random functions (VRFs) are pseudorandom functions where the owner of the seed, in addition to computing the function's value $y$ at any point $x$, can also generate a non-interactive proof $\pi$ that $y$ is correct, without compromising pseudorandomness at other points. Being a natural primitive with a wide range of applications, considerable efforts have been directed towards the construction of such VRFs. While these efforts have resulted in a variety of algebraic constructions (from bilinear maps or the RSA problem), the relation between VRFs and other general primitives is still not well understood. We present new constructions of VRFs from general primitives, the main one being non-interactive witness-indistinguishable proofs (NIWIs). This includes:
    - A selectively-secure VRF assuming NIWIs and non-interactive commitments. As usual, the VRF can be made adaptively secure assuming subexponential hardness of the underlying primitives.
    - An adaptively-secure VRF assuming (polynomially-hard) NIWIs, non-interactive commitments, and (single-key) constrained pseudorandom functions for a restricted class of constraints.
    The above primitives can be instantiated under various standard assumptions, which yields corresponding VRF instantiations under different assumptions than were known so far. One notable example is a non-uniform construction of VRFs from subexponentially-hard trapdoor permutations, or more generally, from verifiable pseudorandom generators (the construction can be made uniform under a standard derandomization assumption). This partially answers an open question of Dwork and Naor (FOCS '00). The construction and its analysis are quite simple; both draw from ideas commonly used in the context of indistinguishability obfuscation.
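
    The VRF syntax described above is a keyed function plus a publicly checkable proof. A toy sketch of the Gen/Eval/Verify interface in the RSA-based style the abstract mentions (tiny, insecure parameters, purely to show the syntax; not this paper's construction):

```python
import hashlib

# Gen: a tiny RSA key (insecure -- illustration only)
p_, q_ = 10007, 10009
N = p_ * q_
e = 65537                             # public key
d = pow(e, -1, (p_ - 1) * (q_ - 1))   # secret key (Python 3.8+)

def H(x: bytes) -> int:               # hash into Z_N, random-oracle style
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % N

def eval_vrf(sk, x):
    pi = pow(H(x), sk, N)             # unique RSA "signature" on x: the proof
    y = hashlib.sha256(pi.to_bytes(16, "big")).digest()  # the VRF value
    return y, pi

def verify(pk, x, y, pi):
    return (pow(pi, pk, N) == H(x) and
            y == hashlib.sha256(pi.to_bytes(16, "big")).digest())

y, pi = eval_vrf(d, b"input")
assert verify(e, b"input", y, pi)     # anyone holding pk can check y
```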