
    On the Necessity of Collapsing for Post-Quantum and Quantum Commitments

    Collapse binding and collapsing were proposed by Unruh (Eurocrypt 2016) as post-quantum strengthenings of computational binding and collision resistance, respectively. These notions have been very successful in facilitating the "lifting" of classical security proofs to the quantum setting. A basic and natural question remains unanswered, however: are they the weakest notions that suffice for such lifting? In this work we answer this question in the affirmative by giving a classical commit-and-open protocol which is post-quantum secure if and only if the commitment scheme (resp. hash function) used is collapse binding (resp. collapsing). We also generalise the definition of collapse binding to quantum commitment schemes, and prove that the equivalence carries over when the sender in this commit-and-open protocol communicates quantum information. As a consequence, we establish that a variety of "weak" binding notions (sum binding, CDMS binding and unequivocality) are in fact equivalent to collapse binding, both for post-quantum and quantum commitments. Finally, we prove a "win-win" result, showing that a post-quantum computationally binding commitment scheme that is not collapse binding can be used to build an equivocal commitment scheme (which can, in turn, be used to build one-shot signatures and other useful quantum primitives). This strengthens a result due to Zhandry (Eurocrypt 2019) showing that the same object yields quantum lightning.
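    As a point of reference, the classical commit-and-open structure discussed above can be sketched as follows. This is a minimal illustration using SHA-256 as a stand-in hash-based commitment; the helper names and parameters are assumptions for the example, not taken from the paper.

```python
import hashlib
import os
import secrets

def commit(msg: bytes) -> tuple[bytes, bytes]:
    """Hash-based commitment: c = H(r || msg) with a fresh random opening r."""
    r = os.urandom(32)
    return hashlib.sha256(r + msg).digest(), r

def verify(c: bytes, msg: bytes, r: bytes) -> bool:
    """Check that (msg, r) opens the commitment c."""
    return hashlib.sha256(r + msg).digest() == c

# Commit phase: the sender commits to every message up front.
msgs = [b"m0", b"m1", b"m2", b"m3"]
commitments, openings = zip(*(commit(m) for m in msgs))

# Challenge phase: the receiver asks to open a random subset of positions.
challenge = {i for i in range(len(msgs)) if secrets.randbelow(2)}

# Open phase: the sender reveals only the challenged positions and the
# receiver checks them against the commitments received earlier.
assert all(verify(commitments[i], msgs[i], openings[i]) for i in challenge)
```

    The paper's result can be read as saying that, against quantum adversaries, the soundness of protocols with this shape hinges exactly on the commitment (or hash) being collapse binding (or collapsing).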

    Unforgeable Quantum Encryption

    We study the problem of encrypting and authenticating quantum data in the presence of adversaries making adaptive chosen plaintext and chosen ciphertext queries. Classically, security games use string copying and comparison to detect adversarial cheating in such scenarios. Quantumly, this approach would violate no-cloning. We develop new techniques to overcome this problem: we use entanglement to detect cheating, and rely on recent results for characterizing quantum encryption schemes. We give definitions for (i) ciphertext unforgeability, (ii) indistinguishability under adaptive chosen-ciphertext attack, and (iii) authenticated encryption. The restriction of each definition to the classical setting is at least as strong as the corresponding classical notion: (i) implies INT-CTXT, (ii) implies IND-CCA2, and (iii) implies AE. All of our new notions also imply QIND-CPA privacy. Combining one-time authentication and classical pseudorandomness, we construct schemes for each of these new quantum security notions, and provide several separation examples. Along the way, we also give a new definition of one-time quantum authentication which, unlike all previous approaches, authenticates ciphertexts rather than plaintexts. Comment: 22+2 pages, 1 figure. v3: error in the definition of QIND-CCA2 fixed, some proofs related to QIND-CCA2 clarified.

    Algebraic Restriction Codes and Their Applications

    Consider the following problem: You have a device that is supposed to compute a linear combination of its inputs, which are taken from some finite field. However, the device may be faulty and compute arbitrary functions of its inputs. Is it possible to encode the inputs in such a way that only linear functions can be evaluated over the encodings? That is, learning an arbitrary function of the encodings should not reveal more information about the inputs than a linear combination. In this work, we introduce the notion of algebraic restriction codes (AR codes), which constrain adversaries who might compute any function to computing a linear function. Our main result is an information-theoretic construction of AR codes that restrict any class of functions with a bounded number of output bits to linear functions. Our construction relies on a seed which is not provided to the adversary. While interesting and natural on its own, we show an application of this notion in cryptography. In particular, we show that AR codes lead to the first construction of rate-1 oblivious transfer with statistical sender security from the Decisional Diffie-Hellman assumption, and the first-ever such construction that makes black-box use of cryptography. Previously, such protocols were known only from the LWE assumption, using non-black-box cryptographic techniques. We expect our new notion of AR codes to find further applications, e.g., in the context of non-malleability, in the future.
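    To make the problem setup concrete (this illustrates only the setting, not the AR-code construction itself), the sketch below contrasts the honest linear functionality with a faulty device that evaluates an arbitrary non-linear function; the field choice and names are illustrative assumptions.

```python
# Setting from the abstract: a device is supposed to output a fixed linear
# combination of its inputs over a finite field, but a faulty or malicious
# device may instead compute an arbitrary function of those inputs.
P = 2**61 - 1  # an illustrative prime field (Mersenne prime)

def honest_device(coeffs, inputs):
    """What the device is supposed to compute: a linear combination over GF(P)."""
    return sum(c * x for c, x in zip(coeffs, inputs)) % P

def faulty_device(coeffs, inputs):
    """A faulty device may compute anything, e.g. leak a non-linear value."""
    return (inputs[0] * inputs[1]) % P

coeffs = [3, 1, 4]
inputs = [12345, 67890, 424242]
print(honest_device(coeffs, inputs))   # the intended linear output
print(faulty_device(coeffs, inputs))   # leaks more than any linear map could

# An AR code would encode `inputs` under a seed hidden from the device, so that
# even an arbitrary bounded-output function of the encodings reveals no more
# about the inputs than some linear combination does.
```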

    Circuit-ABE from LWE: Unbounded Attributes and Semi-adaptive Security

    We construct an LWE-based key-policy attribute-based encryption (ABE) scheme that supports attributes of unbounded polynomial length. Namely, the size of the public parameters is a fixed polynomial in the security parameter and a depth bound, and with these fixed-length parameters, one can encrypt attributes of arbitrary length. Similarly, any polynomial-size circuit that adheres to the depth bound can be used as the policy circuit regardless of its input length (recall that a depth-d circuit can have as many as 2^d inputs). This is in contrast to previous LWE-based schemes, where the length of the public parameters has to grow linearly with the maximal attribute length. We prove that our scheme is semi-adaptively secure, namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the “complexity leveraging” technique is not applicable for unbounded attributes.) We believe that our techniques are of interest at least as much as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques, which program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to delay this programming. In a nutshell, the new tools include a way to generate an a-priori unbounded sequence of LWE matrices and to have fine-grained control over which trapdoor is embedded in each and every one of them, all with a succinct representation. National Science Foundation (U.S.) (Award CNS-1350619); National Science Foundation (U.S.) (Grant CNS-1413964); United States-Israel Binational Science Foundation (Grant 712307).

    Replacing Probability Distributions in Security Games via Hellinger Distance

    Security of cryptographic primitives is usually proved by assuming "ideal" probability distributions. In real-world systems, we need to replace them with approximated "real" distributions without losing the security level. We demonstrate that the Hellinger distance is useful for this problem, whereas the statistical distance is the one mainly used in the cryptographic literature. First, we show that for preserving λ-bit security of a given security game, closeness of 2^{-λ/2} to the ideal distribution is sufficient for the Hellinger distance, whereas 2^{-λ} is generally required for the statistical distance. The result can be applied to both search and decision primitives through the bit-security framework of Micciancio and Walter (Eurocrypt 2018). We also show that the Hellinger distance gives a tighter evaluation of closeness than the max-log distance when the distance is small. Finally, we show that the leftover hash lemma can be strengthened to the Hellinger distance. Namely, a universal family of hash functions gives a strong randomness extractor with optimal entropy loss for the Hellinger distance. Based on these results, a λ-bit entropy loss in randomness extractors is sufficient for preserving λ-bit security. The current understanding based on the statistical distance is that a 2λ-bit entropy loss is necessary.
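    The two distances being compared are easy to state concretely. The following sketch computes the statistical (total variation) and Hellinger distances between an ideal distribution and a slightly perturbed "real" one; the example distributions and the perturbation size are illustrative assumptions.

```python
import math

def statistical_distance(p, q):
    """Total variation distance: (1/2) * sum_x |p(x) - q(x)|."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def hellinger_distance(p, q):
    """Hellinger distance: sqrt( (1/2) * sum_x (sqrt(p(x)) - sqrt(q(x)))^2 )."""
    support = set(p) | set(q)
    s = sum((math.sqrt(p.get(x, 0.0)) - math.sqrt(q.get(x, 0.0))) ** 2
            for x in support)
    return math.sqrt(s / 2)

# "Ideal" uniform distribution over 8 outcomes vs. a slightly biased "real" one.
ideal = {x: 1 / 8 for x in range(8)}
eps = 2 ** -20
real = dict(ideal)
real[0] += eps
real[1] -= eps

print(statistical_distance(ideal, real))  # equals eps, i.e. about 2^-20
print(hellinger_distance(ideal, real))    # about sqrt(2) * 2^-20 for this example
```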

    The Bottleneck Complexity of Secure Multiparty Computation

    In this work, we initiate the study of bottleneck complexity as a new communication-efficiency measure for secure multiparty computation (MPC). Roughly, the bottleneck complexity of an MPC protocol is defined as the maximum communication complexity required by any party within the protocol execution. We observe that even without security, bottleneck communication complexity is an interesting measure of communication complexity for (distributed) functions and propose it as a fundamental area to explore. While achieving O(n) bottleneck complexity (where n is the number of parties) is straightforward, we show that: (1) achieving sublinear bottleneck complexity is not always possible, even when no security is required; (2) on the other hand, several useful classes of functions do have o(n) bottleneck complexity when no security is required. Our main positive result is a compiler that transforms any (possibly insecure) efficient protocol with a fixed communication pattern for computing any functionality into a secure MPC protocol while preserving the bottleneck complexity of the underlying protocol (up to security-parameter overhead). Given our compiler, an efficient protocol for any function f with sublinear bottleneck complexity can be transformed into an MPC protocol for f with the same bottleneck complexity. Along the way, we build cryptographic primitives - incremental fully homomorphic encryption, succinct non-interactive arguments of knowledge with an ID-based simulation-extractability property, and verifiable protocol execution - that may be of independent interest.
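    To make the measure concrete, here is a small sketch that computes the bottleneck complexity of a protocol transcript. It assumes each party is charged for both the bits it sends and the bits it receives; that accounting choice and the toy topologies are assumptions for illustration, not taken from the paper.

```python
from collections import defaultdict

def bottleneck_complexity(transcript):
    """Maximum number of bits communicated (sent plus received) by any party.
    `transcript` is a list of (sender, receiver, nbits) triples."""
    bits = defaultdict(int)
    for sender, receiver, nbits in transcript:
        bits[sender] += nbits
        bits[receiver] += nbits
    return max(bits.values())

n = 8

# Star topology: party 0 exchanges one bit with each of the other n-1 parties.
star = [(i, 0, 1) for i in range(1, n)]
print(bottleneck_complexity(star))   # n - 1 = 7: the centre party is the bottleneck

# Chain topology: party i forwards one bit to party i+1.
chain = [(i, i + 1, 1) for i in range(n - 1)]
print(bottleneck_complexity(chain))  # 2: each internal party sends once and receives once
```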

    Non-Malleable Codes for Small-Depth Circuits

    We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by small-depth circuits. For constant-depth circuits of polynomial size (i.e. AC^0 tampering functions), our codes have codeword length n = k^{1+o(1)} for a k-bit message. This is an exponential improvement over the previous best construction due to Chattopadhyay and Li (STOC 2017), which had codeword length 2^{O(√k)}. Our construction remains efficient for circuit depths as large as Θ(log(n)/log log(n)) (indeed, our codeword length remains n ≤ k^{1+Δ}), and extending our result beyond this would require separating P from NC^1. We obtain our codes via a new efficient non-malleable reduction from small-depth tampering to split-state tampering. A novel aspect of our work is the incorporation of techniques from unconditional derandomization into the framework of non-malleable reductions. In particular, a key ingredient in our analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC 2013), a derandomization of the influential switching lemma from circuit complexity; the randomness-efficiency of this switching lemma translates into the rate-efficiency of our codes via our non-malleable reduction. Comment: 26 pages, 4 figures.

    Practical Witness Encryption for Algebraic Languages Or How to Encrypt Under Groth-Sahai Proofs

    Witness encryption (WE) is a recent and powerful encryption paradigm which allows one to encrypt a message under the description of a hard problem (a word in an NP language), so that anyone who knows a solution to this problem (a witness) is able to efficiently decrypt the ciphertext. Recent work focuses on constructing WE for NP-complete languages (and thus for all of NP). While this rich expressiveness allows flexibility w.r.t. applications, it makes existing instantiations impractical. It is therefore interesting to study practical variants of WE schemes for subsets of NP that are still expressive enough for many cryptographic applications. We show that such WE schemes can be generically constructed from smooth projective hash functions (SPHFs). In terms of concrete instantiations of SPHFs (and thus WE), we target languages of statements proven in the popular Groth-Sahai (GS) non-interactive witness-indistinguishable/zero-knowledge proof framework. This yields a novel way to encrypt: encryption is with respect to a GS proof, and efficient decryption can only be performed by the respective prover. The resulting constructions are entirely practical. To illustrate our techniques, we apply them in the context of privacy-preserving exchange of information.
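    The generic SPHF-to-WE blueprint can be illustrated on the toy language of DDH tuples (g^r, h^r) rather than the Groth-Sahai statements targeted in the paper: the encryptor masks the message with the SPHF hash of the word, and only a prover holding the witness r can recompute that hash from the projection key. The group parameters below are far too small to be secure, and the helper names (kdf, encrypt, decrypt) are illustrative assumptions.

```python
import hashlib

# Toy group parameters (NOT secure): the subgroup of order q = 11 inside Z_23^*.
p, q = 23, 11
g, h = 4, 9          # two generators of the order-q subgroup

def kdf(k: int, length: int) -> bytes:
    """Derive a keystream from a group element (illustrative KDF choice)."""
    return hashlib.sha256(str(k).encode()).digest()[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(word, msg: bytes):
    """Witness-encrypt msg under a word (u, v) claimed to lie in L = {(g^r, h^r)}."""
    u, v = word
    alpha, beta = 3, 5                                # hashing key hk (fresh and random in practice)
    hp = (pow(g, alpha, p) * pow(h, beta, p)) % p     # projection key
    K = (pow(u, alpha, p) * pow(v, beta, p)) % p      # Hash(hk, word)
    return hp, xor(msg, kdf(K, len(msg)))

def decrypt(witness, ct):
    """A prover holding witness r recomputes the hash as hp^r and unmasks."""
    hp, masked = ct
    K = pow(hp, witness, p)                           # ProjHash(hp, word, r)
    return xor(masked, kdf(K, len(masked)))

r = 7
word = (pow(g, r, p), pow(h, r, p))   # a word in the DDH language, with witness r
ct = encrypt(word, b"secret")
assert decrypt(r, ct) == b"secret"
```

    On words outside the language, smoothness of the SPHF makes the hash value statistically close to uniform given the projection key, which is what hides the message from parties without a witness.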