
    Faster Algorithms for Computing the Hairpin Completion Distance and Minimum Ancestor


    Homomorphic encryption and some black box attacks

    This paper is a compressed summary of some principal definitions and concepts in the approach to black box algebra being developed by the authors. We suggest that black box algebra could be useful in the cryptanalysis of homomorphic encryption schemes, and that homomorphic encryption is an area of research where cryptography and black box algebra may benefit from an exchange of ideas.

    Generic-Group Delay Functions Require Hidden-Order Groups

    Despite the fundamental importance of delay functions, underlying both the classic notion of a time-lock puzzle and the more recent notion of a verifiable delay function, the only known delay function that offers both sufficient structure for realizing these two notions and a realistic level of practicality is the "iterated squaring" construction of Rivest, Shamir and Wagner. This construction, however, is based on rather strong assumptions in groups of hidden order, such as the RSA group (which requires a trusted setup) or the class group of an imaginary quadratic number field (which is still somewhat insufficiently explored from the cryptographic perspective). For more than two decades, the challenge of constructing delay functions in groups of known order, admitting a variety of well-studied instantiations, has eluded the cryptography community. In this work we prove that there are no constructions of generic-group delay functions in cyclic groups of known order: We show that for any delay function that does not exploit any particular property of the representation of the underlying group, there exists an attacker that completely breaks the function's sequentiality when given the group's order. As any time-lock puzzle and verifiable delay function give rise to a delay function, our result holds for these two notions as well, and explains the lack of success in resolving the above-mentioned long-standing challenge. Moreover, our result holds even if the underlying group is equipped with a d-linear map, for any constant d ≥ 2 (and even for super-constant values of d under certain conditions).
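
    To ground the discussion, here is a minimal Python sketch of the iterated-squaring construction referenced above: evaluating it honestly takes T sequential modular squarings, and the paper's result says no analogously sequential function can exist generically once the group's order is known. Parameter choices below are purely illustrative.

        def iterated_squaring(x: int, T: int, N: int) -> int:
            """Compute x^(2^T) mod N by T sequential squarings (RSW time-lock structure)."""
            y = x % N
            for _ in range(T):      # believed to require T sequential steps when the
                y = (y * y) % N     # group order (i.e., the factorization of N) is hidden
            return y

        # Example (toy-sized; real instantiations use a 2048-bit RSA modulus N):
        # iterated_squaring(5, 1_000_000, N)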

    Tight Reductions for Diffie-Hellman Variants in the Algebraic Group Model

    Fuchsbauer, Kiltz, and Loss (Crypto'18) gave a simple and clean definition of an algebraic group model (AGM) that lies in between the standard model and the generic group model (GGM). Specifically, an algebraic adversary is able to exploit group-specific structures as in the standard model, while the AGM still provides meaningful hardness results as in the GGM. As an application of the AGM, they show a tight computational equivalence between the computational Diffie-Hellman (CDH) assumption and the discrete logarithm (DL) assumption. For this purpose, they used the square Diffie-Hellman assumption as a bridge, i.e., they first proved the equivalence between the DL assumption and the square Diffie-Hellman assumption, and then used the known equivalence between the square Diffie-Hellman assumption and the CDH assumption. In this paper, we provide an alternative proof that directly shows the tight equivalence between the DL assumption and the CDH assumption. The crucial benefit of the direct reduction is that we can easily extend the approach to variants of the CDH assumption, e.g., the bilinear Diffie-Hellman assumption. Indeed, we show several tight computational equivalences and discuss the applicability of our techniques.
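
    As a reminder, the "known equivalence" between the square Diffie-Hellman and CDH assumptions mentioned above is the following standard calculation in a group of prime order q (a generic textbook argument, not an excerpt from the paper), writing SqDH(g^x) = g^{x^2}:

        \mathrm{SqDH}(g^{x}) = \mathrm{CDH}(g^{x}, g^{x}) = g^{x^{2}}, \qquad
        \mathrm{CDH}(g^{a}, g^{b})
          = \Bigl(\mathrm{SqDH}(g^{a+b}) \cdot \mathrm{SqDH}(g^{a})^{-1} \cdot \mathrm{SqDH}(g^{b})^{-1}\Bigr)^{2^{-1} \bmod q}
          = g^{\frac{(a+b)^{2} - a^{2} - b^{2}}{2}}
          = g^{ab},

    where g^{a+b} is computed as g^{a} \cdot g^{b} from the CDH instance.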

    Optimal Broadcast Encryption from LWE and Pairings in the Standard Model

    Broadcast encryption with optimal parameters was a long-standing problem, whose first solution was provided in an elegant work by Boneh, Waters and Zhandry [BWZ14]. However, this work relied on multilinear maps of logarithmic degree, which is not considered a standard assumption. Recently, Agrawal and Yamada [AY20] improved this state of affairs by providing the first construction of optimal broadcast encryption from bilinear maps and Learning With Errors (LWE). However, their proof of security was in the generic bilinear group model. In this work, we improve upon their result by providing a new construction and proof in the standard model. In more detail, we rely on the Learning With Errors (LWE) assumption and the Knowledge of OrthogonALity Assumption (KOALA) [BW19] on bilinear groups. Our construction combines three building blocks: a (computational) nearly linear secret sharing scheme with compact shares, which we construct from LWE; an inner-product functional encryption scheme with special properties, which is constructed from the bilinear Matrix Decisional Diffie-Hellman (MDDH) assumption; and a certain form of hyperplane obfuscation, which is constructed using the KOALA assumption. While similar to that of Agrawal and Yamada, our construction provides a new understanding of how to decompose the construction into simpler, modular building blocks with concrete and easy-to-understand security requirements for each one. We believe this sheds new light on the requirements for optimal broadcast encryption, which may lead to new constructions in the future.
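
    For intuition about the "linear" property of the first building block, the toy Python sketch below shows plain additive (hence linear) secret sharing over Z_q: share-wise sums reconstruct to the sum of the secrets. The paper's scheme is only nearly linear, is computational, and is built from LWE, none of which this toy example captures.

        import secrets

        q = 2**61 - 1  # illustrative prime modulus

        def share(s: int, n: int) -> list[int]:
            """Split s into n additive shares over Z_q."""
            parts = [secrets.randbelow(q) for _ in range(n - 1)]
            parts.append((s - sum(parts)) % q)
            return parts

        def reconstruct(parts: list[int]) -> int:
            return sum(parts) % q

        # Linearity: adding shares component-wise yields shares of the sum.
        a, b = share(5, 3), share(7, 3)
        assert reconstruct([(x + y) % q for x, y in zip(a, b)]) == 12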

    Indistinguishability Obfuscation: From Approximate to Exact

    We show general transformations from subexponentially-secure approximate indistinguishability obfuscation (IO), where the obfuscated circuit agrees with the original circuit on a 1/2+ϵ fraction of inputs on a certain samplable distribution, into exact indistinguishability obfuscation, where the obfuscated circuit and the original circuit agree on all inputs. As a step towards our results, which is of independent interest, we also obtain an approximate-to-exact transformation for functional encryption. At the core of our techniques is a method for “fooling” the obfuscator into giving us the correct answer, while preserving the indistinguishability-based security. This is achieved based on various types of secure computation protocols that can be obtained from different standard assumptions. Put together with the recent results of Canetti, Kalai and Paneth (TCC 2015), Pass and Shelat (TCC 2016), and Mahmoody, Mohammed and Nematihaji (TCC 2016), we show how to convert indistinguishability obfuscation schemes in various ideal models into exact obfuscation schemes in the plain model. National Science Foundation (U.S.) (Grant CNS-1350619); National Science Foundation (U.S.) (Grant CNS-1414119).

    Generically Speeding-Up Repeated Squaring is Equivalent to Factoring: Sharp Thresholds for All Generic-Ring Delay Functions

    Despite the fundamental importance of delay functions, repeated squaring in RSA groups (Rivest, Shamir and Wagner '96) is the main candidate offering both a useful structure and a realistic level of practicality. Somewhat unsatisfyingly, its sequentiality is provided directly by assumption (i.e., the function is assumed to be a delay function). We prove sharp thresholds on the sequentiality of all generic-ring delay functions relative to an RSA modulus based on the hardness of factoring in the standard model. In particular, we show that generically speeding-up repeated squaring (even with a preprocessing stage and any polynomial number of parallel processors) is equivalent to factoring. More generally, based on the (essential) hardness of factoring, we prove that any generic-ring function is in fact a delay function, admitting a sharp sequentiality threshold that is determined by our notion of sequentiality depth. Moreover, we show that generic-ring functions admit not only sharp sequentiality thresholds, but also sharp pseudorandomness thresholds.
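
    One direction of the equivalence has a well-known concrete form: whoever knows the factorization of the RSA modulus can shortcut the T sequential squarings by first reducing the exponent 2^T modulo the group order. The Python sketch below illustrates this standard trapdoor speed-up with toy parameters; it is not the paper's generic-ring argument.

        from math import gcd

        def slow_eval(x: int, T: int, N: int) -> int:
            """x^(2^T) mod N via T sequential squarings (no trapdoor)."""
            y = x % N
            for _ in range(T):
                y = (y * y) % N
            return y

        def fast_eval(x: int, T: int, p: int, q: int) -> int:
            """Knowing N = p*q, reduce 2^T modulo Carmichael's lambda(N) first."""
            N = p * q
            lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lambda(N) for distinct primes p, q
            e = pow(2, T, lam)                            # costs only O(log T) multiplications
            return pow(x, e, N)

        p, q = 1009, 1013                                 # toy primes; real moduli are 2048-bit
        assert slow_eval(5, 10_000, p * q) == fast_eval(5, 10_000, p, q)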

    Optimal Broadcast Encryption from Pairings and LWE

    Boneh, Waters and Zhandry (CRYPTO 2014) used multilinear maps to provide a solution to the long-standing problem of public-key broadcast encryption (BE) where all parameters in the system are small. In this work, we improve their result by providing a solution that uses only bilinear maps and Learning With Errors (LWE). Our scheme is fully collusion-resistant against any number of colluders, and can be generalized to an identity-based broadcast system with short parameters. Thus, we reclaim the problem of optimal broadcast encryption from the land of “Obfustopia”. Our main technical contribution is a ciphertext-policy attribute-based encryption (CP-ABE) scheme which achieves special efficiency properties: its ciphertext size, secret key size, and public key size are all independent of the size of the circuits supported by the scheme. We show that this special CP-ABE scheme implies BE with optimal parameters, but it may also be of independent interest. Our constructions rely on a novel interplay of bilinear maps and LWE, and are proven secure in the generic group model.
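
    The step from the special CP-ABE scheme to broadcast encryption follows the generic recipe sketched below (interface names are hypothetical; the actual construction is in the paper): encrypt under the membership-test policy for the recipient set S, and give user i a key for the single attribute i. Because the CP-ABE parameters do not grow with the size of that policy circuit, the broadcast parameters stay small.

        from typing import Any, Callable

        def be_keygen(cpabe_keygen: Callable[[Any, int], Any], msk: Any, user_index: int) -> Any:
            """User i's broadcast key is a CP-ABE key for the attribute 'i'."""
            return cpabe_keygen(msk, user_index)

        def be_encrypt(cpabe_enc: Callable[[Any, Callable[[int], bool], bytes], Any],
                       mpk: Any, recipients: set[int], msg: bytes) -> Any:
            """Encrypt under the policy circuit f_S(i) = 1 iff i is in the recipient set S."""
            policy = lambda i: i in recipients
            return cpabe_enc(mpk, policy, msg)

        # Decryption is plain CP-ABE decryption: user i's key satisfies f_S exactly when i is in S.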

    Computational Indistinguishability between Quantum States and Its Cryptographic Application

    We introduce a computational problem of distinguishing between two specific quantum states as a new cryptographic problem to design a quantum cryptographic scheme that is "secure" against any polynomial-time quantum adversary. Our problem, QSCDff, is to distinguish between two types of random coset states with a hidden permutation over the symmetric group of finite degree. This naturally generalizes the commonly-used distinction problem between two probability distributions in computational cryptography. As our major contribution, we show that QSCDff has three properties of cryptographic interest: (i) QSCDff has a trapdoor; (ii) the average-case hardness of QSCDff coincides with its worst-case hardness; and (iii) QSCDff is computationally at least as hard as the graph automorphism problem in the worst case. These cryptographic properties enable us to construct a quantum public-key cryptosystem, which is likely to withstand any chosen-plaintext attack by a polynomial-time quantum adversary. We further discuss a generalization of QSCDff, called QSCDcyc, and introduce a multi-bit encryption scheme that relies on similar cryptographic properties of QSCDcyc. Comment: 24 pages, 2 figures. We improved the presentation, and added more detailed proofs and follow-up of recent work.
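
    For orientation, the two families of coset states behind QSCDff have (up to the paper's precise restrictions on the hidden permutation \pi, which is its own inverse) roughly the form

        \rho_{\pi}^{+} = \frac{1}{2\,n!} \sum_{\sigma \in S_{n}} \bigl(\lvert\sigma\rangle + \lvert\sigma\pi\rangle\bigr)\bigl(\langle\sigma\rvert + \langle\sigma\pi\rvert\bigr),
        \qquad
        \rho_{\pi}^{-} = \frac{1}{2\,n!} \sum_{\sigma \in S_{n}} \bigl(\lvert\sigma\rangle - \lvert\sigma\pi\rangle\bigr)\bigl(\langle\sigma\rvert - \langle\sigma\pi\rvert\bigr),

    and QSCDff asks to distinguish \rho_{\pi}^{+} from \rho_{\pi}^{-} while \pi stays hidden; consult the paper for the exact parameterization.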

    New Constructions of Reusable Designated-Verifier NIZKs

    Non-interactive zero-knowledge arguments (NIZKs) for NP are an important cryptographic primitive, but we currently only have instantiations under a few specific assumptions. Notably, we are missing constructions from the learning with errors (LWE) assumption, the Diffie-Hellman (CDH/DDH) assumption, and the learning parity with noise (LPN) assumption. In this paper, we study a relaxation of NIZKs to the designated-verifier setting (DV-NIZK), where a trusted setup generates a common reference string together with a secret key for the verifier. We want reusable schemes, which allow the verifier to reuse the secret key to verify many different proofs, and soundness should hold even if the malicious prover learns whether various proofs are accepted or rejected. Such reusable DV-NIZKs were recently constructed under the CDH assumption, but it was open whether they can also be constructed under LWE or LPN. We also consider an extension of reusable DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK). In this setting, the only trusted setup consists of a common random string. However, there is also an additional untrusted setup in which the verifier chooses a public/secret key needed to generate/verify proofs, respectively. We require that zero-knowledge holds even if the public key is chosen maliciously by the verifier. Such reusable MDV-NIZKs were recently constructed under the "one-more CDH" assumption, but constructions under CDH/LWE/LPN remained open. In this work, we give new constructions of (reusable) DV-NIZKs and MDV-NIZKs using generic primitives that can be instantiated under CDH, LWE, or LPN.
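
    To make the syntax concrete, the Python sketch below gives a hypothetical interface for a reusable DV-NIZK (names are illustrative, not taken from the paper): a trusted setup outputs a common reference string together with a verifier secret key, and soundness must survive a prover that observes many accept/reject decisions made with that same key.

        from typing import Any, Tuple

        class ReusableDVNIZK:
            def setup(self) -> Tuple[Any, Any]:
                """Trusted setup: returns (crs, verifier_secret_key)."""
                raise NotImplementedError

            def prove(self, crs: Any, statement: Any, witness: Any) -> Any:
                """The prover uses only the public crs."""
                raise NotImplementedError

            def verify(self, crs: Any, verifier_secret_key: Any, statement: Any, proof: Any) -> bool:
                """The same verifier key is reused across many proofs; soundness must hold
                even if a malicious prover learns which proofs were accepted or rejected."""
                raise NotImplementedError

        # In the MDV-NIZK variant, setup outputs only a common random string; the verifier then
        # publishes its own (possibly maliciously chosen) public key, and zero-knowledge must
        # hold even with respect to such keys.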