4 research outputs found

    Hardness vs. (Very Little) Structure in Cryptography: A Multi-Prover Interactive Proofs Perspective

    The hardness of highly-structured computational problems gives rise to a variety of public-key primitives. On one hand, the structure exhibited by such problems underlies the basic functionality of public-key primitives, but on the other hand it may endanger public-key cryptography in its entirety via potential algorithmic advances. This subtle interplay initiated a fundamental line of research on whether structure is inherently necessary for cryptography, starting with Rudich's early work (PhD Thesis '88) and recently leading to that of Bitansky, Degwekar and Vaikuntanathan (CRYPTO '17). Identifying the structure of computational problems with their corresponding complexity classes, Bitansky et al. proved that a variety of public-key primitives (e.g., public-key encryption, oblivious transfer and even functional encryption) cannot be used in a black-box manner to construct either any hard language that has NP-verifiers both for the language itself and for its complement, or any hard language (and even promise problem) that has a statistical zero-knowledge proof system, corresponding to hardness in the structured classes NP ∩ coNP or SZK, respectively, from a black-box perspective. In this work we prove that the same variety of public-key primitives do not inherently require even very little structure in a black-box manner: We prove that they do not imply any hard language that has multi-prover interactive proof systems both for the language and for its complement, corresponding to hardness in the class MIP ∩ coMIP from a black-box perspective. Conceptually, given that MIP = NEXP, our result rules out languages with very little structure. Additionally, we prove a similar result for collision-resistant hash functions, and more generally for any cryptographic primitive that exists relative to a random oracle. Already the cases of languages that have IP or AM proof systems both for the language itself and for its complement, which we rule out as immediate corollaries, lead to intriguing insights. For the case of IP, where our result can be circumvented using non-black-box techniques, we reveal a gap between black-box and non-black-box techniques. For the case of AM, where circumventing our result via non-black-box techniques would be a major development, we both strengthen and unify the proofs of Bitansky et al. for languages that have NP-verifiers both for the language itself and for its complement and for languages that have a statistical zero-knowledge proof system.
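    To make the two-sided verifiability underlying these classes concrete, here is the standard textbook characterization of NP ∩ coNP (an illustrative definition under common conventions, not text from the paper): a language L is in NP ∩ coNP if there are polynomial-time verifiers V_yes and V_no and a polynomial p such that

    \[
    x \in L \iff \exists\, w \in \{0,1\}^{p(|x|)} : V_{\mathrm{yes}}(x, w) = 1,
    \qquad
    x \notin L \iff \exists\, w \in \{0,1\}^{p(|x|)} : V_{\mathrm{no}}(x, w) = 1.
    \]

    The class MIP ∩ coMIP relaxes both verifiers to multi-prover interactive proof systems; since MIP = NEXP, demanding only such proof systems is a far weaker structural requirement, which is what makes the separation above stronger.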

    Sum-of-squares proofs and the quest toward optimal algorithms

    In order to obtain the best-known guarantees, algorithms are traditionally tailored to the particular problem we want to solve. Two recent developments, the Unique Games Conjecture (UGC) and the Sum-of-Squares (SOS) method, surprisingly suggest that this tailoring is not necessary and that a single efficient algorithm could achieve the best possible guarantees for a wide range of different problems. The UGC is a tantalizing conjecture in computational complexity which, if true, would shed light on the complexity of a great many problems. In particular, this conjecture predicts that a single concrete algorithm provides optimal guarantees among all efficient algorithms for a large class of computational problems. The SOS method is a general approach for solving systems of polynomial constraints. This approach is studied in several scientific disciplines, including real algebraic geometry, proof complexity, control theory, and mathematical programming, and has found applications in fields as diverse as quantum information theory, formal verification, game theory, and many others. We survey some connections that were recently uncovered between the Unique Games Conjecture and the Sum-of-Squares method. In particular, we discuss new tools to rigorously bound the running time of the SOS method for obtaining approximate solutions to hard optimization problems, and how these tools suggest that the SOS method may provide new guarantees for many problems of interest, and possibly even refute the UGC.
    Comment: Survey. To appear in the proceedings of ICM 2014.
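    To make the SOS method concrete, the following minimal sketch (assuming the cvxpy and numpy packages; the polynomial is chosen purely for illustration and does not appear in the survey) certifies that p(x) = x^4 - 4x^2 + 4 is a sum of squares. Writing p(x) = z^T Q z over the monomial basis z = (1, x, x^2) for a positive semidefinite Gram matrix Q turns the question into a semidefinite program:

    import cvxpy as cp
    import numpy as np

    # p is a sum of squares iff p(x) = z^T Q z for the monomial vector
    # z = (1, x, x^2) and some positive semidefinite Gram matrix Q.
    Q = cp.Variable((3, 3), PSD=True)

    # Match the coefficients of p(x) = x^4 - 4x^2 + 4 against z^T Q z.
    constraints = [
        Q[0, 0] == 4,                 # constant term
        2 * Q[0, 1] == 0,             # coefficient of x
        2 * Q[0, 2] + Q[1, 1] == -4,  # coefficient of x^2
        2 * Q[1, 2] == 0,             # coefficient of x^3
        Q[2, 2] == 1,                 # coefficient of x^4
    ]

    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()

    print(prob.status)           # 'optimal' means a PSD Gram matrix exists
    print(np.round(Q.value, 3))  # here the constraints force Q = [[4,0,-2],[0,0,0],[-2,0,1]]

    The recovered Q factors as v v^T with v = (2, 0, -1), matching the hand decomposition p(x) = (2 - x^2)^2; higher-degree and multivariate instances scale the same way, with larger monomial bases and Gram matrices.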

    The Complexity of Public-Key Cryptography

    We survey the computational foundations for public-key cryptography. We discuss the computational assumptions that have been used as bases for public-key encryption schemes, and the types of evidence we have for the veracity of these assumptions. This survey/tutorial was published in the book "Tutorials on the Foundations of Cryptography", dedicated to Oded Goldreich on his 60th birthday.

    Structure vs Hardness through the Obfuscation Lens

    Much of modern cryptography, starting from public-key encryption and going beyond, is based on the hardness of structured (mostly algebraic) problems like factoring, discrete log, or finding short lattice vectors. While structure is perhaps what enables advanced applications, it also puts the hardness of these problems in question. In particular, this structure often places them in low (so-called structured) complexity classes such as NP ∩ coNP or statistical zero-knowledge (SZK). Is this structure really necessary? For some cryptographic primitives, such as one-way permutations and homomorphic encryption, we know that the answer is yes: they imply hard problems in NP ∩ coNP and SZK, respectively. In contrast, one-way functions do not imply such hard problems, at least not by black-box reductions. Yet, for many basic primitives such as public-key encryption, oblivious transfer, and functional encryption, we do not have any answer. We show that the above primitives, and many others, do not imply hard problems in NP ∩ coNP or SZK via black-box reductions. In fact, we first show that even the very powerful notion of Indistinguishability Obfuscation (IO) does not imply such hard problems, and then deduce the same for a large class of primitives that can be constructed from IO.
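    For intuition about the positive answer mentioned above, the classical argument (a standard observation, not spelled out in the abstract) that a one-way permutation f over {0,1}^n yields a hard language in NP ∩ coNP runs as follows. Consider

    \[
    L_f = \{\, (y, i) \;:\; \text{the } i\text{-th bit of } f^{-1}(y) \text{ is } 1 \,\}.
    \]

    Because f is a permutation, the preimage x = f^{-1}(y) is unique, and x itself is an efficiently checkable witness both for membership (its i-th bit is 1) and for non-membership (its i-th bit is 0), verified by testing f(x) = y; hence L_f lies in NP ∩ coNP. Deciding L_f for every index i recovers f^{-1}(y) bit by bit, so L_f must be hard if f is one-way.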