
    On Basing Private Information Retrieval on NP-Hardness

    The possibility of basing the security of cryptographic objects on the (minimal) assumption that $\mathsf{NP} \not\subseteq \mathsf{BPP}$ is at the very heart of complexity-theoretic cryptography. Most known results along these lines are negative, showing that, assuming widely believed complexity-theoretic conjectures, there are no reductions from an $\mathsf{NP}$-hard problem to the task of breaking certain cryptographic schemes. We make progress along this line of inquiry by showing that the security of single-server single-round private information retrieval (PIR) schemes cannot be based on $\mathsf{NP}$-hardness, unless the polynomial hierarchy collapses. Our main technical contribution is in showing how to break the security of a PIR protocol given an $\mathsf{SZK}$ oracle. Our result is tight in terms of both the correctness and the privacy parameter of the PIR scheme.
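
    For context (our notation, not text taken from the paper), here is one standard way to phrase a single-server single-round PIR scheme together with its correctness and privacy parameters.

```latex
\textbf{Single-server single-round PIR (informal; our notation).}
The client, holding an index $i \in [n]$, samples a query $q \leftarrow Q(i)$; the server,
holding a database $x \in \{0,1\}^n$, replies with an answer $a \leftarrow A(x, q)$; and the
client outputs $D(i, q, a)$. To be nontrivial, the total communication should be smaller
than $n$ bits.
\begin{itemize}
  \item \emph{Correctness:} for every $x$ and $i$,
        $\Pr[\, D(i, q, a) = x_i \,] \ge 1 - \varepsilon$.
  \item \emph{Privacy:} for all indices $i, j$, no polynomial-size distinguisher can tell the
        query distributions $Q(i)$ and $Q(j)$ apart with advantage better than $\delta$.
\end{itemize}
```

    The tightness claim in the abstract refers to these two parameters $\varepsilon$ and $\delta$.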

    On Basing Auxiliary-Input Cryptography on NP-Hardness via Nonadaptive Black-Box Reductions

    Constructing one-way functions based on NP-hardness is a central challenge in theoretical computer science. Unfortunately, Akavia et al. [Akavia et al., 2006] presented strong evidence that a nonadaptive black-box (BB) reduction is insufficient to solve this challenge. However, should we give up such a central proof technique even for an intermediate step? In this paper, we turn our eyes from standard cryptographic primitives to weaker cryptographic primitives that are allowed to take auxiliary input, and we continue to explore the capability of nonadaptive BB reductions to base auxiliary-input primitives on NP-hardness. Specifically, we prove the following:
    - if we base an auxiliary-input pseudorandom generator (AIPRG) on NP-hardness via a nonadaptive BB reduction, then the polynomial hierarchy collapses;
    - if we base an auxiliary-input one-way function (AIOWF) or auxiliary-input hitting set generator (AIHSG) on NP-hardness via a nonadaptive BB reduction, then an (i.o.-)one-way function also exists based on NP-hardness (via an adaptive BB reduction).
    These theorems extend our knowledge of nonadaptive BB reductions beyond the current worst-case-to-average-case framework. The first result provides new evidence that nonadaptive BB reductions are insufficient to base AIPRG on NP-hardness. The second result yields a weaker but still surprising consequence of nonadaptive BB reductions, namely, a one-way function based on NP-hardness. In fact, the second result can be interpreted in two opposite ways. Pessimistically, it shows that basing AIOWF or AIHSG on NP-hardness via nonadaptive BB reductions is harder than constructing a one-way function based on NP-hardness, which can be regarded as a negative result. Note that AIHSG is a weak primitive implied even by the hardness of learning; thus, this pessimistic view provides conceptually stronger limitations than the currently known limitations on nonadaptive BB reductions. Optimistically, it offers a new hope: a breakthrough construction of auxiliary-input primitives might also yield constructions of standard cryptographic primitives. This optimistic view enhances the significance of further investigation into constructing auxiliary-input or other intermediate cryptographic primitives instead of standard cryptographic primitives.
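
    For background (our phrasing of a standard definition, not text from the paper), the auxiliary-input relaxation flips the usual security quantifier: the adversary only has to fail on infinitely many auxiliary inputs rather than on all sufficiently long inputs.

```latex
\textbf{Auxiliary-input one-way function (informal; our phrasing).}
A polynomial-time computable $f(z, x)$ is an AIOWF if for every probabilistic
polynomial-time adversary $A$ and every polynomial $p$, there exist infinitely many
auxiliary inputs $z$ such that
\[
  \Pr_{x \leftarrow \{0,1\}^{|z|}}\Bigl[\, f\bigl(z, A(z, f(z, x))\bigr) = f(z, x) \,\Bigr]
  \;<\; \frac{1}{p(|z|)} .
\]
A standard one-way function must defeat every adversary on all sufficiently long inputs,
which is why auxiliary-input primitives are weaker and a natural intermediate target for
basing cryptography on NP-hardness.
```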

    Distributed PCP Theorems for Hardness of Approximation in P

    We present a new distributed model of probabilistically checkable proofs (PCP). A satisfying assignment $x \in \{0,1\}^n$ to a CNF formula $\varphi$ is shared between two parties, where Alice knows $x_1, \dots, x_{n/2}$, Bob knows $x_{n/2+1}, \dots, x_n$, and both parties know $\varphi$. The goal is to have Alice and Bob jointly write a PCP that $x$ satisfies $\varphi$, while exchanging little or no information. Unfortunately, this model as-is does not allow for nontrivial query complexity. Instead, we focus on a non-deterministic variant, where the players are helped by Merlin, a third party who knows all of $x$. Using our framework, we obtain, for the first time, PCP-like reductions from the Strong Exponential Time Hypothesis (SETH) to approximation problems in P. In particular, under SETH we show that there are no truly-subquadratic approximation algorithms for Bichromatic Maximum Inner Product over $\{0,1\}$-vectors, Bichromatic LCS Closest Pair over permutations, Approximate Regular Expression Matching, and Diameter in Product Metric. All our inapproximability factors are nearly-tight. In particular, for the first two problems we obtain nearly-polynomial factors of $2^{(\log n)^{1-o(1)}}$; only $(1+o(1))$-factor lower bounds (under SETH) were known before.
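
    As a point of reference for the first lower bound (an illustration of the problem, not material from the paper), the naive baseline for Bichromatic Maximum Inner Product examines all pairs in quadratic time; under SETH, the result above rules out truly-subquadratic algorithms even for approximating this quantity within a $2^{(\log n)^{1-o(1)}}$ factor.

```python
from itertools import product
from typing import List

def bichromatic_max_ip(A: List[List[int]], B: List[List[int]]) -> int:
    """Naive O(|A| * |B| * d) baseline: the maximum inner product <a, b>
    over all bichromatic pairs a in A, b in B of {0,1}-vectors of length d."""
    return max(sum(x & y for x, y in zip(a, b)) for a, b in product(A, B))

# Toy example with four-dimensional vectors.
A = [[1, 0, 1, 1], [0, 1, 0, 0]]
B = [[1, 1, 1, 0], [0, 0, 1, 1]]
print(bichromatic_max_ip(A, B))  # 2
```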

    On Basing Search SIVP on NP-Hardness

    The possibility of basing cryptography on the minimal assumption $\mathsf{NP} \not\subseteq \mathsf{BPP}$ is at the very heart of complexity-theoretic cryptography. The closest we have gotten so far is lattice-based cryptography, whose average-case security is based on the worst-case hardness of approximate shortest vector problems on integer lattices. The state of the art is the construction of a one-way function (and collision-resistant hash function) based on the hardness of the $\tilde{O}(n)$-approximate shortest independent vector problem $\mathrm{SIVP}_{\tilde{O}(n)}$. Although SIVP is NP-hard in its exact version, Guruswami et al. (CCC 2004) showed that $\mathrm{gapSIVP}_{\sqrt{n/\log n}}$ is in $\mathsf{NP} \cap \mathsf{coAM}$ and thus unlikely to be NP-hard. Indeed, any language that can be reduced to $\mathrm{gapSIVP}_{\tilde{O}(\sqrt{n})}$ (under general probabilistic polynomial-time adaptive reductions) is in $\mathsf{AM} \cap \mathsf{coAM}$ by the results of Peikert and Vaikuntanathan (CRYPTO 2008) and Mahmoody and Xiao (CCC 2010). However, none of these results apply to reductions to search problems, still leaving open a ray of hope: can NP be reduced to solving search SIVP with approximation factor $\tilde{O}(n)$? We eliminate this possibility by showing that any language that can be reduced to solving search $\mathrm{SIVP}_{\gamma}$ with any approximation factor $\gamma(n) = \omega(n \log n)$ lies in $\mathsf{AM} \cap \mathsf{coAM}$. As a side product, we show that any language that can be reduced to discrete Gaussian sampling with parameter $\tilde{O}(\sqrt{n}) \cdot \lambda_n$ lies in $\mathsf{AM} \cap \mathsf{coAM}$.
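
    For reference (standard lattice-problem definitions, not text from the paper), the search problem in question is the following.

```latex
\textbf{Search $\mathrm{SIVP}_{\gamma}$.}
Given a basis $B \in \mathbb{Z}^{n \times n}$ of a full-rank lattice $\mathcal{L} = \mathcal{L}(B)$,
output $n$ linearly independent lattice vectors $v_1, \dots, v_n \in \mathcal{L}$ such that
\[
  \max_{1 \le i \le n} \|v_i\| \;\le\; \gamma(n) \cdot \lambda_n(\mathcal{L}),
\]
where $\lambda_n(\mathcal{L})$, the $n$-th successive minimum, is the smallest $r$ such that
$\mathcal{L}$ contains $n$ linearly independent vectors of norm at most $r$. The decision
version $\mathrm{gapSIVP}_{\gamma}$ asks only to distinguish $\lambda_n(\mathcal{L}) \le r$
from $\lambda_n(\mathcal{L}) > \gamma(n) \cdot r$ for a given threshold $r$.
```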

    Unprovability of Leakage-Resilient Cryptography Beyond the Information-Theoretic Limit

    In recent years, leakage-resilient cryptography---the design of cryptographic protocols resilient to bounded leakage of honest players' secrets---has received significant attention. A major limitation of known provably-secure constructions (based on polynomial hardness assumptions) is that they require the secrets to have sufficient actual (i.e., information-theoretic), as opposed to computational, min-entropy even after the leakage. In this work, we present barriers to provably-secure constructions beyond the "information-theoretic barrier": Assume the existence of collision-resistant hash functions. Then, no NP search problem with a $2^{n^{\epsilon}}$-bounded number of witnesses can be proven (even worst-case) hard in the presence of $O(n^{\epsilon})$ bits of computationally-efficient leakage of the witness, using a black-box reduction to any $O(1)$-round assumption. In particular, this implies that $O(n^{\epsilon})$-leakage-resilient injective one-way functions, and more generally, one-way functions with at most $2^{n^{\epsilon}}$ pre-images, cannot be based on any "standard" complexity assumption using a black-box reduction.

    Efficient Fully Homomorphic Encryption from (Standard) LWE

    A fully homomorphic encryption (FHE) scheme allows anyone to transform an encryption of a message, m, into an encryption of any (efficient) function of that message, f(m), without knowing the secret key. We present a leveled FHE scheme that is based solely on the (standard) learning with errors (LWE) assumption. (Leveled FHE schemes are initialized with a bound on the maximal evaluation depth. However, this restriction can be removed by assuming “weak circular security.”) Applying known results on LWE, the security of our scheme is based on the worst-case hardness of “short vector problems” on arbitrary lattices. Our construction improves on previous works in two aspects: 1. We show that “somewhat homomorphic” encryption can be based on LWE, using a new relinearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. 2. We deviate from the “squashing paradigm” used in all previous works. We introduce a new dimension-modulus reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, without introducing additional assumptions. Our scheme has very short ciphertexts, and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \mathrm{polylog}(k) + \log |DB|$ bits per single-bit query, in order to achieve security against $2^k$-time adversaries (based on the best known attacks against our underlying assumptions). Key words: cryptology, public-key encryption, fully homomorphic encryption, learning with errors, private information retrieval.
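
    To make the additive homomorphism concrete (a toy sketch with illustrative parameters, not the scheme or parameters from the paper), the following Regev-style symmetric-key LWE encryption of single bits lets ciphertexts be added coordinate-wise, which decrypts to the XOR of the plaintexts as long as the accumulated noise stays below q/4; relinearization and dimension-modulus reduction are what the construction above adds on top to also handle multiplications and keep ciphertexts short.

```python
import numpy as np

# Toy parameters for illustration only; real schemes derive them from the
# LWE hardness analysis.
q = 2**15      # ciphertext modulus
n = 64         # LWE dimension
rng = np.random.default_rng(0)

def keygen():
    return rng.integers(0, q, size=n)                  # secret s in Z_q^n

def encrypt(s, m, noise_bound=4):
    """Encrypt bit m as (a, b) with b = <a, s> + e + m * floor(q/2) (mod q)."""
    a = rng.integers(0, q, size=n)
    e = int(rng.integers(-noise_bound, noise_bound + 1))
    b = (int(a @ s) + e + m * (q // 2)) % q
    return a, b

def decrypt(s, ct):
    a, b = ct
    phase = (b - int(a @ s)) % q                       # = noise + m * q/2 (mod q)
    return 1 if q // 4 < phase < 3 * q // 4 else 0

def add(ct1, ct2):
    """Homomorphic addition: component-wise sum; plaintexts XOR, noise adds."""
    (a1, b1), (a2, b2) = ct1, ct2
    return (a1 + a2) % q, (b1 + b2) % q

s = keygen()
c0, c1 = encrypt(s, 0), encrypt(s, 1)
assert decrypt(s, add(c0, c1)) == 1   # 0 XOR 1
assert decrypt(s, add(c1, c1)) == 0   # 1 XOR 1
```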

    Structure vs Hardness through the Obfuscation Lens

    Much of modern cryptography, starting from public-key encryption and going beyond, is based on the hardness of structured (mostly algebraic) problems like factoring, discrete log, or finding short lattice vectors. While structure is perhaps what enables advanced applications, it also puts the hardness of these problems in question. In particular, this structure often places them in low (so-called structured) complexity classes such as $\mathsf{NP} \cap \mathsf{coNP}$ or statistical zero-knowledge ($\mathsf{SZK}$). Is this structure really necessary? For some cryptographic primitives, such as one-way permutations and homomorphic encryption, we know that the answer is yes: they imply hard problems in $\mathsf{NP} \cap \mathsf{coNP}$ and $\mathsf{SZK}$, respectively. In contrast, one-way functions do not imply such hard problems, at least not by black-box reductions. Yet, for many basic primitives such as public-key encryption, oblivious transfer, and functional encryption, we do not have any answer. We show that the above primitives, and many others, do not imply hard problems in $\mathsf{NP} \cap \mathsf{coNP}$ or $\mathsf{SZK}$ via black-box reductions. In fact, we first show that even the very powerful notion of Indistinguishability Obfuscation (IO) does not imply such hard problems, and then deduce the same for a large class of primitives that can be constructed from IO.
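
    To illustrate the kind of implication referred to above (a classical observation, not this paper's argument), here is how a one-way permutation yields a hard problem in $\mathsf{NP} \cap \mathsf{coNP}$.

```latex
\textbf{A classical observation.}
Let $f \colon \{0,1\}^n \to \{0,1\}^n$ be a one-way permutation and define
\[
  L \;=\; \bigl\{\, (y, i) \;:\; \text{the $i$-th bit of } f^{-1}(y) \text{ equals } 1 \,\bigr\}.
\]
Then $L \in \mathsf{NP} \cap \mathsf{coNP}$: the unique preimage $x = f^{-1}(y)$ is an
efficiently verifiable witness both for membership ($x_i = 1$) and for non-membership
($x_i = 0$), since $f$ is an efficiently computable permutation. Moreover, any efficient
decider for $L$ recovers $f^{-1}(y)$ bit by bit, contradicting one-wayness; hence $L$ is hard.
The question studied above is whether primitives such as public-key encryption or IO force
analogous structure, and the answer given is no, at least for black-box reductions.
```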

    Hardness vs. (Very Little) Structure in Cryptography: A Multi-Prover Interactive Proofs Perspective

    The hardness of highly-structured computational problems gives rise to a variety of public-key primitives. On one hand, the structure exhibited by such problems underlies the basic functionality of public-key primitives, but on the other hand it may endanger public-key cryptography in its entirety via potential algorithmic advances. This subtle interplay initiated a fundamental line of research on whether structure is inherently necessary for cryptography, starting with Rudich's early work (PhD Thesis '88) and recently leading to that of Bitansky, Degwekar and Vaikuntanathan (CRYPTO '17). Identifying the structure of computational problems with their corresponding complexity classes, Bitansky et al. proved that a variety of public-key primitives (e.g., public-key encryption, oblivious transfer and even functional encryption) cannot be used in a black-box manner to construct either any hard language that has $\mathsf{NP}$-verifiers both for the language itself and for its complement, or any hard language (and even promise problem) that has a statistical zero-knowledge proof system -- corresponding to hardness in the structured classes $\mathsf{NP} \cap \mathsf{coNP}$ or $\mathsf{SZK}$, respectively, from a black-box perspective. In this work we prove that the same variety of public-key primitives do not inherently require even very little structure in a black-box manner: We prove that they do not imply any hard language that has multi-prover interactive proof systems both for the language and for its complement -- corresponding to hardness in the class $\mathsf{MIP} \cap \mathsf{coMIP}$ from a black-box perspective. Conceptually, given that $\mathsf{MIP} = \mathsf{NEXP}$, our result rules out languages with very little structure. Additionally, we prove a similar result for collision-resistant hash functions, and more generally for any cryptographic primitive that exists relative to a random oracle. Already the cases of languages that have $\mathsf{IP}$ or $\mathsf{AM}$ proof systems both for the language itself and for its complement, which we rule out as immediate corollaries, lead to intriguing insights. For the case of $\mathsf{IP}$, where our result can be circumvented using non-black-box techniques, we reveal a gap between black-box and non-black-box techniques. For the case of $\mathsf{AM}$, where circumventing our result via non-black-box techniques would be a major development, we both strengthen and unify the proofs of Bitansky et al. for languages that have $\mathsf{NP}$-verifiers both for the language itself and for its complement and for languages that have a statistical zero-knowledge proof system.