9 research outputs found

    Using autoreducibility to separate complexity classes

    Pseudorandom Self-Reductions for NP-Complete Problems

    A language L is random-self-reducible if deciding membership in L can be reduced (in polynomial time) to deciding membership in L for uniformly random instances. It is known that several "number theoretic" problems (such as computing the permanent of a matrix) admit random self-reductions. Feigenbaum and Fortnow showed that NP-complete languages are not non-adaptively random-self-reducible unless the polynomial-time hierarchy collapses, giving suggestive evidence that NP may not admit random self-reductions. Hirahara and Santhanam introduced a weakening of random self-reductions that they called pseudorandom self-reductions, in which a language L is reduced to a distribution that is computationally indistinguishable from the uniform distribution. They then showed that the Minimum Circuit Size Problem (MCSP) admits a non-adaptive pseudorandom self-reduction, and suggested that this gave further evidence distinguishing MCSP from standard NP-complete problems. We show that, in fact, the Clique problem admits a non-adaptive pseudorandom self-reduction, assuming the planted clique conjecture. More generally, we show the following. Call a property of graphs Π hereditary if G ∈ Π implies H ∈ Π for every induced subgraph H of G. We show that for any infinite hereditary property Π, the problem of finding a maximum induced subgraph H ∈ Π of a given graph G admits a non-adaptive pseudorandom self-reduction.
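
    For readers unfamiliar with the terminology, the following is a minimal formalization of the hereditary-property notion used above; the notation (Π for the property, G[S] for the subgraph of G induced by S) is ours rather than the paper's.

        \[
          \Pi \text{ is hereditary} \iff \big(\, G \in \Pi \implies G[S] \in \Pi \ \text{ for every } S \subseteq V(G) \,\big),
          \qquad
          \mathrm{MaxInd}_{\Pi}(G) = \max \{\, |S| : S \subseteq V(G),\ G[S] \in \Pi \,\}.
        \]

    Taking Π to be the family of all complete graphs (an infinite hereditary property) makes MaxInd_Π exactly the Clique problem, which is how the general theorem subsumes the planted-clique-based result for Clique.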

    On Basing Auxiliary-Input Cryptography on NP-Hardness via Nonadaptive Black-Box Reductions

    Constructing one-way functions based on NP-hardness is a central challenge in theoretical computer science. Unfortunately, Akavia et al. [Akavia et al., 2006] presented strong evidence that nonadaptive black-box (BB) reductions are insufficient to solve this challenge. However, should we give up such a central proof technique even for an intermediate step? In this paper, we turn our attention from standard cryptographic primitives to weaker cryptographic primitives that are allowed to take an auxiliary input, and we continue to explore the capability of nonadaptive BB reductions to base auxiliary-input primitives on NP-hardness. Specifically, we prove the following:
    - if we base an auxiliary-input pseudorandom generator (AIPRG) on NP-hardness via a nonadaptive BB reduction, then the polynomial hierarchy collapses;
    - if we base an auxiliary-input one-way function (AIOWF) or an auxiliary-input hitting set generator (AIHSG) on NP-hardness via a nonadaptive BB reduction, then an (i.o.-)one-way function also exists based on NP-hardness (via an adaptive BB reduction).
    These theorems extend our knowledge of nonadaptive BB reductions beyond the current worst-case-to-average-case framework. The first result provides new evidence that nonadaptive BB reductions are insufficient to base AIPRG on NP-hardness. The second result yields a weaker but still surprising consequence of nonadaptive BB reductions, namely a one-way function based on NP-hardness. In fact, the second result can be interpreted in two opposite ways. Pessimistically, it shows that basing AIOWF or AIHSG on NP-hardness via nonadaptive BB reductions is at least as hard as constructing a one-way function based on NP-hardness, which can be regarded as a negative result. Note that AIHSG is a weak primitive implied even by the hardness of learning; thus, this pessimistic view provides conceptually stronger limitations than the currently known limitations on nonadaptive BB reductions. Optimistically, it offers new hope: a breakthrough construction of auxiliary-input primitives might also yield constructions of standard cryptographic primitives. This optimistic view strengthens the case for further investigation into constructing auxiliary-input or other intermediate cryptographic primitives instead of standard cryptographic primitives.
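
    As background (a standard formulation, not a definition quoted from the paper): a polynomial-time computable family f = {f_z : {0,1}^{p(|z|)} → {0,1}^*} is an auxiliary-input one-way function if every probabilistic polynomial-time adversary fails to invert it on infinitely many auxiliary inputs z, i.e.

        \[
          \forall \text{ PPT } A\ \ \exists \text{ infinitely many } z:\quad
          \Pr_{x \leftarrow \{0,1\}^{p(|z|)}}\Big[\, A\big(z, f_z(x)\big) \in f_z^{-1}\big(f_z(x)\big) \,\Big] \le \mathrm{negl}(|z|).
        \]

    AIPRGs and AIHSGs relax pseudorandom generators and hitting set generators in the same way; the exact quantifier pattern (infinitely-often versus almost-everywhere security) varies across the literature, and the paper's own definitions should be taken as authoritative.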

    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally-testable and locally-decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correction algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
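
    As a concrete illustration of local decodability (my example; the survey covers far more general constructions), the sketch below implements the classical 2-query local decoder for the Hadamard code: any single message bit can be recovered by reading only two codeword positions, and each read pair is correct whenever both positions are uncorrupted, so a δ fraction of corruptions is tolerated with error probability at most 2δ per trial.

        import random

        def hadamard_encode(x_bits):
            # Codeword position a (0 <= a < 2^k) stores the inner product <x, a> mod 2.
            k = len(x_bits)
            return [sum(x_bits[j] & ((a >> j) & 1) for j in range(k)) % 2
                    for a in range(2 ** k)]

        def local_decode_bit(codeword, k, i, trials=9):
            # 2-query local decoder for bit i: pick a random a and read positions
            # a and a XOR e_i; the XOR of the two symbols equals x_i whenever both
            # reads hit uncorrupted positions. A majority vote over independent
            # trials boosts the success probability.
            votes = sum(codeword[a] ^ codeword[a ^ (1 << i)]
                        for a in (random.randrange(2 ** k) for _ in range(trials)))
            return int(2 * votes > trials)

        if __name__ == "__main__":
            x = [1, 0, 1, 1]                      # 4-bit message
            word = hadamard_encode(x)             # codeword of length 2^4 = 16
            for pos in random.sample(range(len(word)), 2):
                word[pos] ^= 1                    # corrupt a small fraction of positions
            print([local_decode_bit(word, len(x), i) for i in range(len(x))])  # usually prints x

    The Hadamard code has exponentially small rate; much of the work surveyed in the paper concerns achieving far better trade-offs between rate and query complexity.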

    Language compression and pseudorandom generators

    Security ranking among assumptions within the Uber assumption framework

    Over the past decade, bilinear maps have been used to build a large variety of cryptosystems. In parallel to these new functionalities, we have also seen the emergence of many new security assumptions. This leads to the general question of how to compare two such assumptions. Boneh, Boyen and Goh introduced the Uber assumption as an attempt to offer a general framework for security assessment. Their idea is to propose a generic security assumption that can be specialized to suit the needs of any proof of a protocol involving bilinear pairings. Even though the Uber assumption has only been stated in the bilinear setting, it can easily be restated to deal with ordinary Diffie-Hellman groups and to assess other types of protocols. In this article, we explore some particular instances of the Uber assumption, namely the n-CDH-assumption, the nth-CDH-assumption and the Q-CDH-assumption. We analyse the relationships between these assumptions, in particular from a security point of view. Our analysis does not rely on any special property of the considered group(s) and does not use the generic group model.
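
    For context (standard background rather than material from the article): the plain computational Diffie-Hellman (CDH) assumption on a cyclic group G = <g> of prime order q states that

        \[
          \text{given } \big( g,\ g^{a},\ g^{b} \big) \ \text{ for uniform } a, b \leftarrow \mathbb{Z}_q,
          \quad \text{it is infeasible to compute } g^{ab}.
        \]

    The Uber framework, roughly speaking, generalizes this template by giving the adversary group elements whose exponents are evaluations of known polynomials in the secret values and asking it to compute an element whose exponent is not implied by them; the n-CDH, nth-CDH and Q-CDH assumptions are particular instantiations whose precise definitions are given in the article.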

    Average-Case Fine-Grained Hardness

    We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL '94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems -- namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) -- in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2-o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3-o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO '03).
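
    For concreteness (standard background rather than content of the paper): Orthogonal Vectors asks, given two sets of n Boolean vectors in d dimensions, whether some pair has inner product zero, and the conjecture referenced above asserts that no algorithm beats the exhaustive quadratic search by a polynomial factor, i.e. no n^{2-ε} · poly(d) time algorithm exists. The sketch below is that trivial baseline, not the paper's construction.

        def orthogonal_vectors(A, B):
            # Naive O(n^2 * d) check: is there a in A and b in B that share
            # no common 1-coordinate, i.e. whose inner product is zero?
            return any(all((x & y) == 0 for x, y in zip(a, b)) for a in A for b in B)

        # Example: (1, 0, 1) and (0, 1, 0) are orthogonal, so this prints True.
        print(orthogonal_vectors([(1, 0, 1), (1, 1, 0)], [(0, 1, 0), (1, 0, 0)]))

    The paper's reductions transfer this conjectured n^{2-o(1)} worst-case lower bound (and the analogous ones for 3SUM and APSP) to the average case of its algebraically structured functions.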