
    A Relativization Perspective on Meta-Complexity

    Meta-complexity studies the complexity of computational problems about complexity theory, such as the Minimum Circuit Size Problem (MCSP) and its variants. We show that a relativization barrier applies to many important open questions in meta-complexity. We give relativized worlds where:
    1) MCSP can be solved in deterministic polynomial time, but the search version of MCSP cannot be solved in deterministic polynomial time, even approximately. In contrast, Carmosino, Impagliazzo, Kabanets, and Kolokolova [CCC'16] gave a randomized approximate search-to-decision reduction for MCSP with a relativizing proof.
    2) The complexities of MCSP[2^{n/2}] and MCSP[2^{n/4}] are different, in both worst-case and average-case settings. Thus the complexity of MCSP is not "robust" to the choice of the size function.
    3) Levin's time-bounded Kolmogorov complexity Kt(x) can be approximated to within a factor of (2+ε) in polynomial time, for any ε > 0.
    4) Natural proofs do not exist, and neither do auxiliary-input one-way functions. In contrast, Santhanam [ITCS'20] gave a relativizing proof that the non-existence of natural proofs implies the existence of one-way functions under a conjecture about optimal hitting sets.
    5) DistNP does not reduce to GapMINKT by a family of "robust" reductions. This presents a technical barrier to resolving a question of Hirahara [FOCS'20].
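    For reference, standard definitions from the meta-complexity literature (recalled here for the reader, not quoted from the abstract): MCSP[s(n)] asks, given the 2^n-bit truth table of a Boolean function f : {0,1}^n → {0,1}, whether f is computable by a circuit of size at most s(n), and Levin's time-bounded Kolmogorov complexity is
        Kt(x) = min { |p| + log t : the universal machine U outputs x on input p within t steps }.
    Under these definitions, item 3) asserts a relativized world in which some polynomial-time algorithm outputs, on every input x, a value within a factor of (2+ε) of Kt(x).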

    A New View on Worst-Case to Average-Case Reductions for NP Problems

    We study the result by Bogdanov and Trevisan (FOCS, 2003), who show that under reasonable assumptions, there is no non-adaptive worst-case to average-case reduction that bases the average-case hardness of an NP problem on the worst-case complexity of an NP-complete problem. We replace the hiding and the heavy-samples protocols of [BT03] by employing the histogram verification protocol of Haitner, Mahmoody, and Xiao (CCC, 2010), which proves to be very useful in this context. Once the histogram is verified, our hiding protocol is directly public-coin, whereas the intuition behind the original protocol inherently relies on private coins.
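    One common way to formalize the histogram used in such protocols (a sketch under our own conventions, not necessarily the exact formalization of Haitner, Mahmoody, and Xiao): for a random variable Y over {0,1}^n, group outcomes into buckets by probability mass,
        B_i = { y : 2^{-(i+1)} < Pr[Y = y] ≤ 2^{-i} },  i = 0, 1, ..., poly(n),
    and take the histogram to be the vector of bucket sizes (|B_0|, |B_1|, ...). Once the verifier is convinced of an approximate histogram, claims such as "this sample is heavy (carries large probability mass)" can be checked against it, which is what allows the subsequent hiding protocol to be public-coin.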

    Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions

    What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness of NP or PH:
    - NTIME[n] cannot be solved in quasi-linear time on average if UP ⊄ DTIME[2^{Õ(√n)}].
    - Σ_2TIME[n] cannot be solved in quasi-linear time on average if Σ_kSAT cannot be solved in time 2^{Õ(√n)} for some constant k. Previously, it was not known whether even average-case hardness of Σ_3SAT implies the average-case hardness of Σ_2TIME[n].
    - Under the Exponential-Time Hypothesis (ETH), there is no average-case n^{1+ε}-time algorithm for NTIME[n] whose running time can be estimated in time n^{1+ε} for some constant ε > 0.
    Our results are given by generalizing the non-black-box worst-case-to-average-case connections presented by Hirahara (STOC 2021) to the setting of fine-grained complexity. To do so, we construct quite efficient complexity-theoretic pseudorandom generators under the assumption that nondeterministic linear time is easy on average, which may be of independent interest.
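    For context, the Exponential-Time Hypothesis invoked in the last item is the standard one (stated for the reader, not quoted from the paper):
        ETH: there is a constant δ > 0 such that no deterministic algorithm solves 3-SAT on n variables in time 2^{δn},
    and Σ_kSAT denotes the canonical complete problem for the k-th level Σ^p_k of the polynomial-time hierarchy, i.e., validity of a quantified Boolean formula with k alternating quantifier blocks starting with ∃.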

    An Atypical Survey of Typical-Case Heuristic Algorithms

    Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice. Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News.
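    One standard way to quantify "frequency of correctness" (a common formalization, not necessarily the exact one used in the survey): an algorithm A is a heuristic for a language L with error rate δ(n) if, for every input length n,
        Pr_{x ∈ {0,1}^n} [ A(x) = L(x) ] ≥ 1 − δ(n),
    and the question becomes how small δ(n) can be for NP-hard L, or for deterministic simulations of randomized computation, without triggering unlikely complexity class collapses.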

    Structural Average Case Complexity

    Levin introduced an average-case complexity measure, based on a notion of “polynomial on average,” and defined “average-case polynomial-time many-one reducibility” among randomized decision problems. We generalize his notions of average-case complexity classes, Random-NP and Average-P. Ben-David et al. use the notation 〈C, F〉 to denote the set of randomized decision problems (L, μ) such that L is a set in C and μ is a probability density function in F. This paper introduces Aver〈C, F〉 as the class of randomized decision problems (L, μ) such that L is computed by a type-C machine on μ-average and μ is a density function in F. These notations capture all known average-case complexity classes; for example, Random-NP = 〈NP, P-comp〉 and Average-P = Aver〈P, ∗〉, where P-comp denotes the set of density functions whose distributions are computable in polynomial time, and ∗ denotes the set of all density functions. Mainly studied are polynomial-time reductions between randomized decision problems: many-one, deterministic Turing, and nondeterministic Turing reductions, and the average-case versions of them. Based on these reducibilities, structural properties of average-case complexity classes are discussed. We give average-case analogues of concepts in worst-case complexity theory, in particular the polynomial-time hierarchy and Turing self-reducibility, and we show that all known complete sets for Random-NP are Turing self-reducible. A new notion of “real polynomial-time computations” is introduced based on average polynomial-time computations for arbitrary distributions from a fixed set, and it is used to characterize the worst-case complexity classes Δ^p_k and Σ^p_k of the polynomial-time hierarchy.
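    Levin's notion of “polynomial on average,” which the Aver〈C, F〉 classes build on, is standard (recalled here, not quoted from the paper): a time bound t is polynomial on μ-average if there exists an ε > 0 such that
        Σ_x μ(x) · t(x)^ε / |x| < ∞,
    so (L, μ) ∈ Aver〈P, ∗〉 means that some deterministic machine decides L within a time bound that is polynomial on μ-average.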

    The Journey from NP to TFNP Hardness

    The class TFNP is the search analog of NP with the additional guarantee that any instance has a solution. TFNP has attracted extensive attention due to its natural syntactic subclasses that capture the computational complexity of important search problems from algorithmic game theory, combinatorial optimization, and computational topology. Thus, one of the main research objectives in the context of TFNP is to search for efficient algorithms for its subclasses and, at the same time, to prove hardness results where efficient algorithms cannot exist. Currently, no problem in TFNP is known to be hard under assumptions such as NP hardness, the existence of one-way functions, or even public-key cryptography. The only known hardness results are based on less general assumptions such as the existence of collision-resistant hash functions, one-way permutations, or less established cryptographic primitives (e.g. program obfuscation or functional encryption). Several works explained this status by showing various barriers to proving hardness of TFNP. In particular, it has been shown that TFNP hardness cannot be based on worst-case NP hardness, unless NP = coNP. Therefore, we ask the following question: What is the weakest assumption sufficient for showing hardness in TFNP? In this work, we answer this question and show that hard-on-average TFNP problems can be based on the weak assumption that there exists a hard-on-average language in NP. In particular, this includes the assumption of the existence of one-way functions. In terms of techniques, we show an interesting interplay between problems in TFNP, derandomization techniques, and zero-knowledge proofs.
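    For reference, the standard definition (not quoted from the paper): TFNP consists of the total NP search problems, i.e., relations R satisfying
        R ∈ P,  R(x, y) ⇒ |y| ≤ poly(|x|),  and  ∀x ∃y R(x, y),
    where the computational task is, given x, to find some y with R(x, y). “Hard-on-average” then means, under one common formalization, that no polynomial-time algorithm finds a solution with non-negligible probability over instances drawn from an efficiently samplable distribution.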

    A Framework of Quantum Strong Exponential-Time Hypotheses


    Logical strength of complexity theory and a formalization of the PCP theorem in bounded arithmetic

    We present several known formalizations of theorems from computational complexity in bounded arithmetic and formalize the PCP theorem in the theory PV1 (no formalization of this theorem was previously known). This includes a formalization of the existence and of some properties of (n,d,λ)-graphs in PV1.
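    For reference (standard statements, not quoted from the paper): the PCP theorem asserts
        NP = PCP[O(log n), O(1)],
    i.e., every NP language has a probabilistic verifier that tosses O(log n) coins, reads O(1) bits of a claimed proof, accepts valid proofs of true statements with probability 1, and accepts any claimed proof of a false statement with probability at most 1/2. An (n,d,λ)-graph is a d-regular graph on n vertices all of whose adjacency-matrix eigenvalues, other than the largest one d, have absolute value at most λ.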

    Complexity of certificates, heuristics, and counting types, with applications to cryptography and circuit theory

    This habilitation thesis investigates the structure and properties of complexity classes such as P and NP, in particular with regard to: certificate complexity, one-way functions, heuristics versus NP-completeness, and counting complexity. On the last topic, the following are studied in particular: (a) the complexity of counting properties of circuits, (b) separations of counting classes with immunity, and (c) the complexity of counting the solutions of "tally" NP problems.

    Bounded Relativization

    Relativization is one of the most fundamental concepts in complexity theory, which explains the difficulty of resolving major open problems. In this paper, we propose a weaker notion of relativization called bounded relativization. For a complexity class C, we say that a statement is C-relativizing if the statement holds relative to every oracle in C. It is easy to see that every result that relativizes also C-relativizes for every complexity class C. On the other hand, we observe that many non-relativizing results, such as IP = PSPACE, are in fact PSPACE-relativizing. First, we use the idea of bounded relativization to obtain new lower bound results, including the following nearly maximum circuit lower bound: for every constant ε > 0, BPE^{MCSP}/2^{εn} ⊄ SIZE[2^n/n]. We prove this by PSPACE-relativizing the recent pseudodeterministic pseudorandom generator by Lu, Oliveira, and Santhanam (STOC 2021). Next, we study the limitations of PSPACE-relativizing proof techniques, and show that a seemingly minor improvement over the known results using PSPACE-relativizing techniques would imply a breakthrough separation NP ≠ L. For example:
    - Impagliazzo and Wigderson (JCSS 2001) proved that if EXP ≠ BPP, then BPP admits infinitely-often subexponential-time heuristic derandomization. We show that their result is PSPACE-relativizing, and that improving it to worst-case derandomization using PSPACE-relativizing techniques implies NP ≠ L.
    - Oliveira and Santhanam (STOC 2017) recently proved that every dense subset in P admits an infinitely-often subexponential-time pseudodeterministic construction, which we observe is PSPACE-relativizing. Improving this to almost-everywhere (pseudodeterministic) or (infinitely-often) deterministic constructions by PSPACE-relativizing techniques implies NP ≠ L.
    - Santhanam (SICOMP 2009) proved that pr-MA does not have fixed polynomial-size circuits. This lower bound can be shown to be PSPACE-relativizing, and we show that improving it to an almost-everywhere lower bound using PSPACE-relativizing techniques implies NP ≠ L.
    In fact, we show that if we can use PSPACE-relativizing techniques to obtain the above-mentioned improvements, then PSPACE ≠ EXPH. We obtain our barrier results by constructing suitable oracles computable in EXPH relative to which these improvements are impossible.
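    To unpack the notation in the circuit lower bound above (standard conventions, not taken from the paper): BPE = BPTIME[2^{O(n)}] is bounded-error probabilistic linear-exponential time, the superscript MCSP denotes oracle access to the Minimum Circuit Size Problem, /2^{εn} denotes 2^{εn} bits of non-uniform advice, and SIZE[2^n/n] is the class of function families computable by Boolean circuits of size 2^n/n, which is maximal up to lower-order terms by Shannon's counting argument and Lupanov's construction. Thus
        for every ε > 0:  BPE^{MCSP}/2^{εn} ⊄ SIZE[2^n/n]
    says that some function computable in randomized 2^{O(n)} time with an MCSP oracle and 2^{εn} bits of advice requires circuits of size exceeding 2^n/n on infinitely many input lengths.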