
    Average-case intractability vs. worst-case intractability

    We show that not all sets in NP (or other levels of the polynomial-time hierarchy) have efficient average-case algorithms unless the Arthur-Merlin classes MA and AM can be derandomized to NP and various subclasses of P/poly collapse to P. Furthermore, other complexity classes like P(PP) and PSPACE are shown to be intractable on average unless they are easy in the worst case.

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
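
    For orientation, the notion of "easy-on-average" used in this line of work is typically Levin's average polynomial time (restated here from memory, not quoted from the abstract): an algorithm A with running time t_A solves a distributional problem (L, D) in time polynomial on average if, roughly,

    \[
      \exists\, \varepsilon > 0 :\qquad
      \mathbb{E}_{x \sim D_n}\!\left[\, t_A(x)^{\varepsilon} \,\right] \;=\; O(n),
    \]

    where D_n is the distribution D restricted to inputs of length n; the exponent ε tolerates rare inputs on which A runs much longer than polynomial time.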

    Computational Complexity for Physicists

    These lecture notes are an informal introduction to the theory of computational complexity and its links to quantum computing and statistical mechanics. (Comment: references updated; reprint available from http://itp.nat.uni-magdeburg.de/~mertens/papers/complexity.shtm)

    CSP-Completeness And Its Applications

    We build on previous ideas used to study both reductions between CSP-refutation problems and improper learning, and reductions between CSP-refutation problems themselves, to expand some hardness results that depend on the assumption that refuting random CSP instances is hard for certain choices of predicates (like k-SAT). First, we argue the hardness of the fundamental problem of learning conjunctions in a one-sided PAC-esque learning model that has appeared in several forms over the years. In this model we focus on producing a hypothesis that foremost guarantees a small false-positive rate while minimizing the false-negative rate for such hypotheses. Further, we formalize a notion of CSP-refutation reductions and CSP-refutation completeness, and use these, along with candidate CSP-refutation-complete predicates, to provide further evidence for the hardness of several problems.
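
    To make the underlying assumption concrete (a purely illustrative sketch, not code or notation from the paper), a random k-SAT refutation instance is a CSP sampled clause-by-clause at a fixed clause density; the conjectured hardness is that of efficiently certifying unsatisfiability for most such samples. A minimal Python sampler:

    # Illustrative sketch only: sample a random k-SAT instance with alpha * n clauses
    # over n variables. Well above the satisfiability threshold such instances are
    # unsatisfiable w.h.p., and the refutation task is to certify this for most draws.
    import random

    def random_ksat(n_vars, k, alpha, seed=0):
        rng = random.Random(seed)
        clauses = []
        for _ in range(int(alpha * n_vars)):
            chosen = rng.sample(range(1, n_vars + 1), k)   # k distinct variables
            clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
        return clauses

    print(random_ksat(n_vars=10, k=3, alpha=6.0)[:3])      # first few clauses of one draw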

    Randomness and intractability in Kolmogorov complexity

    We introduce randomized time-bounded Kolmogorov complexity (rKt), a natural extension of Levin's notion [Leonid A. Levin, 1984] of Kolmogorov complexity. A string w of low rKt complexity can be decompressed from a short representation via a time-bounded algorithm that outputs w with high probability. This complexity measure gives rise to a decision problem over strings: MrKtP (The Minimum rKt Problem). We explore ideas from pseudorandomness to prove that MrKtP and its variants cannot be solved in randomized quasi-polynomial time. This exhibits a natural string compression problem that is provably intractable, even for randomized computations. Our techniques also imply that there is no n^{1-ε}-approximation algorithm for MrKtP running in randomized quasi-polynomial time. Complementing this lower bound, we observe connections between rKt, the power of randomness in computing, and circuit complexity. In particular, we present the first hardness magnification theorem for a natural problem that is unconditionally hard against a strong model of computation.
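
    For context, the definition of rKt (paraphrased from the standard formulation; the exact constants are not quoted from the abstract) mirrors Levin's Kt, replacing deterministic decompression by randomized decompression that succeeds with probability at least 2/3:

    \[
      \mathrm{rKt}(w) \;=\; \min\Big\{\, |p| + \lceil \log t \rceil \;:\;
        \Pr\big[\, U(p) \text{ outputs } w \text{ within } t \text{ steps} \,\big] \ge \tfrac{2}{3} \,\Big\},
    \]

    where U is a fixed universal randomized machine and the probability is over its internal coin flips.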

    Entanglement and complexity of interacting qubits subject to asymmetric noise

    The simulation complexity of predicting the time evolution of delocalized many-body quantum systems has attracted much recent interest, and simulations of such systems in real quantum hardware are promising routes to demonstrating a quantum advantage over classical machines. In these proposals, random noise is an obstacle that must be overcome for a faithful simulation, and a single error event can be enough to drive the system to a classically trivial state. We argue that this need not always be the case, and consider a modification to a leading quantum sampling problem, time evolution in an interacting Bose-Hubbard chain of transmon qubits [Neill et al., Science 2018], where each site in the chain has a driven coupling to a lossy resonator and particle number is no longer conserved. The resulting quantum dynamics are complex and highly nontrivial. We argue that this problem is harder to simulate than the isolated chain, and that it can achieve volume-law entanglement even in the strong noise limit, likely persisting up to system sizes beyond the scope of classical simulation. Further, we show that the metrics which suggest classical intractability for the isolated chain point to similar conclusions in the noisy case. These results suggest that quantum sampling problems including nontrivial noise could be good candidates for demonstrating a quantum advantage in near-term hardware. (Comment: 20 pages, 15 figures)
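
    To make the setup concrete, here is a minimal, hypothetical sketch (not the authors' simulation; parameters are arbitrary, and the loss is simplified to plain on-site photon decay rather than the driven coupling to a resonator described above) of a tiny lossy Bose-Hubbard chain evolved with QuTiP's Lindblad solver:

    # Minimal illustrative sketch: a 3-site Bose-Hubbard chain with on-site loss,
    # one excitation evolved under the Lindblad master equation (arbitrary units).
    import numpy as np
    import qutip as qt

    L, dim = 3, 3                      # chain length and local Fock-space cutoff
    J, U, kappa = 1.0, -2.0, 0.5       # hopping, on-site interaction, assumed loss rate

    a = [qt.tensor([qt.destroy(dim) if j == i else qt.qeye(dim) for j in range(L)])
         for i in range(L)]            # annihilation operator acting on site i

    H = sum(-J * (a[i].dag() * a[i + 1] + a[i + 1].dag() * a[i]) for i in range(L - 1))
    H += sum(0.5 * U * a[i].dag() * a[i].dag() * a[i] * a[i] for i in range(L))

    c_ops = [np.sqrt(kappa) * a[i] for i in range(L)]                    # loss on every site
    psi0 = qt.tensor([qt.basis(dim, 1)] + [qt.basis(dim, 0)] * (L - 1))  # one boson on site 0

    times = np.linspace(0.0, 10.0, 101)
    result = qt.mesolve(H, psi0, times, c_ops, e_ops=[a[0].dag() * a[0]])
    print(result.expect[0][-1])        # occupation of site 0 at the final time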

    Course and long term outcome of childhood onset epilepsy

    People Efficiently Explore the Solution Space of the Computationally Intractable Traveling Salesman Problem to Find Near-Optimal Tours

    Humans need to solve computationally intractable problems such as visual search, categorization, and simultaneous learning and acting, yet an increasing body of evidence suggests that their solutions to instantiations of these problems are near optimal. Computational complexity offers an explanation for this apparent paradox: (1) only a small portion of instances of such problems are actually hard, and (2) successful heuristics exploit structural properties of the typical instance to selectively improve parts that are likely to be sub-optimal. We hypothesize that these two ideas largely account for the good performance of humans on computationally hard problems. We tested part of this hypothesis by studying the solutions of 28 participants to 28 instances of the Euclidean Traveling Salesman Problem (TSP). Participants were provided feedback on the cost of their solutions and were allowed unlimited solution attempts (trials). We found a significant improvement between the first and last trials, and that solutions differed significantly from random tours that follow the convex hull and do not have self-crossings. More importantly, we found that participants modified their current best solutions in such a way that edges belonging to the optimal solution (“good” edges) were significantly more likely to stay than other edges (“bad” edges), a hallmark of structural exploitation. We found, however, that more trials harmed the participants' ability to tell good from bad edges, suggesting that after too many trials the participants “ran out of ideas.” In sum, we provide the first demonstration of significant performance improvement on the TSP under repetition and feedback, and evidence that human problem-solving may exploit the structure of hard problems, paralleling the behavior of state-of-the-art heuristics.
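
    As an illustration of the “good”/“bad” edge distinction (a hypothetical sketch of the idea, not the study's analysis code), one can brute-force the optimal tour on a handful of cities and count how many edges of a candidate tour also appear in the optimal tour:

    # Hypothetical illustration: brute-force the optimal Euclidean TSP tour for a few
    # random cities and classify a candidate tour's edges as "good" (also present in
    # the optimal tour) or "bad" (absent from it).
    import itertools, math, random

    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(7)]

    def tour_length(order):
        return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def tour_edges(order):
        return {frozenset((order[i], order[(i + 1) % len(order)]))
                for i in range(len(order))}

    # Optimal tour by exhaustive search; city 0 is fixed to remove rotational symmetry.
    optimal = min(((0,) + p for p in itertools.permutations(range(1, len(cities)))),
                  key=tour_length)

    candidate = tuple(range(len(cities)))            # a naive tour in index order
    good = tour_edges(candidate) & tour_edges(optimal)
    bad = tour_edges(candidate) - tour_edges(optimal)
    print(f"good edges: {len(good)}, bad edges: {len(bad)}")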