    The intractability of resolution

    We prove that, for infinitely many disjunctive normal form propositional calculus tautologies ξ, the length of the shortest resolution proof of ξ cannot be bounded by any polynomial in the length of ξ. The tautologies we use were introduced by Cook and Reckhow (1979) and encode the pigeonhole principle. Extended resolution can furnish polynomial-length proofs of these formulas.
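
    To make the pigeonhole tautologies concrete, here is a minimal sketch of the standard CNF encoding of the negated pigeonhole principle PHP^{n+1}_n (n+1 pigeons, n holes), the unsatisfiable formula that a resolution proof of the tautology refutes. The variable numbering and DIMACS output below are illustrative choices, not necessarily the exact Cook and Reckhow formulation.

    # Sketch: generate the (unsatisfiable) pigeonhole CNF PHP^{n+1}_n in DIMACS format.
    # Variable x(i, j) means "pigeon i sits in hole j"; the indexing is an illustrative choice.
    def php_clauses(n):
        var = lambda i, j: (i - 1) * n + j               # 1-based variable index
        clauses = []
        for i in range(1, n + 2):                        # every pigeon occupies some hole
            clauses.append([var(i, j) for j in range(1, n + 1)])
        for j in range(1, n + 1):                        # no hole holds two pigeons
            for i in range(1, n + 2):
                for k in range(i + 1, n + 2):
                    clauses.append([-var(i, j), -var(k, j)])
        return clauses

    if __name__ == "__main__":
        n = 4
        cls = php_clauses(n)
        print(f"p cnf {(n + 1) * n} {len(cls)}")
        for c in cls:
            print(" ".join(map(str, c)), 0)

    For n = 4 this yields 5 wide "pigeon" clauses and 40 binary "hole" clauses; as stated above, resolution proofs of these tautologies require superpolynomial length, whereas extended resolution admits polynomial-length proofs.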

    Appeals to evidence for the resolution of wicked problems: the origins and mechanisms of evidentiary bias

    Wicked policy problems are often said to be characterized by their ‘intractability’, whereby appeals to evidence are unable to provide policy resolution. Advocates for ‘Evidence Based Policy’ (EBP) often lament these situations as representing the misuse of evidence for strategic ends, while critical policy studies authors counter that policy decisions are fundamentally about competing values, with the (blind) embrace of technical evidence depoliticizing political decisions. This paper aims to help resolve these conflicts and, in doing so, to consider how to address this particular feature of problem wickedness. Specifically, the paper delineates two forms of evidentiary bias that drive intractability, each of which is reflected by contrasting positions in the EBP debates: ‘technical bias’, referring to invalid uses of evidence, and ‘issue bias’, referring to how pieces of evidence direct policy agendas to particular concerns. Drawing on the fields of policy studies and cognitive psychology, the paper explores the ways in which competing interests and values manifest in these forms of bias and shape evidence utilization through different mechanisms. The paper presents a conceptual framework reflecting on how the nature of policy problems, in terms of their complexity, contestation, and polarization, can help identify the potential origins and mechanisms of evidentiary bias leading to intractability in some wicked policy debates. The discussion reflects on whether being better informed about such mechanisms permits future work that may lead to strategies to mitigate or overcome such intractability.

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
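
    As a rough illustration of what a distributional problem looks like, the sketch below pairs a decision problem (satisfiability) with a samplable input distribution, namely uniformly random 3-CNF formulas; random k-SAT is a commonly studied natural distributional problem of this kind, but the sampler, parameters, and random-assignment heuristic here are illustrative assumptions, not anything specific to this survey.

    import random

    # Sketch: a samplable distribution over 3-CNF instances (uniform random 3-SAT).
    # A distributional problem pairs a decision problem with such a sampler, and a
    # good-on-average algorithm is judged over inputs drawn from the sampler.
    def sample_3cnf(n_vars, n_clauses, rng=random):
        formula = []
        for _ in range(n_clauses):
            vs = rng.sample(range(1, n_vars + 1), 3)                 # three distinct variables
            formula.append([v if rng.random() < 0.5 else -v for v in vs])
        return formula

    def fraction_satisfied(formula, assignment):
        # assignment maps variable index -> bool
        sat = sum(any((lit > 0) == assignment[abs(lit)] for lit in clause) for clause in formula)
        return sat / len(formula)

    if __name__ == "__main__":
        f = sample_3cnf(n_vars=50, n_clauses=200)
        guess = {v: random.random() < 0.5 for v in range(1, 51)}
        print(fraction_satisfied(f, guess))                          # about 7/8 in expectation

    Different notions of good-on-average algorithms, as surveyed above, differ in what is required of a solver over inputs drawn from such a sampler.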

    Revisiting Shor's quantum algorithm for computing general discrete logarithms

    We heuristically demonstrate that Shor's algorithm for computing general discrete logarithms, modified to allow the semi-classical Fourier transform to be used with control qubit recycling, achieves a success probability of approximately 60% to 82% in a single run. By slightly increasing the number of group operations that are evaluated quantumly, and by performing a limited search in the classical post-processing, we furthermore show how the algorithm can be modified to achieve a success probability exceeding 99% in a single run. We provide concrete heuristic estimates of the success probability of the modified algorithm, as a function of the group order, the size of the search space in the classical post-processing, and the additional number of group operations evaluated quantumly. In analogy with our earlier works, we show how the modified quantum algorithm may be simulated classically when the logarithm and group order are both known. Furthermore, we show how slightly better tradeoffs may be achieved, compared to our earlier works, if the group order is known when computing the logarithm.

    Comment: The pre-print has been extended to show how slightly better tradeoffs may be achieved, compared to our earlier works, if the group order is known. A minor issue with an integration limit, which led us to give a rough success probability estimate of 60% to 70% as opposed to 60% to 82%, has been corrected. The heuristic and results reported in the original pre-print are otherwise unaffected.
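
    For context on the underlying problem only, here is a minimal classical baby-step giant-step solver for a discrete logarithm x = g^d in a cyclic group of known order r. It is unrelated to the quantum algorithm and post-processing described above, and the choice of group (the integers modulo 101) and all parameters are illustrative assumptions.

    from math import isqrt

    # Sketch: classical baby-step giant-step for the discrete logarithm x = g^d (mod p),
    # given the group order r. A classical baseline for context only; the group and
    # parameters below are illustrative assumptions.
    def bsgs(g, x, p, r):
        m = isqrt(r) + 1
        baby = {pow(g, j, p): j for j in range(m)}       # baby steps: g^j for j = 0..m-1
        factor = pow(pow(g, m, p), -1, p)                # g^(-m) mod p
        gamma = x
        for i in range(m):                               # giant steps: x * g^(-i*m)
            if gamma in baby:
                return i * m + baby[gamma]               # d = i*m + j
            gamma = gamma * factor % p
        return None

    if __name__ == "__main__":
        p, g, r = 101, 2, 100        # 2 generates the multiplicative group modulo 101
        d = 47
        x = pow(g, d, p)
        print(bsgs(g, x, p, r))      # recovers 47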

    Image recognition with an adiabatic quantum computer I. Mapping to quadratic unconstrained binary optimization

    Many artificial intelligence (AI) problems naturally map to NP-hard optimization problems. This has the interesting consequence that enabling human-level capability in machines often requires systems that can handle formally intractable problems. This issue can sometimes (but possibly not always) be resolved by building special-purpose heuristic algorithms, tailored to the problem in question. Because of the continued difficulties in automating certain tasks that are natural for humans, there remains a strong motivation for AI researchers to investigate and apply new algorithms and techniques to hard AI problems. Recently, a novel class of relevant algorithms that require quantum mechanical hardware has been proposed. These algorithms, referred to as quantum adiabatic algorithms, represent a new approach to designing both complete and heuristic solvers for NP-hard optimization problems. In this work we describe how to formulate image recognition, which is a canonical NP-hard AI problem, as a Quadratic Unconstrained Binary Optimization (QUBO) problem. The QUBO format corresponds to the input format required for D-Wave superconducting adiabatic quantum computing (AQC) processors.

    Comment: 7 pages, 3 figures
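
    To illustrate the QUBO input format itself, rather than the specific image-recognition mapping developed in the paper, the sketch below encodes a toy "pick exactly one of three items, maximizing value" task as a QUBO and minimizes it by brute force; the item values and penalty weight are illustrative assumptions.

    from itertools import product

    # Sketch: a toy QUBO, i.e. minimize x^T Q x over binary vectors x.
    # Illustrative task: pick exactly one of three items so as to maximize its value.
    # Objective: -sum(v_i * x_i) + P * (sum(x_i) - 1)^2, expanded into Q using x_i^2 = x_i.
    values = [3.0, 5.0, 2.0]          # assumed item values
    P = 10.0                          # penalty weight enforcing the one-hot constraint
    n = len(values)

    Q = {}
    for i in range(n):
        Q[(i, i)] = -values[i] - P    # linear terms live on the diagonal
        for j in range(i + 1, n):
            Q[(i, j)] = 2.0 * P       # pairwise penalty couplings

    def energy(x):
        return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

    best = min(product((0, 1), repeat=n), key=energy)
    print(best, energy(best) + P)     # + P restores the constant dropped from Q; expect (0, 1, 0)

    A QUBO in this form, a matrix (or dictionary) of linear and pairwise coefficients over binary variables, is the kind of input referred to above for adiabatic quantum optimization hardware.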