
    Hardness measures and resolution lower bounds

    Various "hardness" measures have been studied for resolution, providing theoretical insight into the proof complexity of resolution and its fragments, as well as explanations for the hardness of instances in SAT solving. In this report we aim at a unified view of a number of hardness measures, including different measures of width, space and size of resolution proofs. We also extend these measures to all clause-sets (possibly satisfiable).Comment: 43 pages, preliminary version (yet the application part is only sketched, with proofs missing

    Proof Complexity of Resolution-based QBF Calculi


    On extracting computations from propositional proofs (a survey)

    This paper describes a project that aims at showing that propositional proofs of certain tautologies in weak proof systems give upper bounds on the computational complexity of functions associated with the tautologies. Such bounds can potentially be used to prove (conditional or unconditional) lower bounds on the lengths of proofs of these tautologies and to show separations of some weak proof systems. The prototype is the set of results showing the feasible interpolation property for resolution. In order to prove similar results for systems stronger than resolution one needs to define suitable generalizations of Boolean circuits. We will survey the known results concerning this project and sketch in which direction we want to generalize them.
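
    For reference, the feasible interpolation property for resolution mentioned above can be stated roughly as follows (a hedged paraphrase of the standard formulation, not necessarily the exact statement used in the paper):

        If A(p, q) \wedge B(p, r) is unsatisfiable, where q and r are disjoint tuples of variables, and it has a
        resolution refutation with s clauses, then there is a Boolean circuit C(p) of size polynomial in s such that
        for every assignment a to p:\quad C(a) = 0 \;\Rightarrow\; A(a, q) \text{ is unsatisfiable}, \qquad
        C(a) = 1 \;\Rightarrow\; B(a, r) \text{ is unsatisfiable}.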

    Information in propositional proofs and algorithmic proof search

    We study from the proof complexity perspective the (informal) proof search problem: Is there an optimal way to search for propositional proofs? We note that for any fixed proof system there exists a time-optimal proof search algorithm. Using classical proof complexity results about reflection principles we prove that a time-optimal proof search algorithm exists w.r.t. all proof systems iff a p-optimal proof system exists. To characterize precisely the time proof search algorithms need for individual formulas we introduce a new proof complexity measure based on algorithmic information concepts. In particular, to a proof system $P$ we attach an information-efficiency function $i_P(\tau)$ assigning to a tautology a natural number, and we show that: (1) $i_P(\tau)$ characterizes the time any $P$-proof search algorithm has to use on $\tau$, and for a fixed $P$ there is such an information-optimal algorithm; (2) a proof system is information-efficiency optimal iff it is p-optimal; (3) for non-automatizable systems $P$ there are formulas $\tau$ with short proofs but large information measure $i_P(\tau)$. We isolate and motivate the problem of establishing unconditional super-logarithmic lower bounds for $i_P(\tau)$ where no super-polynomial size lower bounds are known. We also point out connections of the new measure with some topics in proof complexity other than proof search. Comment: Preliminary version February 202
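
    The existence of a time-optimal proof search algorithm for any fixed proof system rests on a Levin-style dovetailing argument. The Python sketch below only illustrates that idea over a finite list of candidate search strategies; the names `searchers` and `check` are hypothetical placeholders (a list of generator-valued strategies and a stand-in for $P$'s polynomial-time proof checker), not anything defined in the paper, and the real argument dovetails over all algorithms with the $i$-th one charged roughly a $2^{-i}$ share of the running time.

        def universal_search(tau, searchers, check):
            """Dovetail over candidate proof-search strategies for the formula tau.

            searchers: list of callables; searchers[i](tau) yields candidate proofs.
            check:     stand-in for P's proof checker; check(cand, tau) is True iff
                       cand is a valid P-proof of tau.
            """
            gens = [s(tau) for s in searchers]      # instantiate each strategy as a generator
            budget = 1
            while True:                             # keep doubling the per-round step budget
                for i, g in enumerate(gens):
                    steps = max(budget >> i, 1)     # strategy i gets roughly a 2^{-i} share
                    for _ in range(steps):
                        try:
                            cand = next(g)
                        except StopIteration:       # this strategy has nothing more to offer
                            break
                        if check(cand, tau):        # accept the first verified proof
                            return cand
                budget *= 2

    If some strategy in the list finds a verified proof within t of its own steps, this loop finds a proof after O(t) work times a constant factor depending only on that strategy's index, which is the sense in which the dovetailed procedure is time-optimal for the fixed system.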

    An Atypical Survey of Typical-Case Heuristic Algorithms

    Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice. Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News.

    Reasons for Hardness in QBF Proof Systems

    We aim to understand inherent reasons for lower bounds for QBF proof systems, and revisit and compare two previous approaches in this direction. The first of these relates size lower bounds for strong QBF Frege systems to circuit lower bounds via strategy extraction (Beyersdorff & Pich, LICS'16). Here we show a refined version of strategy extraction and thereby obtain, for any QBF proof system, a trichotomy for hardness: (1) via circuit lower bounds, (2) via propositional Resolution lower bounds, or (3) 'genuine' QBF lower bounds. The second approach tries to explain QBF lower bounds through quantifier alternations in a system called relaxing QU-Res (Chen, ICALP'16). We prove a strong lower bound for relaxing QU-Res, which also exhibits significant shortcomings of that model. Prompted by this we propose an alternative, improved version, allowing more flexible oracle queries in proofs. We show that lower bounds in our new model correspond to the trichotomy obtained via strategy extraction.
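
    For orientation, the strategy extraction property driving the first approach can be stated roughly as follows (a hedged paraphrase of the generic statement, not of the refined version proved here):

        From a refutation \pi of a false QBF \Phi = Q_1 x_1 \ldots Q_n x_n.\ \varphi in a QBF proof system with
        strategy extraction, one can extract circuits f_u of size \mathrm{poly}(|\pi|), one per universal variable u,
        where f_u reads only the variables quantified before u and the family (f_u)_u is a winning strategy for the
        universal player on \Phi.

    Hardness then transfers: a short refutation of a formula whose only winning universal strategies compute a hard function would yield small circuits for that function.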

    From average case complexity to improper learning complexity

    The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems. Moreover, the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of improper learning (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from $\mathbf{NP}$-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning. It was initiated in (Kearns and Valiant 89) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 02) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular: (1) learning $\mathrm{DNF}$s is hard; (2) agnostically learning halfspaces with a constant approximation ratio is hard; (3) learning an intersection of $\omega(1)$ halfspaces is hard. Comment: 34 pages.
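
    For context, Feige's assumption referenced above says, in one common refutation formulation (a hedged paraphrase; the paper puts forward a generalization to other random constraint satisfaction problems):

        For every sufficiently large constant \Delta there is no polynomial-time algorithm that, given a random
        3CNF formula with n variables and \Delta n clauses, outputs "unsatisfiable" with probability 1 - o(1)
        over the choice of the formula, yet never outputs "unsatisfiable" on a satisfiable formula.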