31 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Algorithms and Certificates for Boolean CSP Refutation: "Smoothed is no harder than Random"

    We present an algorithm for strongly refuting smoothed instances of all Boolean CSPs. The smoothed model is a hybrid between worst and average-case input models, where the input is an arbitrary instance of the CSP with only the negation patterns of the literals re-randomized with some small probability. For an $n$-variable smoothed instance of a $k$-arity CSP, our algorithm runs in $n^{O(\ell)}$ time, and succeeds with high probability in bounding the optimum fraction of satisfiable constraints away from $1$, provided that the number of constraints is at least $\tilde{O}(n) \cdot (\frac{n}{\ell})^{\frac{k}{2} - 1}$. This matches, up to polylogarithmic factors in $n$, the trade-off between running time and the number of constraints of the state-of-the-art algorithms for refuting fully random instances of CSPs [RRS17]. We also make a surprising new connection between our algorithm and even covers in hypergraphs, which we use to positively resolve Feige's 2008 conjecture, an extremal combinatorics conjecture on the existence of even covers in sufficiently dense hypergraphs that generalizes the well-known Moore bound for the girth of graphs. As a corollary, we show that polynomial-size refutation witnesses exist for arbitrary smoothed CSP instances with number of constraints a polynomial factor below the "spectral threshold" of $n^{k/2}$, extending the celebrated result for random 3-SAT of Feige, Kim and Ofek [FKO06].
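
    As a toy illustration of the combinatorial object behind Feige's conjecture, the sketch below searches a small hypergraph for an even cover, i.e. a non-empty set of hyperedges in which every vertex occurs an even number of times. The function name and the example edges are invented for illustration, and this is only a brute-force check on hand-made data; it is not the paper's refutation algorithm, which uses even covers to certify bounds on satisfiability.

```python
from itertools import combinations

def find_even_cover(n_vertices, hyperedges, max_size=None):
    """Return indices of a non-empty set of hyperedges in which every vertex
    occurs an even number of times, or None if no such set exists up to
    max_size.  Over GF(2) such a set is exactly a linear dependency among
    the edges' incidence vectors."""
    max_size = max_size or len(hyperedges)
    for size in range(1, max_size + 1):
        for subset in combinations(range(len(hyperedges)), size):
            parity = [0] * n_vertices
            for i in subset:
                for v in hyperedges[i]:
                    parity[v] ^= 1          # track each vertex's parity mod 2
            if not any(parity):
                return list(subset)
    return None

# Toy 3-uniform hypergraph on 6 vertices (made-up example).
edges = [{0, 1, 2}, {0, 1, 3}, {2, 4, 5}, {3, 4, 5}]
print(find_even_cover(6, edges))  # -> [0, 1, 2, 3]: every vertex is covered twice
```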

    Information in propositional proofs and algorithmic proof search

    We study from the proof complexity perspective the (informal) proof search problem: Is there an optimal way to search for propositional proofs? We note that for any fixed proof system there exists a time-optimal proof search algorithm. Using classical proof complexity results about reflection principles we prove that a time-optimal proof search algorithm exists w.r.t. all proof systems iff a p-optimal proof system exists. To characterize precisely the time proof search algorithms need for individual formulas, we introduce a new proof complexity measure based on algorithmic information concepts. In particular, to a proof system $P$ we attach an information-efficiency function $i_P(\tau)$ assigning to a tautology a natural number, and we show that: (i) $i_P(\tau)$ characterizes the time any $P$-proof search algorithm has to use on $\tau$, and for a fixed $P$ there is such an information-optimal algorithm; (ii) a proof system is information-efficiency optimal iff it is p-optimal; (iii) for non-automatizable systems $P$ there are formulas $\tau$ with short proofs but large information measure $i_P(\tau)$. We isolate and motivate the problem of establishing unconditional super-logarithmic lower bounds for $i_P(\tau)$ where no super-polynomial size lower bounds are known. We also point out connections of the new measure with some topics in proof complexity other than proof search. Comment: Preliminary version February 202
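
    To make the objects concrete, here is a minimal, hypothetical sketch (not from the paper) of the naive baseline for proof search: candidate proofs are enumerated in length-lexicographic order and checked by a proof-system verifier, so the time spent on a tautology is governed by the length of its shortest proof. The toy truth-table "proof system" and the function names are invented and stand in for an arbitrary polynomial-time verifier; the time-optimal and information-optimal algorithms discussed in the abstract are far more refined than this.

```python
from itertools import product

def search_proof(tau, verify, max_len=8, alphabet="01"):
    """Enumerate candidate proofs in length-lexicographic order and return the
    first string pi with verify(tau, pi) == True, or None up to max_len."""
    for length in range(1, max_len + 1):
        for chars in product(alphabet, repeat=length):
            pi = "".join(chars)
            if verify(tau, pi):
                return pi
    return None

# Toy "truth-table" proof system: a formula over k variables is given as a
# Python predicate, and the only valid proof of a tautology is the string of
# its 2^k truth values, all of which must be '1'.
def tt_verify(formula, pi):
    fn, k = formula
    rows = [fn(*bits) for bits in product([0, 1], repeat=k)]
    return all(rows) and pi == "1" * len(rows)

tau = ((lambda x, y: (not x) or x), 2)   # "x -> x", a tautology in 2 variables
print(search_proof(tau, tt_verify))      # -> '1111'
```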

    CSP-Completeness And Its Applications

    We build off of previous ideas used to study both reductions between CSP-refutation problems and improper learning, and reductions between CSP-refutation problems themselves, to expand some hardness results that rest on the assumption that refuting random CSP instances is hard for certain choices of predicates (such as k-SAT). First, we argue the hardness of the fundamental problem of learning conjunctions in a one-sided PAC-style learning model that has appeared in several forms over the years. In this model we focus on producing a hypothesis that foremost guarantees a small false-positive rate while minimizing the false-negative rate among such hypotheses. Further, we formalize a notion of CSP-refutation reductions and CSP-refutation completeness, and use these, along with candidate CSP-refutation-complete predicates, to provide further evidence for the hardness of several problems.
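
    For context, the classical elimination learner for conjunctions already has the one-sided character described above: it keeps every literal consistent with all positive examples, so its hypothesis is at least as specific as the target conjunction and never produces a false positive, while all of its errors are false negatives. The sketch below is this textbook algorithm on invented toy data, not the reductions or hardness arguments of the thesis.

```python
def learn_conjunction(positives, n):
    """Keep every literal (variable index, required value) consistent with all
    positive examples; the resulting conjunction is at least as specific as
    the target, so it never fires on a point the target labels negative."""
    literals = {(i, b) for i in range(n) for b in (0, 1)}
    for x in positives:
        literals = {(i, b) for (i, b) in literals if x[i] == b}
    return sorted(literals)

def predict(literals, x):
    return all(x[i] == b for (i, b) in literals)

# Hypothetical target: x0 AND NOT x2 over 4 variables; two positive examples.
pos = [(1, 0, 0, 1), (1, 0, 0, 0)]
h = learn_conjunction(pos, 4)
print(h)                          # [(0, 1), (1, 0), (2, 0)] -- stricter than the target
print(predict(h, (1, 1, 0, 0)))   # False: a false negative, which this model tolerates
print(predict(h, (0, 1, 1, 0)))   # False: correctly rejected; no false positives arise
```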

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades, with numerous ideas in common but little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
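
    As a rough illustration of the justification-based side of this picture, the sketch below computes, for a toy propositional knowledge base, the minimal subsets (justifications) that entail an unwanted consequence. Classical repairs break each justification by deleting a sentence, whereas the gentle repairs and pseudo-contractions studied in the paper weaken a sentence instead. The knowledge base, variable names, and helper functions are invented for illustration, and entailment is checked by brute force over truth assignments.

```python
from itertools import combinations, product

VARS = ["bird", "penguin", "flies"]

def entails(sentences, goal):
    """Check KB |= goal by enumerating all truth assignments over VARS."""
    for values in product([False, True], repeat=len(VARS)):
        world = dict(zip(VARS, values))
        if all(s(world) for s in sentences) and not goal(world):
            return False
    return True

def justifications(kb, goal):
    """All minimal subsets of kb that entail goal (checked in size order)."""
    found = []
    for size in range(1, len(kb) + 1):
        for subset in combinations(kb, size):
            if entails(list(subset), goal) and \
               not any(set(j) <= set(subset) for j in found):
                found.append(subset)
    return found

kb = [
    lambda w: w["penguin"],                        # penguin
    lambda w: (not w["penguin"]) or w["bird"],     # penguin -> bird
    lambda w: (not w["bird"]) or w["flies"],       # bird -> flies
]
unwanted = lambda w: w["flies"]                    # we do not want KB |= flies
for j in justifications(kb, unwanted):
    print(len(j), "sentences jointly entail 'flies'")
# Classical repair deletes one sentence per justification; a gentle repair
# would instead weaken one, e.g. "bird -> flies" into "bird AND NOT penguin -> flies".
```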

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany


    Computer Science Logic 2018: CSL 2018, September 4-8, 2018, Birmingham, United Kingdom
