LIPIcs, Volume 251, ITCS 2023, Complete Volume
Algorithms and Certificates for Boolean CSP Refutation: "Smoothed is no harder than Random"
We present an algorithm for strongly refuting smoothed instances of all
Boolean CSPs. The smoothed model is a hybrid between worst-case and average-case
input models, where the input is an arbitrary instance of the CSP with only the
negation patterns of the literals re-randomized with some small probability.
For an n-variable smoothed instance of a k-arity CSP, our algorithm succeeds
with high probability in bounding the optimum fraction of satisfiable
constraints away from 1, and its trade-off between running time and the
number of constraints required matches, up to polylogarithmic factors in n,
that of the state-of-the-art algorithms for refuting fully random instances
of CSPs [RRS17].
We also make a surprising new connection between our algorithm and even
covers in hypergraphs, which we use to positively resolve Feige's 2008
conjecture, an extremal combinatorics conjecture on the existence of even
covers in sufficiently dense hypergraphs that generalizes the well-known Moore
bound for the girth of graphs. As a corollary, we show that polynomial-size
refutation witnesses exist for arbitrary smoothed CSP instances with the number
of constraints a polynomial factor below the "spectral threshold" of roughly
n^{k/2}, extending the celebrated result for random 3-SAT of Feige, Kim and
Ofek [FKO06].
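As a toy illustration of the combinatorial object involved: an even cover is a nonempty subcollection of hyperedges in which every vertex occurs an even number of times (for ordinary graphs this is a union of cycles, which is why the Moore bound for girth appears). The brute-force checker below is an invented sketch for small instances, not the paper's algorithm:

```python
from itertools import combinations

def find_even_cover(edges):
    """Brute-force search for an even cover: a nonempty subset of
    hyperedges in which every vertex occurs an even number of times.
    `edges` is a list of vertex tuples.  Exponential in len(edges);
    a toy illustration only."""
    for size in range(1, len(edges) + 1):
        for subset in combinations(range(len(edges)), size):
            counts = {}
            for i in subset:
                for v in edges[i]:
                    counts[v] = counts.get(v, 0) + 1
            if all(c % 2 == 0 for c in counts.values()):
                return [edges[i] for i in subset]
    return None  # no even cover exists

# Two copies of the same 3-uniform edge form the smallest even cover.
edges = [(1, 2, 3), (1, 2, 4), (3, 4, 5), (1, 2, 3)]
cover = find_even_cover(edges)
```

Over GF(2) an even cover is exactly a linear dependency among the edges' indicator vectors, which is what connects their existence to the density thresholds in the conjecture.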
Information in propositional proofs and algorithmic proof search
We study from the proof complexity perspective the (informal) proof search
problem:
Is there an optimal way to search for propositional proofs?
We note that for any fixed proof system there exists a time-optimal proof
search algorithm. Using classical proof complexity results about reflection
principles we prove that a time-optimal proof search algorithm exists w.r.t.
all proof systems iff a p-optimal proof system exists.
To characterize precisely the time proof search algorithms need for
individual formulas we introduce a new proof complexity measure based on
algorithmic information concepts. In particular, to a proof system P we
attach an {\bf information-efficiency function} i_P assigning to each
tautology τ a natural number i_P(τ), and we show that:
- i_P(τ) characterizes the time any P-proof search algorithm has to use on τ,
and that for a fixed P there is such an information-optimal algorithm,
- a proof system P is information-efficiency optimal iff it is p-optimal,
- for non-automatizable systems P there are formulas τ with short P-proofs
but large information measure i_P(τ).
We isolate and motivate the problem to establish {\em unconditional}
super-logarithmic lower bounds for i_P(τ) where no super-polynomial size
lower bounds are known. We also point out connections of the new measure with
some topics in proof complexity other than proof search.
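The existence claim for a time-optimal proof search algorithm rests on a dovetailing idea in the style of Levin's universal search: interleave candidate procedures so that the total work is within a bounded factor (here, the number of procedures) of the fastest one. The sketch below is a finite toy with invented checkers, not the construction from the paper:

```python
from itertools import product

def brute_search(check, alphabet="01"):
    """Enumerate candidate proofs in length-lexicographic order,
    yielding None while working and the proof once `check` accepts it."""
    length = 0
    while True:
        for cand in product(alphabet, repeat=length):
            pi = "".join(cand)
            if check(pi):
                yield pi
                return
            yield None
        length += 1

def dovetail(procs):
    """Round-robin over search procedures; return the first proof found.
    Total steps are at most len(procs) times those of the fastest one."""
    active = list(procs)
    while active:
        for gen in list(active):
            try:
                result = next(gen)
            except StopIteration:
                active.remove(gen)
                continue
            if result is not None:
                return result
    return None  # every procedure exhausted without a proof

# A slow brute-force searcher racing a procedure that already "knows" a proof.
found = dovetail([brute_search(lambda p: p == "111"),
                  iter(["certificate"])])
```

The real result quantifies over all algorithms rather than a finite list, which is where reflection principles and p-optimality enter.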
CSP-Completeness And Its Applications
We build on previous ideas used to study reductions between CSP-refutation problems and improper learning, and between CSP-refutation problems themselves, to expand several hardness results that depend on the assumption that refuting random CSP instances is hard for certain choices of predicates (such as k-SAT). First, we argue the hardness of the fundamental problem of learning conjunctions in a one-sided PAC-style learning model that has appeared in several forms over the years. In this model we focus on producing a hypothesis that foremost guarantees a small false-positive rate while minimizing the false-negative rate. Further, we formalize a notion of CSP-refutation reductions and CSP-refutation completeness, and use these, along with candidate CSP-refutation-complete predicates, to provide further evidence for the hardness of several problems.
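The one-sided guarantee described here is reminiscent of the classic elimination algorithm for conjunctions, which keeps every literal consistent with the positive examples and therefore never predicts a false positive on a realizable, noise-free target. A minimal sketch of that textbook learner (not the paper's model, whose details differ):

```python
def learn_conjunction(positives, n):
    """Elimination learner: keep every literal consistent with all
    positive examples.  Literal (i, True) means x_i; (i, False) means
    not x_i.  Against a realizable, noise-free conjunction target, the
    learned hypothesis never produces a false positive."""
    lits = {(i, b) for i in range(n) for b in (True, False)}
    for x in positives:
        lits = {(i, b) for (i, b) in lits if x[i] == b}
    return lits

def predict(lits, x):
    """Predict positive iff every surviving literal is satisfied."""
    return all(x[i] == b for (i, b) in lits)

# Target x0 AND NOT x2 over 3 variables; two positive examples.
hyp = learn_conjunction([(True, True, False), (True, False, False)], 3)
```

Because the target's own literals are never eliminated, the hypothesis implies the target, giving the no-false-positive side for free; the hard part, as the abstract notes, is controlling the false-negative rate.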
Pseudo-contractions as Gentle Repairs
Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed in the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
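A minimal propositional caricature of the deletion-versus-weakening distinction (the clause encoding and entailment check are invented for illustration): to stop a knowledge base from entailing x2, instead of deleting the clause (not x1 or x2) we weaken it to (not x1 or x2 or x3), which breaks the unwanted entailment while still entailing x2-or-x3, so less information is lost than by outright deletion.

```python
from itertools import product

def models(clauses, n):
    """Yield all assignments over variables 1..n satisfying every
    clause.  A clause is a set of nonzero ints; -i means 'not x_i'."""
    for bits in product([False, True], repeat=n):
        val = {i + 1: bits[i] for i in range(n)}
        if all(any(val[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield val

def entails(clauses, goal, n):
    """Does every model of `clauses` satisfy the goal clause?"""
    return all(any(val[abs(l)] == (l > 0) for l in goal)
               for val in models(clauses, n))

kb = [{1}, {-1, 2}]            # x1, and x1 -> x2: entails x2
weakened = [{1}, {-1, 2, 3}]   # gentle repair: weaken instead of delete
```

Real pseudo-contraction and gentle-repair operators work over description-logic ontologies with far subtler weakening choices, but the preservation intuition is the same.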