
    MaxSAT Resolution and Subcube Sums

    We study the MaxRes rule in the context of certifying unsatisfiability. We show that it can be exponentially more powerful than tree-like resolution, and, when augmented with weakening (the system MaxResW), p-simulates tree-like resolution. In devising a lower bound technique specific to MaxRes (and not merely inheriting lower bounds from Res), we define a new proof system called the SubCubeSums proof system. This system, which p-simulates MaxResW, can be viewed as a special case of the semialgebraic Sherali-Adams proof system. In expressivity, it is the integral restriction of conical juntas studied in the contexts of communication complexity and extension complexity. We show that it is not simulated by Res. Using a proof technique qualitatively different from the lower bounds that MaxResW inherits from Res, we show that Tseitin contradictions on expander graphs are hard to refute in SubCubeSums. We also establish a lower bound technique via lifting: for formulas requiring large degree in SubCubeSums, their XOR-ification requires large size in SubCubeSums.
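    As a rough illustration of the inference step discussed above, the following Python sketch performs one MaxSAT-resolution step on clauses given as sets of integer literals (positive/negative ints). The function name and data representation are ours, not the paper's, and tautological compensation clauses are not filtered out.

```python
def maxres_step(c1, c2, pivot):
    """One MaxSAT-resolution step on clauses c1 (contains pivot) and
    c2 (contains -pivot), given as sets of integer literals.
    Returns the resolvent plus the 'compensation' clauses; in MaxRes
    the two premises are consumed (removed from the clause multiset)."""
    a = sorted(c1 - {pivot})        # side literals of c1
    b = sorted(c2 - {-pivot})       # side literals of c2
    resolvent = frozenset(a) | frozenset(b)
    extras = []
    # x v A v b1..b_{i-1} v -b_i  for each side literal b_i of c2
    for i, lit in enumerate(b):
        extras.append(frozenset({pivot, *a, *b[:i], -lit}))
    # -x v B v a1..a_{i-1} v -a_i  for each side literal a_i of c1
    for i, lit in enumerate(a):
        extras.append(frozenset({-pivot, *b, *a[:i], -lit}))
    # tautological compensation clauses would normally be dropped; omitted here
    return resolvent, extras

# Example: resolve (x1 v x2) with (-x1 v x3) on pivot x1.
res, extras = maxres_step({1, 2}, {-1, 3}, 1)
print(sorted(res))                  # [2, 3]
print([sorted(c) for c in extras])  # compensation clauses
```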

    Resolution over Linear Equations and Multilinear Proofs

    We develop and study the complexity of propositional proof systems of varying strength that extend resolution by allowing it to operate with disjunctions of linear equations instead of clauses. We demonstrate polynomial-size refutations for hard tautologies like the pigeonhole principle, Tseitin graph tautologies and the clique-coloring tautologies in these proof systems. Using the technique of (monotone) interpolation via communication games, we establish an exponential-size lower bound on refutations in a certain, considerably strong, fragment of resolution over linear equations, as well as a general polynomial upper bound on (non-monotone) interpolants in this fragment. We then apply these results to extend and improve previous results on multilinear proofs (over fields of characteristic 0), as studied in [RazTzameret06]. Specifically, we show the following: 1. Proofs operating with depth-3 multilinear formulas polynomially simulate a certain, considerably strong, fragment of resolution over linear equations. 2. Proofs operating with depth-3 multilinear formulas admit polynomial-size refutations of the pigeonhole principle and Tseitin graph tautologies. The former improves over a previous result that established small multilinear proofs only for the functional pigeonhole principle. The latter are different from previous proofs, and apply to multilinear proofs of Tseitin mod p graph tautologies over any field of characteristic 0. We conclude by connecting resolution over linear equations with extensions of the cutting planes proof system. Comment: 44 pages.
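    The pigeonhole principle mentioned above is the standard benchmark here; for a concrete reference point, the short sketch below generates the clauses of PHP^{n+1}_n in DIMACS-style integer literals. This is a textbook encoding, not code from the paper, and the variable numbering is our own choice.

```python
from itertools import combinations

def php_clauses(n):
    """CNF clauses of the pigeonhole principle PHP^{n+1}_n:
    n+1 pigeons, n holes; variable var(i, j) is true iff pigeon i
    sits in hole j.  The formula is unsatisfiable for every n >= 1."""
    def var(i, j):                 # 1-based DIMACS-style numbering
        return (i - 1) * n + j
    clauses = []
    # every pigeon sits in some hole
    for i in range(1, n + 2):
        clauses.append([var(i, j) for j in range(1, n + 1)])
    # no two pigeons share a hole
    for j in range(1, n + 1):
        for i1, i2 in combinations(range(1, n + 2), 2):
            clauses.append([-var(i1, j), -var(i2, j)])
    return clauses

print(len(php_clauses(3)))   # 4 "pigeon" clauses + 3*C(4,2) = 22 clauses
```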

    Identifying Security-Critical Cyber-Physical Components in Industrial Control Systems

    In recent years, Industrial Control Systems (ICS) have become an appealing target for cyber attacks with potentially massive destructive consequences. Security metrics are therefore essential to assess their security posture. In this paper, we present a novel ICS security metric based on AND/OR graphs that represent cyber-physical dependencies among network components. Our metric efficiently identifies sets of critical cyber-physical components, of minimal cost to an attacker, such that if they are compromised, the system enters a non-operational state. We address this problem by transforming the input AND/OR graph-based model into a weighted logical formula that is then used to build and solve a Weighted Partial MAX-SAT problem. Our tool, META4ICS, leverages state-of-the-art techniques from the field of logical satisfiability optimisation in order to achieve efficient computation times. Our experimental results indicate that the proposed security metric scales to networks with thousands of nodes and can be computed in seconds. In addition, we present a case study in which we use our system to analyse the security posture of a realistic water transport network. We discuss our findings on the plant as well as further security applications of our metric. Comment: Keywords: Security metrics, industrial control systems, cyber-physical systems, AND-OR graphs, MAX-SAT resolution.
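    To make the Weighted Partial MAX-SAT formulation concrete, here is a tiny brute-force sketch of the underlying optimisation problem. It is our own illustration under simplified assumptions, not the META4ICS implementation (which relies on a dedicated MAX-SAT solver): hard clauses encode the cyber-physical dependencies plus the non-operational goal state, and soft unit clauses carry the attacker's cost of compromising each component.

```python
from itertools import product

def min_cost_compromise(n_vars, hard, soft):
    """Tiny brute-force Weighted Partial MAX-SAT solver (exponential;
    for illustration only).  'hard' is a list of clauses that must hold,
    'soft' is a list of (clause, weight) pairs; returns the assignment
    satisfying all hard clauses with minimum total weight of falsified
    soft clauses, or None if the hard part is unsatisfiable."""
    def sat(clause, assignment):
        return any(assignment[abs(l)] == (l > 0) for l in clause)
    best = None
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if not all(sat(c, assignment) for c in hard):
            continue
        cost = sum(w for c, w in soft if not sat(c, assignment))
        if best is None or cost < best[0]:
            best = (cost, assignment)
    return best

# Toy dependency: the plant (variable 3) is operational iff components 1 and 2
# are both up; the attacker wants the plant down at minimum cost.
hard = [[-3, 1], [-3, 2], [3, -1, -2],  # plant (3) is up iff parts 1 and 2 are up
        [-3]]                           # attacker goal: plant is non-operational
soft = [([1], 5), ([2], 2)]             # weights = cost of compromising parts 1, 2
print(min_cost_compromise(3, hard, soft))  # -> (2, {1: True, 2: False, 3: False})
```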

    On SAT representations of XOR constraints

    We study the representation of systems S of linear equations over the two-element field (also known as XOR- or parity-constraints) via conjunctive normal forms F (boolean clause-sets). First we consider the problem of finding an "arc-consistent" representation ("AC"), meaning that unit-clause propagation will fix all forced assignments for all possible instantiations of the XOR-variables. Our main negative result is that there is no polysize AC-representation in general. On the positive side we show that finding such an AC-representation is fixed-parameter tractable (fpt) in the number of equations. Then we turn to a stronger criterion of representation, namely propagation completeness ("PC"): while AC only covers the variables of S, for PC all the variables in F (the variables of S plus auxiliary variables) are considered. We show that the standard translation actually yields a PC representation for one equation, but fails to do so for two equations (in fact arbitrarily badly). We show that with a more intelligent translation we can also easily compute a translation to PC for two equations. We conjecture that computing a representation in PC is fpt in the number of equations. Comment: 39 pages; 2nd v. improved handling of acyclic systems, free-standing proof of the transformation from AC-representations to monotone circuits, improved wording and literature review; 3rd v. updated literature, strengthened treatment of monotonisation, improved discussions; 4th v. update of literature, discussions and formulations, more details and examples; conference v. to appear at LATA 201
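    For orientation, the "standard translation" referred to above can be sketched as follows: a long XOR constraint is chained into 3-variable XORs via fresh auxiliary variables, and each short XOR is expanded into its full set of clauses. This is our own minimal rendering of the folklore encoding; the function names and variable numbering are assumptions, not code from the paper.

```python
from itertools import combinations

def xor_to_cnf(xs, parity):
    """Direct CNF translation of the XOR constraint x1 + ... + xm = parity
    (mod 2) over variables xs (positive ints): one clause per forbidden
    assignment, 2^(m-1) clauses in total."""
    clauses = []
    for k in range(len(xs) + 1):
        # a clause with k negated literals forbids the assignment setting
        # exactly those k variables to true; forbid the wrong-parity ones
        if k % 2 != parity:
            for neg in combinations(xs, k):
                clauses.append([-x if x in neg else x for x in xs])
    return clauses

def xor_split(xs, parity, next_aux):
    """Standard translation: chain a long XOR into 3-variable XORs using
    fresh auxiliary variables (next_aux, next_aux+1, ...), then expand each
    short XOR directly.  Size grows linearly instead of exponentially."""
    clauses, xs = [], list(xs)
    while len(xs) > 3:
        aux = next_aux
        next_aux += 1
        clauses += xor_to_cnf([xs[0], xs[1], aux], 0)   # aux = x1 + x2 (mod 2)
        xs = [aux] + xs[2:]                              # aux replaces x1, x2
    clauses += xor_to_cnf(xs, parity)
    return clauses

print(xor_to_cnf([1, 2], 1))                  # [[1, 2], [-1, -2]]
print(len(xor_split([1, 2, 3, 4, 5], 1, 6)))  # 3 blocks of 4 clauses = 12
```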

    From Small Space to Small Width in Resolution

    In 2003, Atserias and Dalmau resolved a major open question about the resolution proof system by establishing that the space complexity of CNF formulas is always an upper bound on the width needed to refute them. Their proof is beautiful but somewhat mysterious in that it relies heavily on tools from finite model theory. We give an alternative, completely elementary proof that works by simple syntactic manipulations of resolution refutations. As a by-product, we develop a "black-box" technique for proving space lower bounds via a "static" complexity measure that works against any resolution refutation, whereas previous techniques have been inherently adaptive. We conclude by showing that the related question for polynomial calculus (i.e., whether space is an upper bound on degree) seems unlikely to be resolvable by similar methods.
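    To fix the measures involved: the width of a refutation is the size of its largest clause, and its size is the number of clauses it contains. The toy checker below is our own illustration (unrelated to the paper's proofs); it verifies a resolution refutation given as axioms plus resolution steps and reports both quantities.

```python
def check_refutation(axioms, steps):
    """Verify a resolution refutation and report its size and width.
    'axioms' is a list of clauses (sets of int literals); 'steps' is a list
    of (i, j, pivot) triples meaning: resolve clause i and clause j on
    'pivot'.  Width = size of the largest clause appearing in the proof."""
    clauses = [frozenset(c) for c in axioms]
    for i, j, pivot in steps:
        c1, c2 = clauses[i], clauses[j]
        assert pivot in c1 and -pivot in c2, "pivot must occur pos./neg."
        clauses.append((c1 - {pivot}) | (c2 - {-pivot}))
    assert clauses[-1] == frozenset(), "last clause must be empty"
    return {"size": len(clauses), "width": max(len(c) for c in clauses)}

# Refute (x v y), (-x v y), (-y): resolve 0,1 on x to get y, then 3,2 on y.
print(check_refutation([{1, 2}, {-1, 2}, {-2}], [(0, 1, 1), (3, 2, 2)]))
# {'size': 5, 'width': 2}
```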

    Hardness measures and resolution lower bounds

    Various "hardness" measures have been studied for resolution, providing theoretical insight into the proof complexity of resolution and its fragments, as well as explanations for the hardness of instances in SAT solving. In this report we aim at a unified view of a number of hardness measures, including different measures of width, space and size of resolution proofs. We also extend these measures to all clause-sets (possibly satisfiable). Comment: 43 pages, preliminary version (the application part is only sketched, with proofs missing).

    Lazy Model Expansion: Interleaving Grounding with Search

    Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach to bounded model generation for rich knowledge representation languages, like ASP, FO(.) and Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory. An important bottleneck is the blowup in the size of the theory caused by the reduction phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(.) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split into a part to be grounded and a part that can still be justified. The theoretical framework captures existing approaches for tackling the grounding bottleneck, such as lazy clause generation and grounding-on-the-fly, and presents a generalization of the two-watched-literal scheme. We present an algorithm for lazy model expansion and integrate it in a model generator for FO(ID), a language extending first-order logic with inductive definitions. The algorithm is implemented as part of the state-of-the-art FO(ID) Knowledge-Base System IDP. Experimental results illustrate the power and generality of the approach.
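    The control loop described above can be rendered schematically as follows. This is our own rough sketch of the idea, not the IDP implementation; the callback names, their behaviour, and the assumption that a justification object exposes a valid(assignment) check are all ours.

```python
def lazy_model_expansion(ground_part, delayed, solve, find_justification, ground):
    """Schematic lazy-grounding loop.  'ground_part' is a list of ground
    constraints; 'delayed' maps each non-grounded formula to a currently
    valid justification; 'solve', 'find_justification' and 'ground' are
    caller-supplied callbacks (search on the ground part, derive a new
    justification, split/ground a formula, respectively)."""
    while True:
        assignment = solve(ground_part)          # search on the grounded part
        if assignment is None:
            return None                          # grounded part unsatisfiable
        broken = [f for f, j in delayed.items() if not j.valid(assignment)]
        if not broken:
            # the justifications extend 'assignment' to a model of the full theory
            return assignment
        for f in broken:
            new_j = find_justification(f, assignment)
            if new_j is not None:
                delayed[f] = new_j               # repair the justification
            else:
                # split f: ground one part, keep a justifiable remainder (if any)
                grounded_clauses, remainder = ground(f, assignment)
                ground_part += grounded_clauses
                if remainder is None:
                    del delayed[f]
                else:
                    delayed[f] = remainder
```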