
    On SAT representations of XOR constraints

    We study the representation of systems S of linear equations over the two-element field (also known as XOR or parity constraints) via conjunctive normal forms F (boolean clause-sets). First we consider the problem of finding an "arc-consistent" representation ("AC"), meaning that unit-clause propagation fixes all forced assignments for all possible instantiations of the XOR variables. Our main negative result is that there is no polynomial-size AC representation in general. On the positive side, we show that finding such an AC representation is fixed-parameter tractable (fpt) in the number of equations. We then turn to a stronger criterion of representation, namely propagation completeness ("PC"): while AC covers only the variables of S, PC considers all the variables in F (the variables of S plus the auxiliary variables). We show that the standard translation yields a PC representation for one equation, but fails to do so for two equations (in fact, arbitrarily badly). We show that a more intelligent translation easily yields a PC representation for two equations as well. We conjecture that computing a representation in PC is fpt in the number of equations.
    Comment: 39 pages; 2nd version: improved handling of acyclic systems, free-standing proof of the transformation from AC representations to monotone circuits, improved wording and literature review; 3rd version: updated literature, strengthened treatment of monotonisation, improved discussions; 4th version: updated literature, discussions and formulations, more details and examples; conference version to appear at LATA 2014
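    To make the encoding question concrete, the following hypothetical Python sketch (ours, not the paper's) generates the direct CNF encoding of a single XOR constraint: one clause per parity-violating assignment, hence 2^(k-1) clauses for k variables, which is why long XORs are normally split into short chunks over auxiliary variables before translation.

        from itertools import product

        def xor_to_cnf(variables, parity):
            # Direct encoding of x1 XOR ... XOR xk = parity: forbid each
            # violating assignment with one clause. Variables are positive
            # DIMACS integers; a clause is a list of signed integers.
            clauses = []
            for bits in product([0, 1], repeat=len(variables)):
                if sum(bits) % 2 != parity:  # assignment violates the XOR
                    # at least one variable must take the opposite value
                    clauses.append([v if b == 0 else -v
                                    for v, b in zip(variables, bits)])
            return clauses

        # x1 XOR x2 XOR x3 = 1: four clauses ruling out the even-parity points
        print(xor_to_cnf([1, 2, 3], 1))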

    Constraint-Driven Fault Diagnosis

    Constraint-Driven Fault Diagnosis (CDD) is based on the concept of constraint suspension [6], which was proposed as an approach to fault detection and diagnosis. In this chapter, its capabilities are demonstrated by describing how it might be applied to hardware systems. Under this view, a model-based fault diagnosis problem can be cast as a Constraint Satisfaction Problem (CSP) in order to detect any unexpected behavior, and as a Constraint Satisfaction Optimization Problem (COP) in order to identify the reason for the unexpected behavior, since the parsimony principle is taken into account.
    Funding: Ministerio de Ciencia y Tecnología TIN2015-63502-C3-2-
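    A minimal sketch of this CSP/COP view, with a hypothetical two-gate circuit of our own (component names and observations are illustrative, not taken from the chapter): each component gets a health variable, healthy components must obey their model, and parsimony prefers diagnoses with the fewest faults.

        from itertools import product

        # Hypothetical circuit: out1 = a AND b, out2 = out1 OR c.
        # ok1/ok2 are health variables; a faulty gate is unconstrained.
        def consistent(ok1, ok2, a, b, c, out1, out2):
            if ok1 and out1 != (a and b):
                return False
            if ok2 and out2 != (out1 or c):
                return False
            return True

        obs = dict(a=1, b=1, c=0, out2=0)  # out2 should be 1: misbehavior detected

        # CSP view: enumerate health assignments consistent with the observations
        diagnoses = set()
        for ok1, ok2, out1 in product([0, 1], repeat=3):  # out1 is unobserved
            if consistent(ok1, ok2, obs['a'], obs['b'], obs['c'],
                          out1, obs['out2']):
                diagnoses.add((ok1, ok2))

        # COP view: the parsimony principle keeps diagnoses with fewest faults
        fewest = min(d.count(0) for d in diagnoses)
        print([d for d in diagnoses if d.count(0) == fewest])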

    Inconsistency Measurement based on Variables in Minimal Unsatisfiable Subsets

    Measuring inconsistency degrees of knowledge bases (KBs) provides important context information for facilitating inconsistency handling. Several semantic and syntax based measures have been proposed separately. In this paper, we propose a new way to define inconsistency measurements by combining semantic and syntax based approaches. It is based on counting the variables of minimal unsatisfiable subsets (MUSes) and minimal correction subsets (MCSes), which leads to two equivalent inconsistency degrees, named IDMUS and IDMCS. We give theoretical and experimental comparisons between them and two purely semantic-based inconsistency degrees: the 4-valued and the Quasi Classical semantics based inconsistency degrees. Moreover, the computational complexities related to our new inconsistency measurements are studied. As it turns out that computing the exact inconsistency degrees is intractable in general, we then propose and evaluate an anytime algorithm to make IDMUS and IDMCS usable in knowledge management applications. In particular, while most syntax based measures tend to be difficult to compute in practice due to the exponential number of MUSes, our new inconsistency measures are practical because the number of variables in MUSes is often limited or easy to approximate. We evaluate our approach on the DC benchmark. Our encouraging experimental results show that these new inconsistency measurements, or their approximations, are efficient at handling large knowledge bases and at better distinguishing inconsistent knowledge bases.
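    A brute-force toy illustrating the variable-counting idea (our own sketch under assumed naming, not the paper's anytime algorithm, which avoids exhaustive MUS enumeration): enumerate the MUSes of a small clause-level KB and report the fraction of variables they touch.

        from itertools import combinations, product

        def satisfiable(clauses, n_vars):
            # Brute-force SAT test (toy KBs only); literals are signed integers.
            return any(all(any(bits[abs(l) - 1] == (l > 0) for l in c)
                           for c in clauses)
                       for bits in product([False, True], repeat=n_vars))

        def mus_variables(clauses, n_vars):
            # Enumerate MUSes by increasing size: an unsatisfiable subset
            # that contains no smaller MUS is itself minimal.
            muses = []
            for k in range(1, len(clauses) + 1):
                for idx in combinations(range(len(clauses)), k):
                    if any(set(m) <= set(idx) for m in muses):
                        continue  # contains a known MUS, hence not minimal
                    if not satisfiable([clauses[i] for i in idx], n_vars):
                        muses.append(idx)
            return {abs(l) for m in muses for i in m for l in clauses[i]}

        # KB {x1, not x1, x1 or x2}: the single MUS {x1, not x1} touches one
        # of the two variables, giving 0.5 in the spirit of IDMUS.
        kb = [[1], [-1], [1, 2]]
        print(len(mus_variables(kb, 2)) / 2)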

    Towards Large-scale Inconsistency Measurement

    We investigate the problem of inconsistency measurement on large knowledge bases by considering stream-based inconsistency measurement, i.e., we investigate inconsistency measures that cannot consider a knowledge base as a whole but must process it within a stream. To that end we present, first, a novel inconsistency measure that is apt to be applied to the streaming case and, second, stream-based approximations for the new and some existing inconsistency measures. We conduct an extensive empirical analysis of the behavior of these inconsistency measures on large knowledge bases, in terms of runtime, accuracy, and scalability. We conclude that for two of these measures, the approximation of the new inconsistency measure and an approximation of the contension inconsistency measure, large-scale inconsistency measurement is feasible.
    Comment: International Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014), co-located with the 21st European Conference on Artificial Intelligence (ECAI 2014). Proceedings, pages 63-70, technical report, ISSN 1430-3701, Leipzig University, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-15056
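    The general shape of such a stream-based approximation can be sketched as follows (an illustrative windowing scheme of our own, not one of the paper's algorithms): the measure never sees the whole knowledge base, only a bounded window, and maintains a running estimate, so accuracy trades off against window size and evaluation frequency.

        from collections import deque

        def stream_estimate(stream, exact_measure, window=100, step=25):
            # Illustrative only: slide a bounded window over the streamed
            # formulas, apply an exact measure to the window every `step`
            # formulas, and average the results as the global estimate.
            buf, estimates = deque(maxlen=window), []
            for i, formula in enumerate(stream, start=1):
                buf.append(formula)
                if i % step == 0:
                    estimates.append(exact_measure(list(buf)))
            return sum(estimates) / len(estimates) if estimates else 0.0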

    An Atypical Survey of Typical-Case Heuristic Algorithms

    Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpr\'{e}, Ogiwara, Sch\"{o}ning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice.
    Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News

    Interpolation Properties and SAT-based Model Checking

    Craig interpolation is a widespread method in verification, with important applications such as Predicate Abstraction, CounterExample Guided Abstraction Refinement, and Lazy Abstraction With Interpolants. Most state-of-the-art model checking techniques based on interpolation require collections of interpolants to satisfy particular properties, which we refer to as "collectives"; these do not hold in general for all interpolation systems and have to be established for each particular system and verification environment. Nevertheless, no systematic approach exists that correlates the individual interpolation systems and compares the necessary collectives. This paper proposes a uniform framework, which encompasses (and generalizes) the most common collectives exploited in verification. We use it for a systematic study of the collectives and of the constraints they pose on propositional interpolation systems used in SAT-based model checking.
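    For reference, a Craig interpolant I for an unsatisfiable pair (A, B) satisfies A |= I, I AND B is unsatisfiable, and I mentions only variables shared by A and B. The self-contained toy below merely checks these three conditions by enumeration (real interpolation systems derive I from a resolution refutation instead):

        from itertools import product

        VARS = ['a', 'b', 'c']

        def models(pred):
            # All assignments over VARS satisfying the predicate.
            return [m for m in (dict(zip(VARS, bits))
                                for bits in product([False, True],
                                                    repeat=len(VARS)))
                    if pred(m)]

        A = lambda m: m['a'] and m['b']                      # A = a AND b
        B = lambda m: (not m['b'] or m['c']) and not m['c']  # B = (b -> c) AND NOT c
        I = lambda m: m['b']                                 # only shared variable: b

        assert not models(lambda m: A(m) and B(m))  # A AND B is unsatisfiable
        assert all(I(m) for m in models(A))         # A entails I
        assert not models(lambda m: I(m) and B(m))  # I AND B is unsatisfiable
        print("I = b is a Craig interpolant for (A, B)")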