
    Do Hard SAT-Related Reasoning Tasks Become Easier in the Krom Fragment?

    Many reasoning problems are based on the problem of satisfiability (SAT). While SAT itself becomes easy when restricting the structure of the formulas in a certain way, the situation is more opaque for more involved decision problems. We consider here the CardMinSat problem which asks, given a propositional formula φ and an atom x, whether x is true in some cardinality-minimal model of φ. This problem is easy for the Horn fragment, but, as we will show in this paper, remains Θ₂-complete (and thus NP-hard) for the Krom fragment (which is given by formulas in CNF where clauses have at most two literals). We will make use of this fact to study the complexity of reasoning tasks in belief revision and logic-based abduction and show that, while in some cases the restriction to Krom formulas leads to a decrease of complexity, in others it does not. We thus also consider the CardMinSat problem with respect to additional restrictions to Krom formulas towards a better understanding of the tractability frontier of such problems.
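
    As a concrete illustration of the problem definition (not of the paper's techniques), the following Python sketch decides CardMinSat by brute force over all assignments; the function name and data representation are hypothetical, and the running time is exponential in the number of atoms.

```python
from itertools import product

def card_min_sat(clauses, atoms, x):
    """Brute-force CardMinSat: is atom x true in some cardinality-minimal
    model of the CNF 'clauses'?  A clause is a set of literals; a positive
    literal is an atom name, a negative literal is the pair ('-', atom)."""
    def literal_true(assignment, lit):
        return assignment[lit] if isinstance(lit, str) else not assignment[lit[1]]

    models = []
    for bits in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, bits))
        if all(any(literal_true(assignment, lit) for lit in clause) for clause in clauses):
            models.append(assignment)
    if not models:
        return False
    min_weight = min(sum(m.values()) for m in models)
    return any(m[x] for m in models if sum(m.values()) == min_weight)

# Krom (2-CNF) example: (a or b) and (not a or b); the unique cardinality-minimal
# model sets only b, so the query for b succeeds and the query for a fails.
krom = [{'a', 'b'}, {('-', 'a'), 'b'}]
print(card_min_sat(krom, ['a', 'b'], 'b'))  # True
print(card_min_sat(krom, ['a', 'b'], 'a'))  # False
```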

    On the Strength of Uniqueness Quantification in Primitive Positive Formulas

    Uniqueness quantification (Exists!) is a quantifier in first-order logic where one requires that exactly one element exists satisfying a given property. In this paper we investigate the strength of uniqueness quantification when it is used in place of existential quantification in conjunctive formulas over a given set of relations Gamma, so-called primitive positive definitions (pp-definitions). We fully classify the Boolean sets of relations where uniqueness quantification has the same strength as existential quantification in pp-definitions and give several results valid for arbitrary finite domains. We also consider applications of Exists!-quantified pp-definitions in computer science, which can be used to study the computational complexity of problems where the number of solutions is important. Using our classification we give a new and simplified proof of the trichotomy theorem for the unique satisfiability problem, and prove a general result for the unique constraint satisfaction problem. Studying these problems in a more rigorous framework also turns out to be advantageous in the context of lower bounds, and we relate the complexity of these problems to the exponential-time hypothesis.
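
    To make the connection to solution counting concrete, here is a minimal brute-force sketch (hypothetical names, exponential time) of the unique satisfiability question the abstract refers to: does a constraint instance over a set of relations Gamma have exactly one satisfying assignment?

```python
from itertools import product

def unique_sat(constraints, variables):
    """Brute-force UNIQUE SAT: does the instance have exactly one satisfying
    assignment?  Each constraint is (relation, scope), where the relation is
    the set of Boolean tuples it allows on its scope."""
    count = 0
    for bits in product([0, 1], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(tuple(assignment[v] for v in scope) in rel for rel, scope in constraints):
            count += 1
            if count > 1:
                return False
    return count == 1

# Gamma = {NEQ}: the instance x != y, y != z has two solutions (010 and 101).
NEQ = {(0, 1), (1, 0)}
print(unique_sat([(NEQ, ('x', 'y')), (NEQ, ('y', 'z'))], ['x', 'y', 'z']))  # False
```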

    Parameterized Complexity and Kernelizability of Max Ones and Exact Ones Problems

    For a finite set Gamma of Boolean relations, MAX ONES SAT(Gamma) and EXACT ONES SAT(Gamma) are generalized satisfiability problems where every constraint relation is from Gamma, and the task is to find a satisfying assignment with at least/exactly k variables set to 1, respectively. We study the parameterized complexity of these problems, including the question whether they admit polynomial kernels. For MAX ONES SAT(Gamma), we give a classification into five different complexity levels: polynomial-time solvable, admits a polynomial kernel, fixed-parameter tractable, solvable in polynomial time for fixed k, and NP-hard already for k = 1. For EXACT ONES SAT(Gamma), we refine the classification obtained earlier by taking a closer look at the fixed-parameter tractable cases and classifying the sets Gamma for which EXACT ONES SAT(Gamma) admits a polynomial kernel.
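
    The following brute-force Python sketch (hypothetical names, exponential time, purely to pin down the definitions) checks both variants: a satisfying assignment with at least, respectively exactly, k variables set to 1.

```python
from itertools import product

def ones_sat(constraints, variables, k, exact=False):
    """Brute-force MAX ONES SAT / EXACT ONES SAT: is there a satisfying
    assignment with at least (or, if exact=True, exactly) k ones?
    Constraints are (relation, scope) pairs; relations are sets of tuples."""
    for bits in product([0, 1], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if not all(tuple(assignment[v] for v in scope) in rel
                   for rel, scope in constraints):
            continue
        ones = sum(bits)
        if (ones == k) if exact else (ones >= k):
            return True
    return False

# Gamma = {IMPL} with IMPL encoding x -> y.
IMPL = {(0, 0), (0, 1), (1, 1)}
instance = [(IMPL, ('x', 'y'))]
print(ones_sat(instance, ['x', 'y'], k=2))              # True: x = y = 1
print(ones_sat(instance, ['x', 'y'], k=1, exact=True))  # True: x = 0, y = 1
```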

    The power of primitive positive definitions with polynomially many variables

    Two well-studied closure operators for relations are based on existentially quantified conjunctive formulas, known as primitive positive (p.p.) definitions, and on such formulas without existential quantification, known as quantifier-free primitive positive (q.f.p.p.) definitions. Sets of relations closed under p.p. definitions are known as co-clones and sets of relations closed under q.f.p.p. definitions as weak partial co-clones. The latter do however have limited expressivity, and the corresponding lattice of strong partial clones is of uncountably infinite cardinality even for the Boolean domain. Hence, it is reasonable to consider the expressiveness of p.p. definitions where only a small number of existentially quantified variables are allowed. In this article, we consider p.p. definitions allowing only polynomially many existentially quantified variables, and say that a co-clone closed under such definitions is polynomially closed, and otherwise superpolynomially closed. We investigate properties of polynomially closed co-clones and prove that if the corresponding clone contains a k-ary near-unanimity operation for k >= 3, then the co-clone is polynomially closed, and if the clone does not contain a k-edge operation for any k >= 2, then the co-clone is superpolynomially closed. For the Boolean domain we strengthen these results and prove a complete dichotomy theorem separating polynomially closed co-clones from superpolynomially closed co-clones. Using these results, we then proceed to investigate properties of strong partial clones corresponding to superpolynomially closed co-clones. We prove that if Gamma is a finite set of relations over an arbitrary finite domain such that the clone corresponding to Gamma is essentially unary, then the strong partial clone corresponding to Gamma is of infinite order and cannot be generated by a finite set of partial functions.
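
    As a toy illustration of a p.p. definition with a single existentially quantified variable (the kind of quantified variable whose number this article bounds), the sketch below computes the relation defined by an existentially quantified conjunction over given base relations; the function name and encoding are hypothetical.

```python
from itertools import product

def pp_define(base_relations, atoms, free_vars, all_vars, domain=(0, 1)):
    """Evaluate a primitive positive definition: a conjunction of atoms
    R(v1, ..., vk) in which every variable outside free_vars is existentially
    quantified.  Returns the defined relation as a set of tuples."""
    defined = set()
    for values in product(domain, repeat=len(all_vars)):
        a = dict(zip(all_vars, values))
        if all(tuple(a[v] for v in scope) in base_relations[name]
               for name, scope in atoms):
            defined.add(tuple(a[v] for v in free_vars))
    return defined

# IMPL = {(a, b) : a <= b}.  The formula  exists z: IMPL(x, z) and IMPL(z, y)
# pp-defines IMPL again (composing Boolean implications gives implication).
IMPL = {(0, 0), (0, 1), (1, 1)}
R = pp_define({'IMPL': IMPL},
              [('IMPL', ('x', 'z')), ('IMPL', ('z', 'y'))],
              free_vars=('x', 'y'), all_vars=('x', 'y', 'z'))
print(R == IMPL)  # True
```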

    Paradigms for Parameterized Enumeration

    The aim of the paper is to examine the computational complexity and algorithmics of enumeration, the task of outputting all solutions of a given problem, from the point of view of parameterized complexity. First we formally define different notions of efficient enumeration in the context of parameterized complexity. Second we show how different algorithmic paradigms can be used in order to obtain parameter-efficient enumeration algorithms for a number of examples. These paradigms use well-known principles from the design of parameterized decision algorithms as well as enumeration techniques, such as kernelization and self-reducibility. The concept of kernelization, in particular, leads to a characterization of fixed-parameter tractable enumeration problems. (Comment: Accepted for MFCS 2013; long version of the paper.)
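
    One of the paradigms mentioned, self-reducibility, can be illustrated by the following naive Python sketch that enumerates all models of a CNF by branching on one variable at a time and pruning falsified branches; it is only meant to convey the idea and makes no claim about enumeration delay.

```python
def enumerate_models(clauses, variables, partial=None):
    """Enumerate all satisfying assignments of a CNF via self-reducibility:
    branch on the first unassigned variable and recurse on both restrictions.
    A clause is a set of (variable, polarity) literals."""
    partial = partial or {}

    def falsified(clause):
        # A clause is falsified once every literal is assigned and false.
        return all(v in partial and partial[v] != pol for v, pol in clause)

    if any(falsified(c) for c in clauses):
        return
    unassigned = [v for v in variables if v not in partial]
    if not unassigned:
        yield dict(partial)
        return
    v = unassigned[0]
    for value in (False, True):
        yield from enumerate_models(clauses, variables, {**partial, v: value})

# (x or y) and (not x or y): the two models are {x: False, y: True} and {x: True, y: True}.
cnf = [{('x', True), ('y', True)}, {('x', False), ('y', True)}]
for model in enumerate_models(cnf, ['x', 'y']):
    print(model)
```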

    Do clones degenerate over time? Explaining the genetic variability of asexuals through population genetic models

    Background: The quest for understanding the nature of mechanisms governing the life span of clonal organisms has lasted for several decades. Phylogenetic evidence for recent origins of most clones is usually interpreted as proof that clones suffer from gradual age-dependent fitness decay (e.g. Muller's ratchet). However, we have shown that neutral drift can also qualitatively explain the observed distribution of clonal ages. This finding was followed by several attempts to distinguish the effects of neutral and non-neutral processes. Most recently, Neiman et al. 2009 (Ann N Y Acad Sci 1168:185-200) reviewed the distribution of asexual lineage ages estimated from a diverse array of taxa and concluded that neutral processes alone may not explain the observed data. Moreover, the authors inferred that similar types of mechanisms determine maximum asexual lineage ages in all asexual taxa. In this paper we review recent methods for distinguishing the effects of neutral and non-neutral processes and point at methodological problems related to them.
    Results and Discussion: We found that contemporary analyses based on phylogenetic data are inadequate to provide any clear-cut answer about the nature and generality of processes affecting the evolution of clones. As an alternative approach, we demonstrate that sequence variability in asexual populations is suitable to detect age-dependent selection against clonal lineages. We found that asexual taxa with relatively old clonal lineages are characterised by progressively stronger deviations from neutrality.
    Conclusions: Our results demonstrate that some type of age-dependent selection against clones is generally operational in asexual animals, which cover a wide taxonomic range spanning from flatworms to vertebrates. However, we also found a notable difference between the data distribution predicted by available models of sequence evolution and that observed in empirical data. These findings point at the possibility that processes affecting clonal evolution differ from those described in recent studies, suggesting that theoretical models of asexual populations must evolve to address this problem in detail.
    Reviewers: This article was reviewed by Isa Schön (nominated by John Logsdon), Arcady Mushegian and Timothy G. Barraclough (nominated by Laurence Hurst).

    Kernelization of generic problems: upper and lower bounds

    This thesis addresses the kernelization properties of generic problems, defined via syntactical restrictions or by a problem framework. Polynomial kernelization is a formalization of data reduction, aimed at combinatorially hard problems, which allows a rigorous study of this important and fundamental concept. The thesis is organized into two main parts. In the first part we prove that all problems from two syntactically defined classes of constant-factor approximable problems admit polynomial kernelizations. The problems must be expressible via optimization over first-order formulas with restricted quantification; when relaxing these restrictions we find problems that do not admit polynomial kernelizations. Next, we consider edge modification problems, and we show that they do not generally admit polynomial kernelizations. In the second part we consider three types of Boolean constraint satisfaction problems. We completely characterize whether these problems admit polynomial kernelizations, i.e., given such a problem our results either provide a polynomial kernelization, or they show that the problem does not admit a polynomial kernelization. These dichotomies are characterized by properties of the permitted constraints.

    The k-XORSAT threshold revisited

    We provide a simplified proof of the random k-XORSAT satisfiability threshold theorem. As an extension we also determine the full rank threshold for sparse random matrices over finite fields with precisely k non-zero entries per row. This complements a result from [Ayre, Coja-Oghlan, Gao, Müller: Combinatorica 2020]. The proof combines physics-inspired message passing arguments with a surgical moment computation.
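
    Since a k-XORSAT instance is simply a linear system over GF(2) with k variables per equation, its satisfiability and the rank of its constraint matrix can be checked by Gaussian elimination, as in the following sketch (hypothetical names; the paper's thresholds concern random instances, not this deterministic check).

```python
def xorsat_satisfiable(equations, n):
    """Decide a k-XORSAT instance over variables x_0, ..., x_{n-1} by Gaussian
    elimination over GF(2).  Each equation is (variable_indices, parity),
    meaning the XOR of the listed variables equals parity.
    Returns (satisfiable, rank of the coefficient matrix)."""
    basis = {}   # pivot bit -> (row bitmask, parity)
    rank = 0
    for indices, parity in equations:
        row = 0
        for i in indices:
            row ^= 1 << i
        # Reduce the new row against existing pivots, highest bit first.
        for bit in range(n - 1, -1, -1):
            if (row >> bit) & 1 and bit in basis:
                brow, bparity = basis[bit]
                row ^= brow
                parity ^= bparity
        if row:
            basis[row.bit_length() - 1] = (row, parity)
            rank += 1
        elif parity:
            return False, rank   # the row reduced to 0 = 1: inconsistent
    return True, rank

# 3-XORSAT example with three independent, consistent equations.
eqs = [([0, 1, 2], 1), ([1, 2, 3], 0), ([0, 2, 3], 1)]
print(xorsat_satisfiable(eqs, n=4))  # (True, 3)
```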