
    Complexity of Nested Circumscription and Nested Abnormality Theories

    The need for a circumscriptive formalism that allows for simple yet elegant modular problem representation led Lifschitz (AIJ, 1995) to introduce nested abnormality theories (NATs) as a tool for modular knowledge representation, tailored for applying circumscription to minimize exceptional circumstances. Abstracting from this particular objective, we propose L_{CIRC}, an extension of generic propositional circumscription that allows propositional combinations and nesting of circumscriptive theories. As we show, NATs embed naturally into this language and are in fact of equal expressive capability. We then analyze the complexity of L_{CIRC} and NATs, and in particular the effect of nesting. Nesting turns out to be a source of complexity, which climbs the Polynomial Hierarchy as the nesting depth increases and reaches PSPACE-completeness in the general case. We also identify meaningful syntactic fragments of NATs which have lower complexity. In particular, we show that the generalization of Horn circumscription in the NAT framework remains coNP-complete, and that Horn NATs without fixed letters can be efficiently transformed into an equivalent Horn CNF, which implies polynomial solvability of the principal reasoning tasks. Finally, we also study extensions of NATs and briefly address the complexity of the first-order case. Our results give insight into the "cost" of using L_{CIRC} (resp. NATs) as a host language for expressing other formalisms such as action theories, narratives, or spatial theories.
    Comment: A preliminary abstract of this paper appeared in Proc. Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-01), pages 169-174, Morgan Kaufmann, 2001.
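
    As a minimal illustration of the kind of minimization that L_{CIRC} combines and nests, the following brute-force Python sketch computes the models of a tiny theory that are minimal in an abnormality atom. The theory, atom names, and encoding are our own toy example, not the paper's; full circumscription also distinguishes fixed and varying letters.

        from itertools import product

        ATOMS = ["bird", "ab", "flies"]

        def theory(m):
            # T = bird AND (bird AND NOT ab -> flies)
            return m["bird"] and (not (m["bird"] and not m["ab"]) or m["flies"])

        models = [dict(zip(ATOMS, bits))
                  for bits in product([False, True], repeat=len(ATOMS))
                  if theory(dict(zip(ATOMS, bits)))]

        def leq(m1, m2, minimized=("ab",)):
            # m1 <= m2 iff the minimized atoms true in m1 are a subset of
            # those true in m2 (all other atoms are allowed to vary here)
            return all(not m1[a] or m2[a] for a in minimized)

        minimal = [m for m in models
                   if not any(leq(n, m) and not leq(m, n) for n in models)]

        # every ab-minimal model satisfies "flies": the default conclusion
        print(all(m["flies"] for m in minimal))   # True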

    Representing First-Order Causal Theories by Logic Programs

    Nonmonotonic causal logic, introduced by Norman McCain and Hudson Turner, became a basis for the semantics of several expressive action languages. McCain's embedding of definite propositional causal theories into logic programming paved the way for the use of answer set solvers to answer queries about actions described in such languages. In this paper we extend this embedding to nondefinite theories and to first-order causal logic.
    Comment: 29 pages. To appear in Theory and Practice of Logic Programming (TPLP).
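
    For the definite propositional case, the embedding can be sketched as follows (a standard reconstruction from the literature, not quoted from this paper): a causal rule with a literal head,

        l \Leftarrow l_1 \wedge \dots \wedge l_n,

    becomes the logic-program rule

        l \leftarrow \mathit{not}\ \overline{l_1}, \dots, \mathit{not}\ \overline{l_n},

    where \overline{l_i} is the literal complementary to l_i; the answer sets of the resulting program then correspond to the models of the causal theory. The paper lifts this idea to nondefinite and first-order theories.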

    System f2lp – computing answer sets of first-order formulas

    Abstract. We present an implementation of the general language of stable models proposed by Ferraris, Lee and Lifschitz. Under certain conditions, system f2lp turns a first-order theory under the stable model semantics into an answer set program, so that existing answer set solvers can be used to compute stable models in the general language. Quantifiers are first eliminated, and the resulting quantifier-free formulas are then turned into rules. Based on the relationship between stable models and circumscription, f2lp can also serve as a reasoning engine for general circumscriptive theories. We illustrate how to use f2lp to compute the circumscriptive event calculus.
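
    The quantifier-elimination step can be pictured, over a finite domain, as replacing universal quantifiers by conjunctions and existential ones by disjunctions. The Python fragment below is our own toy rendering of that idea; f2lp's actual transformation is more general and emits rules for answer set solvers rather than formula strings.

        DOMAIN = ["a", "b", "c"]

        def forall(formula):
            # (forall X) F(X) over a finite domain becomes F(a) & F(b) & F(c)
            return "(" + " & ".join(formula(c) for c in DOMAIN) + ")"

        def exists(formula):
            # (exists X) F(X) becomes F(a) | F(b) | F(c)
            return "(" + " | ".join(formula(c) for c in DOMAIN) + ")"

        # forall X (p(X) -> q(X)) as a quantifier-free formula:
        print(forall(lambda c: f"(p({c}) -> q({c}))"))
        # ((p(a) -> q(a)) & (p(b) -> q(b)) & (p(c) -> q(c)))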

    Complexity of Non-Monotonic Logics

    Over the past few decades, non-monotonic reasoning has developed into one of the most important topics in computational logic and artificial intelligence. Different ways to introduce non-monotonic aspects into classical logic have been considered, e.g., extension with default rules, extension with modal belief operators, or modification of the semantics. In this survey we consider a logical formalism from each of these possibilities, namely Reiter's default logic, Moore's autoepistemic logic and McCarthy's circumscription. Additionally, we consider abduction, where one is not interested in inferences from a given knowledge base but in computing possible explanations for an observation with respect to a given knowledge base. Complexity results for different reasoning tasks for propositional variants of these logics were already studied in the nineties. In recent years, however, a renewed interest in complexity issues can be observed. One current approach is to consider parameterized problems and to identify reasonable parameters that allow for FPT algorithms. In another approach, the emphasis lies on identifying fragments, i.e., restrictions of the logical language, that allow more efficient algorithms for the most important reasoning tasks. In this survey we focus on the second aspect. We describe complexity results for fragments of logical languages obtained either by restricting the allowed set of operators (e.g., by forbidding negation one obtains monotone formulae) or by considering only formulae in conjunctive normal form but with generalized clause types. The algorithmic problems we consider are suitable variants of satisfiability and implication in each of the logics, but also counting problems, where one is not only interested in the existence of certain objects (e.g., models of a formula) but asks for their number.
    Comment: To appear in the Bulletin of the EATCS.
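
    To see why such fragments matter, recall the classic example underlying many of these results: satisfiability of Horn formulae is decidable in polynomial time by forward chaining, whereas full propositional satisfiability is NP-complete. A minimal sketch (our own, not taken from the survey):

        def horn_sat(clauses):
            """clauses: list of (body, head) with body a set of atoms and head
            an atom, or None for a goal clause encoding body -> False."""
            true_atoms = set()
            changed = True
            while changed:
                changed = False
                for body, head in clauses:
                    if body <= true_atoms:
                        if head is None:
                            return None            # goal clause fired: unsat
                        if head not in true_atoms:
                            true_atoms.add(head)   # derive the head
                            changed = True
            return true_atoms                      # least model of the Horn part

        # p, p -> q, (q & r) -> False is satisfiable with {p, q}:
        print(horn_sat([(set(), "p"), ({"p"}, "q"), ({"q", "r"}, None)]))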

    Algorithmic correspondence and completeness in modal logic. I. The core algorithm SQEMA

    Modal formulae express monadic second-order properties on Kripke frames, but in many important cases these have first-order equivalents. Computing such equivalents is important for both logical and computational reasons. On the other hand, canonicity of modal formulae is important, too, because it implies frame-completeness of logics axiomatized with canonical formulae. Computing a first-order equivalent of a modal formula amounts to elimination of second-order quantifiers. Two algorithms have been developed for second-order quantifier elimination: SCAN, based on constraint resolution, and DLS, based on a logical equivalence established by Ackermann. In this paper we introduce a new algorithm, SQEMA, for computing first-order equivalents (using a modal version of Ackermann's lemma) and, moreover, for proving canonicity of modal formulae. Unlike SCAN and DLS, it works directly on modal formulae, thus avoiding Skolemization and the subsequent problem of unskolemization. We present the core algorithm and illustrate it with some examples. We then prove its correctness and the canonicity of all formulae on which the algorithm succeeds. We show that it succeeds not only on all Sahlqvist formulae, but also on the larger class of inductive formulae, introduced in our earlier papers. Thus, we develop a purely algorithmic approach to proving canonical completeness in modal logic and, in particular, establish one of the most general completeness results in modal logic so far.
    Comment: 26 pages, no figures. To appear in Logical Methods in Computer Science.
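
    For orientation, the first-order lemma whose modal analogue drives SQEMA can be stated as follows (the standard formulation from the second-order quantifier-elimination literature, not quoted from the paper): if P does not occur in A and every occurrence of P in B is negative, then

        \exists P\,\bigl(\forall \bar{x}\,(A(\bar{x}) \rightarrow P(\bar{x})) \wedge B(P)\bigr) \;\equiv\; B(P := A),

    where B(P := A) substitutes A for each occurrence of P. The dual form, with the implication reversed, holds when B is positive in P.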

    Dual Forgetting Operators in the Context of Weakest Sufficient and Strongest Necessary Conditions

    Forgetting is an important concept in knowledge representation and automated reasoning with widespread applications across a number of disciplines. A standard forgetting operator, characterized in [Lin and Reiter'94] in terms of model-theoretic semantics and primarily focusing on the propositional case, opened up a new research subarea. In this paper, a new operator called weak forgetting, dual to standard forgetting, is introduced; together, the two are shown to offer a new, more uniform perspective on forgetting operators in general. Both the weak and standard forgetting operators are characterized in terms of entailment and inference, rather than a model-theoretic semantics. This naturally leads to a useful algorithmic perspective based on quantifier elimination and the use of Ackermann's Lemma and its fixpoint generalization. The strong formal relationship between standard forgetting and strongest necessary conditions, and between weak forgetting and weakest sufficient conditions, is also characterized quite naturally through the entailment-based, inferential perspective used. The framework used to characterize the dual forgetting operators is also generalized to the first-order case and includes useful algorithms for computing first-order forgetting operators in special cases. Practical examples are also included to show the importance of both weak and standard forgetting in modeling and representation.
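
    In the propositional case the two operators admit a compact semantic sketch (our own brute-force encoding, not the paper's first-order algorithms): standard Lin-Reiter forgetting of p in T is T[p/true] OR T[p/false], and its dual, weak forgetting, is T[p/true] AND T[p/false].

        from itertools import product

        def forget(T, p):
            # standard forgetting: T[p/True] OR T[p/False]
            return lambda m: T({**m, p: True}) or T({**m, p: False})

        def weak_forget(T, p):
            # dual weak forgetting: T[p/True] AND T[p/False]
            return lambda m: T({**m, p: True}) and T({**m, p: False})

        # T = (p -> q) & (r -> p); forget p:
        T = lambda m: (not m["p"] or m["q"]) and (not m["r"] or m["p"])

        for q, r in product([False, True], repeat=2):
            m = {"q": q, "r": r}
            print(m, forget(T, "p")(m), weak_forget(T, "p")(m))
        # forget(T, p)      is equivalent to r -> q  (strongest necessary condition)
        # weak_forget(T, p) is equivalent to q & ~r  (weakest sufficient condition)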