
    Embedding Non-Ground Logic Programs into Autoepistemic Logic for Knowledge Base Combination

    In the context of the Semantic Web, several approaches to combining ontologies, given in terms of theories of classical first-order logic, with rule bases have been proposed. They either cast rules into classical logic or limit the interaction between rules and ontologies. Autoepistemic logic (AEL) is an attractive formalism for overcoming these limitations, since it serves as a uniform host language into which both ontologies and nonmonotonic logic programs can be embedded. For the latter, so far only the propositional setting has been considered. In this paper, we present three embeddings of normal and three embeddings of disjunctive non-ground logic programs under the stable model semantics into first-order AEL. While the embeddings all coincide with respect to objective ground atoms, differences arise when considering non-atomic formulas and combinations with first-order theories. We compare the embeddings with respect to stable expansions and autoepistemic consequences, considering the embeddings by themselves as well as combinations with classical theories. Our results reveal differences and correspondences among the embeddings and provide useful guidance in the choice of a particular embedding for knowledge combination. (52 pages, submitted.)
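
    For orientation (a classical propositional illustration, not one of the six non-ground embeddings studied in the paper), Gelfond-style translations render negation as failure with the modal belief operator B, so that a rule

        a :- b1, ..., bm, not c1, ..., not cn

    becomes the AEL formula

        b1 ∧ ... ∧ bm ∧ ¬B c1 ∧ ... ∧ ¬B cn → a,

    and the stable models of the program correspond to the stable expansions of its translation. For instance, p :- not q becomes ¬B q → p.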

    Normal forms for Answer Set Programming

    Normal forms for logic programs under the stable/answer set semantics are introduced. We argue that these forms can simplify the study of program properties, mainly consistency. The first normal form, called the kernel of the program, is useful for studying the existence and number of answer sets. A kernel program is composed of the atoms which are undefined in the well-founded semantics, which are those that directly affect the existence of answer sets. Rule bodies consist of negative literals only; thus, the kernel form tends to be significantly more compact than other formulations. Also, consistency of kernel programs can be checked in terms of colorings of the Extended Dependency Graph program representation which we previously developed. The second normal form is called the 3-kernel. A 3-kernel program is likewise composed of the atoms which are undefined in the well-founded semantics; its rules have at most two conditions, and each rule either belongs to a cycle or defines a connection between cycles. 3-kernel programs may have positive conditions. The 3-kernel normal form is very useful for the static analysis of program consistency, i.e., the syntactic characterization of the existence of answer sets. This result is obtained thanks to a novel graph-like representation of programs, called the Cycle Graph, which is presented in the companion article [Cos04b]. (15 pages. To appear in Theory and Practice of Logic Programming (TPLP).)
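
    As a concrete aside (a minimal sketch, not code from the paper), the kernel form's restriction to negative bodies is easy to experiment with: stable models of a small propositional program can be enumerated by checking, for each candidate set of atoms, whether it is the least model of its own Gelfond-Lifschitz reduct. A Python sketch with the illustrative two-rule program p :- not q and q :- not p:

        from itertools import chain, combinations

        # A rule is (head, positive_body, negative_body); kernel-form rules
        # have empty positive bodies, i.e. only negative literals in the body.

        def reduct(rules, candidate):
            # Gelfond-Lifschitz reduct: delete rules blocked by the candidate,
            # then strip the remaining negative literals.
            return [(head, pos) for (head, pos, neg) in rules
                    if not set(neg) & candidate]

        def least_model(positive_rules):
            # Least model of a negation-free program by fixpoint iteration.
            model, changed = set(), True
            while changed:
                changed = False
                for head, pos in positive_rules:
                    if set(pos) <= model and head not in model:
                        model.add(head)
                        changed = True
            return model

        def stable_models(rules, atoms):
            # A candidate is stable iff it is the least model of its own reduct.
            subsets = chain.from_iterable(combinations(atoms, r)
                                          for r in range(len(atoms) + 1))
            return [set(s) for s in subsets
                    if least_model(reduct(rules, set(s))) == set(s)]

        # Example kernel-form program:  p :- not q.   q :- not p.
        rules = [("p", [], ["q"]), ("q", [], ["p"])]
        print(stable_models(rules, ["p", "q"]))   # [{'p'}, {'q'}]

    The brute-force enumeration over all candidate sets is only for illustration; the paper's point is that the kernel form enables sharper, graph-based consistency checks.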

    On finitely recursive programs

    Disjunctive finitary programs are a class of logic programs admitting function symbols, and hence infinite domains. They have very good computational properties; for example, ground queries are decidable, while in the general case the stable model semantics is highly undecidable. In this paper we prove that a larger class of programs, called finitely recursive programs, preserves most of the good properties of finitary programs under the stable model semantics, namely: (i) finitely recursive programs enjoy a compactness property; (ii) inconsistency checking and skeptical reasoning are semidecidable; (iii) skeptical resolution is complete for normal finitely recursive programs. Moreover, we show how to check inconsistency and answer skeptical queries using finite subsets of the ground program instantiation. We achieve this by extending the splitting sequence theorem of Lifschitz and Turner: we prove that if the input program P is finitely recursive, then the partial stable models determined by any smooth splitting ω-sequence converge to a stable model of P. (26 pages. A preliminary version appeared in Proc. of ICLP 2007; best paper award.)
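
    To fix intuitions (an illustrative example, not taken from the paper), consider a program with the function symbol s over terms built from 0:

        even(0).
        even(s(s(X))) :- even(X).

    Its ground instantiation is infinite, but each ground atom depends on only finitely many others: even(s(s(0))) depends just on even(0). Programs of this kind are finitely recursive, and ground queries against them can be answered from a finite subset of the ground instantiation.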

    Complexity of fuzzy answer set programming under Łukasiewicz semantics

    Fuzzy answer set programming (FASP) is a generalization of answer set programming (ASP) in which propositions are allowed to be graded. Little is known about the computational complexity of FASP, and almost no techniques are available to compute the answer sets of a FASP program. In this paper, we analyze the computational complexity of FASP under Łukasiewicz semantics. In particular, we show that the complexity of the main reasoning tasks is located at the first level of the polynomial hierarchy, even for disjunctive FASP programs, for which reasoning is classically located at the second level. Moreover, we show a reduction from reasoning with such FASP programs to bilevel linear programming, thus opening the door to practical applications. For definite FASP programs we can show P-membership. Surprisingly, when disjunctions are allowed to occur in the body of rules – a syntactic generalization which does not affect the expressivity of ASP in the classical case – the picture changes drastically. In particular, reasoning tasks are then located at the second level of the polynomial hierarchy, while for simple FASP programs we can only show that the unique answer set can be found in pseudo-polynomial time. Moreover, the connection to an existing open problem about integer equations suggests that fully characterizing the complexity of FASP in this more general setting is not likely to have an easy solution.
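
    For reference, the standard Łukasiewicz connectives over truth degrees in [0, 1] (textbook definitions, not specific to this paper) are piecewise linear, which is what makes reductions to (bilevel) linear programming natural:

        x ⊗ y = max(0, x + y − 1),    x ⊕ y = min(1, x + y),
        x → y = min(1, 1 − x + y),    ¬x = 1 − x.

    Under these operations the degree of a rule body is a piecewise-linear function of the degrees of its atoms, so constraints of the form "body implies head" can be expressed with linear inequalities plus case distinctions.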

    A Common View on Strong, Uniform, and Other Notions of Equivalence in Answer-Set Programming

    Logic programming under the answer-set semantics nowadays deals with numerous different notions of program equivalence. This is due to the fact that equivalence for substitution (known as strong equivalence) and ordinary equivalence are different concepts. The former holds, given programs P and Q, iff P can be faithfully replaced by Q within any context R, while the latter holds iff P and Q provide the same output, that is, they have the same answer sets. Notions in between strong and ordinary equivalence have been introduced as theoretical tools to compare incomplete programs; they are defined by either restricting the syntactic structure of the considered context programs R or by bounding the set A of atoms allowed to occur in R (relativized equivalence). For the latter approach, different A yield properly different equivalence notions, in general. For the former approach, however, it turned out that any "reasonable" syntactic restriction on R coincides with either ordinary, strong, or uniform equivalence. In this paper, we propose a parameterization of equivalence notions which takes care of both kinds of restrictions simultaneously, by bounding, on the one hand, the atoms which are allowed to occur in the rule heads of the context and, on the other hand, the atoms which are allowed to occur in the rule bodies of the context. We introduce a general semantical characterization which includes known ones, such as SE-models (for strong equivalence) and UE-models (for uniform equivalence), as special cases. Moreover, we provide complexity bounds for the problem in question and sketch a possible implementation method. (To appear in Theory and Practice of Logic Programming (TPLP).)
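
    A standard two-program illustration of why the notions differ (textbook material, not from this abstract): P = { p :- not q. } and Q = { p. } both have the single answer set {p}, so they are ordinarily equivalent; yet in the context R = { q. }, P ∪ R has answer set {q} while Q ∪ R has answer set {p, q}, so P and Q are not strongly equivalent. Strong equivalence is captured by SE-models:

        (X, Y) with X ⊆ Y is an SE-model of P  iff  Y ⊨ P and X ⊨ P^Y,

    where P^Y is the Gelfond-Lifschitz reduct; P and Q are strongly equivalent iff they have the same SE-models.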

    Towards a unified theory of logic programming semantics: Level mapping characterizations of selector generated models

    Currently, the variety of expressive extensions and different semantics created for logic programs with negation is diverse and heterogeneous, and there is a lack of comprehensive comparative studies which map out the multitude of perspectives in a uniform way. Most recently, however, new methodologies have been proposed which allow one to derive uniform characterizations of different declarative semantics for logic programs with negation. In this paper, we study the relationship between two of these approaches, namely the level mapping characterizations due to [Hitzler and Wendt 2005] and the selector generated models due to [Schwarz 2004]. We show that the latter can be captured by means of the former, thereby supporting the claim that level mappings provide a very flexible framework which is applicable to very diversely defined semantics. (17 pages.)
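
    To give the flavor of a level-mapping characterization (a standard one in the tradition of Fages and of Hitzler and Wendt, not a result of this paper): a model M of a normal program P is a stable model iff there is a level mapping l from M into the ordinals (ℕ suffices for finite programs) such that

        every a ∈ M is the head of some rule whose body is true in M
        and whose positive body atoms b all satisfy l(b) < l(a).

    Different declarative semantics are then obtained by varying the conditions imposed on l.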