
    An interactive semantics of logic programming

    We apply to logic programming some recently emerging ideas from the field of reduction-based communicating systems, with the aim of giving evidence of the hidden interactions and the coordination mechanisms that rule the operational machinery of such a programming paradigm. The semantic framework we have chosen for presenting our results is tile logic, which has the advantage of allowing a uniform treatment of goals and observations and of applying abstract categorical tools for proving the results. As main contributions, we mention the finitary presentation of abstract unification, and a concurrent and coordinated abstract semantics consistent with the most common semantics of logic programming. Moreover, the compositionality of the tile semantics is guaranteed by standard results, as it reduces to checking that the tile systems associated with logic programs enjoy the tile decomposition property. An extension of the approach for handling constraint systems is also discussed. Comment: 42 pages, 24 figures, 3 tables; to appear in the CUP journal Theory and Practice of Logic Programming.
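
    As a point of reference for the unification step mentioned above, the sketch below implements plain Robinson-style syntactic unification in Python (occurs check omitted). It is only an illustration of what abstract unification computes; it does not reflect the paper's tile-logic presentation, and the term encoding and function names are chosen here purely for the example.

```python
# Minimal first-order unification sketch (Robinson-style, no occurs check).
# Terms: variables are strings starting with an uppercase letter;
# compound terms are tuples (functor, arg1, ..., argN).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return a most general unifier extending `subst`, or None on clash."""
    subst = dict(subst or {})
    stack = [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if is_var(a):
            subst[a] = b
        elif is_var(b):
            subst[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) \
                and len(a) == len(b) and a[0] == b[0]:
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None  # functor/arity clash
    return subst

# Example: unify p(X, f(Y)) with p(a, f(b))  =>  {'Y': 'b', 'X': 'a'}
print(unify(("p", "X", ("f", "Y")), ("p", "a", ("f", "b"))))
```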

    The s-semantics approach: theory and applications

    This paper is a general overview of an approach to the semantics of logic programs whose aim is to find notions of models which really capture the operational semantics, and are, therefore, useful for defining program equivalences and for semantics-based program analysis. The approach leads to the introduction of extended interpretations which are more expressive than Herbrand interpretations. The semantics in terms of extended interpretations can be obtained as a result of both an operational (top-down) and a fixpoint (bottom-up) construction. It can also be characterized from the model-theoretic viewpoint, by defining a set of extended models which contains standard Herbrand models. We discuss the original construction modeling computed answer substitutions, its compositional version, and various semantics modeling more concrete observables. We then show how the approach can be applied to several extensions of positive logic programs. We finally consider some applications, mainly in the area of semantics-based program transformation and analysis.
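
    The fixpoint (bottom-up) construction mentioned above can be illustrated, for the ground case only, by iterating the immediate-consequence operator until a fixpoint is reached. The sketch below is a classical Herbrand-style illustration, not the s-semantics itself (which records computed answer substitutions on non-ground atoms); the encoding and names are ad hoc.

```python
# Bottom-up (fixpoint) construction for a ground logic program:
# iterate the immediate-consequence operator T_P until nothing new is added.

def tp(rules, interpretation):
    # One application of T_P: heads of rules whose bodies hold in the interpretation.
    return {head for head, body in rules if all(b in interpretation for b in body)}

def least_model(rules):
    model = set()
    while True:
        new = tp(rules, model) | model
        if new == model:
            return model
        model = new

# A small path/edge example, already ground.
rules = [
    ("edge(a,b)", []),
    ("edge(b,c)", []),
    ("path(a,b)", ["edge(a,b)"]),
    ("path(b,c)", ["edge(b,c)"]),
    ("path(a,c)", ["path(a,b)", "path(b,c)"]),
]
print(least_model(rules))  # all edge and path facts, including path(a,c)
```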

    Transactions and updates in deductive databases

    In this paper we develop a new approach providing a smooth integration of extensional updates and a declarative query language for deductive databases. The approach is based on a declarative specification of updates in rule bodies. Updates are not executed as soon as they are evaluated. Instead, they are collected and then applied to the database when the query evaluation is completed. We call this approach non-immediate update semantics. We provide a top-down and an equivalent bottom-up semantics which reflect the corresponding computation models. We also package sets of updates into transactions and provide a formal semantics for transactions. Then, in order to handle complex transactions, we extend the transaction language with control constructs, still preserving the formal semantics and the semantic equivalence.
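
    The sketch below illustrates the non-immediate update semantics described above: updates requested while a query is evaluated are only collected, and the whole set is applied once evaluation completes. The class and method names (Database, insert, delete, run_transaction) are invented for the example and do not come from the paper.

```python
# Sketch of non-immediate update semantics: updates accumulate during query
# evaluation and are applied to the database only afterwards, as one unit.

class Database:
    def __init__(self, facts):
        self.facts = set(facts)
        self._pending = []                        # updates collected during evaluation

    def insert(self, fact):
        self._pending.append(("insert", fact))    # not applied yet

    def delete(self, fact):
        self._pending.append(("delete", fact))    # not applied yet

    def run_transaction(self, query):
        # Evaluate the query against the *current* state; updates in the
        # query body only accumulate in self._pending.
        result = query(self)
        # Apply all collected updates once evaluation is complete.
        for op, fact in self._pending:
            (self.facts.add if op == "insert" else self.facts.discard)(fact)
        self._pending.clear()
        return result

db = Database({"emp(ann)", "emp(bob)"})

def fire_bob(db):
    ok = "emp(bob)" in db.facts
    if ok:
        db.delete("emp(bob)")
        db.insert("former_emp(bob)")
    return ok

print(db.run_transaction(fire_bob), db.facts)
# True {'emp(ann)', 'former_emp(bob)'}
```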

    Transformations of CLP modules

    We propose a transformation system for CLP programs and modules. The framework is inspired by that of Tamaki and Sato for pure logic programs. However, the use of CLP allows us to introduce some new operations such as splitting and constraint replacement. We provide two sets of applicability conditions. The first one guarantees that the original and the transformed programs have the same computational behaviour, in terms of answer constraints. The second set contains more restrictive conditions that ensure compositionality: we prove that under these conditions the original and the transformed modules have the same answer constraints also when they are composed with other modules. This result is proved by first introducing a new formulation, in terms of trees, of a resultants semantics for CLP. As corollaries we obtain the correctness of both the modular and the non-modular system w.r.t. the least model semantics.
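
    To make the flavour of such transformation steps concrete, the toy sketch below performs an unfolding step on clauses encoded as Python data. It deliberately ignores constraints, head unification, and variable renaming, so it only conveys the shape of the operation, not the applicability conditions studied in the paper.

```python
# Toy unfolding step on clauses represented as plain data.
# A clause is (head, [body atoms]); atoms are (predicate, args).
# Simplification: the unfolded atom and the defining clause heads are assumed
# to use the same variable names, so no unification or renaming is performed.

def unfold(clause, defining_clauses, position):
    """Replace the body atom at `position` by the body of each defining clause
    for the same predicate, yielding one new clause per matching definition."""
    head, body = clause
    atom = body[position]
    out = []
    for d_head, d_body in defining_clauses:
        if d_head[0] == atom[0]:                                  # same predicate
            new_body = body[:position] + d_body + body[position + 1:]
            out.append((head, new_body))
    return out

# p(X) :- q(X).    unfolded with    q(X) :- r(X), s(X).
p_clause = (("p", ["X"]), [("q", ["X"])])
q_clauses = [(("q", ["X"]), [("r", ["X"]), ("s", ["X"])])]
print(unfold(p_clause, q_clauses, 0))
# [(('p', ['X']), [('r', ['X']), ('s', ['X'])])]   i.e.  p(X) :- r(X), s(X).
```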

    Coherent Integration of Databases by Abductive Logic Programming

    We introduce an abductive method for a coherent integration of independent data-sources. The idea is to compute a list of data-facts that should be inserted into the amalgamated database or retracted from it in order to restore its consistency. This method is implemented by an abductive solver, called Asystem, that applies SLDNFA-resolution on a meta-theory that relates different, possibly contradicting, input databases. We also give a pure model-theoretic analysis of the possible ways to `recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. This allows us to characterize the `recovered databases' in terms of the `preferred' (i.e., most consistent) models of the theory. The outcome is an abductive-based application that is sound and complete with respect to a corresponding model-based, preferential semantics, and -- to the best of our knowledge -- is more expressive (thus more general) than any other implementation of coherent integration of databases.
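
    The notion of restoring consistency by inserting or retracting facts can be made concrete with a brute-force search over repairs, as sketched below. The real system computes such repairs abductively via SLDNFA-resolution on a meta-theory; this exhaustive enumeration is only an illustration, and all names in it are invented for the example.

```python
# Brute-force sketch of database repair: find minimal sets of facts to
# insert or retract so that every integrity constraint holds.

from itertools import combinations

def repairs(facts, candidates, constraints):
    """Yield minimal (inserted, retracted) pairs making every constraint true."""
    facts = frozenset(facts)
    universe = sorted(facts | set(candidates))    # facts that may be toggled
    def consistent(db):
        return all(c(db) for c in constraints)
    found = []
    # Enumerate candidate databases by increasing number of changes,
    # so the first non-empty batch contains exactly the minimal repairs.
    for k in range(len(universe) + 1):
        for changed in combinations(universe, k):
            db = set(facts).symmetric_difference(changed)
            if consistent(db):
                found.append((db - facts, set(facts) - db))
        if found:
            return found
    return found

# Two sources disagree, and a constraint forbids keeping both claims.
facts = {"employed(bob)", "fired(bob)", "employed(ann)"}
constraints = [lambda db: not ("employed(bob)" in db and "fired(bob)" in db)]
print(repairs(facts, set(), constraints))
# Two minimal repairs, each retracting one of the two conflicting facts.
```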

    Grammar, Ontology, and the Unity of Meaning

    Words have meaning. Sentences also have meaning, but their meaning is different in kind from any collection of the meanings of the words they contain. I discuss two puzzles related to this difference. The first is how the meanings of the parts of a sentence combine to give rise to a unified sentential meaning, as opposed to a mere collection of disparate meanings (UP1). The second is why the formal ontology of linguistic meaning changes when grammatical structure is built up (UP2). For example, the meaning of a sentence is a proposition evaluable for truth and falsity. In contrast, a collection of the meanings of its parts does not constitute a proposition and is not evaluable for truth. These two puzzles are closely related, since change in formal ontology is the clearest sign of the unity of meaning. The most popular strategy for answering them is taking the meanings of the parts as abstractions from primitive sentence meanings. However, I argue that, given plausible psychological constraints, sentence meanings cannot be taken as explanatory primitives. Drawing on recent work in Generative Grammar and its philosophy, I suggest that the key to both unity questions is to distinguish strictly between lexical and grammatical meaning. The latter is irreducible and determines how lexical content is used in referential acts. I argue that these referential properties determine a formal ontology, which explains why and how formal ontology changes when grammatical structure is built up (UP2). As for UP1, I suggest that, strictly speaking, lexical meanings never combine. Instead, whenever grammar specifies a formal ontology for the lexical meanings entering a grammatical derivation, further lexical (or phrasal) meanings can only specify aspects of this recursive grammatical process. In this way, contemporary grammatical theory can be used to address old philosophical problems

    Towards CIAO-Prolog - A parallel concurrent constraint system

    Abstract is not available