An interactive semantics of logic programming
We apply to logic programming some recently emerging ideas from the field of
reduction-based communicating systems, with the aim of giving evidence of the
hidden interactions and the coordination mechanisms that rule the operational
machinery of such a programming paradigm. The semantic framework we have chosen
for presenting our results is tile logic, which has the advantage of allowing a
uniform treatment of goals and observations and of applying abstract
categorical tools for proving the results. As main contributions, we mention
the finitary presentation of abstract unification, and a concurrent and
coordinated abstract semantics consistent with the most common semantics of
logic programming. Moreover, the compositionality of the tile semantics is
guaranteed by standard results, as it reduces to checking that the tile systems
associated with logic programs enjoy the tile decomposition property. An
extension of the approach for handling constraint systems is also discussed.
Comment: 42 pages, 24 figures, 3 tables, to appear in the CUP journal Theory and Practice of Logic Programming.
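The abstract unification mentioned above is presented categorically in tile logic; as a rough, hypothetical point of reference only, plain first-order unification can be sketched as follows (the tuple encoding of terms and the convention that capitalized strings are variables are our own):

```python
# Minimal first-order unification sketch (illustration only; the paper's
# tile-logic presentation is categorical and is not reproduced here).
# Terms: a variable is a capitalized string, a constant a lowercase string,
# and a compound term a tuple (functor, arg1, ..., argN).

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def walk(t, subst):
    # follow variable bindings to their current value
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # occurs check: does variable v appear inside term t?
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1:])
    return False

def unify(s, t, subst=None):
    """Return a most general unifier of s and t as a dict, or None on failure."""
    subst = dict(subst or {})
    stack = [(s, t)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if is_var(a):
            if occurs(a, b, subst):
                return None          # occurs check fails
            subst[a] = b
        elif is_var(b):
            stack.append((b, a))     # swap so the variable case handles it
        elif isinstance(a, tuple) and isinstance(b, tuple) \
                and a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))  # same functor: unify arguments
        else:
            return None              # clash of functors or constants
    return subst

# f(X, g(Y)) unified with f(a, g(b)) binds X to a and Y to b
mgu = unify(('f', 'X', ('g', 'Y')), ('f', 'a', ('g', 'b')))
```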
The s-semantics approach: theory and applications
This paper is a general overview of an approach to the semantics of logic programs whose aim is to find notions of models which really capture the operational semantics, and are, therefore, useful for defining program equivalences and for semantics-based program analysis. The approach leads to the introduction of extended interpretations, which are more expressive than Herbrand interpretations. The semantics in terms of extended interpretations can be obtained as the result of both an operational (top-down) and a fixpoint (bottom-up) construction. It can also be characterized from the model-theoretic viewpoint, by defining a set of extended models which contains the standard Herbrand models. We discuss the original construction modeling computed answer substitutions, its compositional version, and various semantics modeling more concrete observables. We then show how the approach can be applied to several extensions of positive logic programs. We finally consider some applications, mainly in the area of semantics-based program transformation and analysis.
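The computed-answer observable this approach captures can be illustrated on the classic pair of programs that have the same least Herbrand model yet different computed answers. The following minimal sketch (unit clauses over one-argument predicates only; encoding and names are our own, and renaming-apart of clause variables is elided) is an illustration, not the paper's construction:

```python
# A program is a list of unit clauses (pred, arg); capitalized strings are
# variables. answers() returns one computed answer substitution per matching
# clause: binding the goal variable to a clause variable stands for the
# "most general" answer in which the goal variable stays free.

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def answers(facts, goal):
    """Computed answer substitutions for an atomic goal against unit clauses."""
    out = []
    gp, ga = goal
    for pred, arg in facts:
        if pred != gp:
            continue
        if is_var(ga):
            out.append({ga: arg})   # goal variable bound (possibly to a variable)
        elif is_var(arg) or ga == arg:
            out.append({})          # clause matches without binding the goal
    return out

p1 = [('p', 'X')]               # p(X).
p2 = [('p', 'X'), ('p', 'a')]   # p(X).  p(a).

# p1 and p2 have the same least Herbrand model (every ground p(t) succeeds),
# but ?- p(Y) has one computed answer against p1 and two against p2,
# so only the computed-answer semantics distinguishes them.
```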
Transactions and updates in deductive databases
In this paper we develop a new approach providing a smooth integration of extensional updates and a declarative query language for deductive databases. The approach is based on a declarative specification of updates in rule bodies. Updates are not executed as soon as they are evaluated. Instead, they are collected and then applied to the database when the query evaluation is completed. We call this approach non-immediate update semantics. We provide a top-down and an equivalent bottom-up semantics which reflect the corresponding computation models. We also package sets of updates into transactions and provide a formal semantics for transactions. Then, in order to handle complex transactions, we extend the transaction language with control constructs while still preserving the formal semantics and the semantic equivalence.
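A minimal sketch of the non-immediate update semantics described above, with hypothetical class and method names of our own: updates requested during evaluation are only collected, queries keep seeing the pre-transaction state, and the pending set is applied (or discarded) only when the transaction ends:

```python
# Deferred ("non-immediate") updates: request_insert/request_delete collect
# updates instead of executing them; commit() applies them all at once.

class DeductiveDB:
    def __init__(self, facts):
        self.facts = set(facts)
        self.pending = []                      # collected (op, fact) pairs

    def request_insert(self, fact):
        self.pending.append(('insert', fact))  # recorded, not yet applied

    def request_delete(self, fact):
        self.pending.append(('delete', fact))

    def query(self, fact):
        # evaluation always sees the pre-transaction state
        return fact in self.facts

    def commit(self):
        # transaction end: apply the collected updates in order
        for op, fact in self.pending:
            (self.facts.add if op == 'insert' else self.facts.discard)(fact)
        self.pending.clear()

    def rollback(self):
        self.pending.clear()                   # discard the whole transaction

db = DeductiveDB({('emp', 'ann')})
db.request_insert(('emp', 'bob'))
db.query(('emp', 'bob'))   # still False: the update is pending
db.commit()
db.query(('emp', 'bob'))   # True after commit
```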
Transformations of CLP modules
We propose a transformation system for CLP programs and modules. The framework is inspired by that of Tamaki and Sato for pure logic programs. However, the use of CLP allows us to introduce some new operations such as splitting and constraint replacement. We provide two sets of applicability conditions. The first one guarantees that the original and the transformed programs have the same computational behaviour, in terms of answer constraints. The second set contains more restrictive conditions that ensure compositionality: we prove that under these conditions the original and the transformed modules have the same answer constraints also when they are composed with other modules. This result is proved by first introducing a new formulation, in terms of trees, of a resultants semantics for CLP. As corollaries we obtain the correctness of both the modular and the non-modular system w.r.t. the least model semantics.
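As a hypothetical illustration of one such operation, constraint replacement substitutes a body constraint with one admitting the same solutions, leaving the answer constraints unchanged; the following sketch (finite-domain encoding and names are our own) checks such a replacement over a small domain:

```python
# A one-variable clause body is modeled as a predicate over a finite domain;
# its "answer constraint" is represented extensionally as its solution set.

def answer_set(body_constraint, domain):
    """Solution set of a one-variable clause body over a finite domain."""
    return {x for x in domain if body_constraint(x)}

domain = range(-10, 11)
original = lambda x: x > 0 and x > -5   # body: X > 0, X > -5
replaced = lambda x: x > 0              # body: X > 0 (redundant conjunct dropped)

# The replacement is applicable: both bodies admit exactly the same answers,
# so the transformed clause has the same answer constraint as the original.
same_answers = answer_set(original, domain) == answer_set(replaced, domain)
```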
Coherent Integration of Databases by Abductive Logic Programming
We introduce an abductive method for a coherent integration of independent
data-sources. The idea is to compute a list of data-facts that should be
inserted into the amalgamated database or retracted from it in order to restore
its consistency. This method is implemented by an abductive solver, called
Asystem, that applies SLDNFA-resolution on a meta-theory that relates
different, possibly contradicting, input databases. We also give a pure
model-theoretic analysis of the possible ways to `recover' consistent data from
an inconsistent database in terms of those models of the database that exhibit
as little inconsistent information as reasonably possible. This allows us to
characterize the `recovered databases' in terms of the `preferred' (i.e., most
consistent) models of the theory. The outcome is an abductive-based application
that is sound and complete with respect to a corresponding model-based,
preferential semantics, and -- to the best of our knowledge -- is more
expressive (thus more general) than any other implementation of coherent
integration of databases.
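The repair problem described above can be specified, independently of the SLDNFA-based Asystem, as finding the subset-minimal sets of insertions and retractions that restore the integrity constraints. A brute-force sketch of that specification (encoding and names are our own; the real solver computes such repairs abductively rather than by enumeration):

```python
from itertools import chain, combinations

# A database is a set of facts; an integrity constraint is a predicate over a
# candidate database. A repair is a pair (insertions, retractions) producing a
# database that satisfies every constraint, minimal w.r.t. set inclusion.

def repairs(db, candidates, constraints):
    """All subset-minimal (insert, retract) pairs making every constraint hold."""
    facts = set(db) | set(candidates)

    def subsets(s):
        s = list(s)
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

    found = []
    for new_db in map(set, subsets(facts)):
        if all(c(new_db) for c in constraints):
            ins = frozenset(new_db - set(db))
            ret = frozenset(set(db) - new_db)
            found.append((ins, ret))
    # keep only repairs whose change set is not a strict superset of another's
    return [(i, r) for (i, r) in found
            if not any((i2 | r2) < (i | r) for (i2, r2) in found)]

# Two amalgamated sources contribute p and q, but a constraint forbids both:
# the minimal repairs are "retract p" and "retract q".
db = {'p', 'q'}
ic = [lambda m: not ({'p', 'q'} <= m)]
minimal_repairs = repairs(db, set(), ic)
```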
CP debugging tools: Clarification of functionalities and selection of the tools
Abstract is not available
Over-Constrained Systems in CLP and CSP
This thesis is concerned with constraint-based logical and computational frameworks for resolving and relaxing over-constrained systems. The context is provided by two such frameworks, Hierarchical Constraint Logic Programming (HCLP) and Partial Constraint Satisfaction Problem techniques (PCSP), both of which have been extensively discussed in the literature. Our work is driven by the reasons why over-constrained systems arise, the characteristics of the ‘ideal’ paradigm for resolving them, and the issue of compositionality which is very important in general if we wish to examine large systems by examining and then combining smaller parts. We abstract away from programming language issues in order to focus on constraint solving.
The main original work of this thesis is divided into three parts. Firstly we present a complete method for transforming between the HCLP and PCSP representations of a problem, thus showing that theoretically they have equivalent expressive power. Secondly, having discussed compositionality in general, we present a two-stage variant of HCLP; the first stage is compositional but calculates a superset of the solutions we expect from HCLP. The second stage removes precisely those solutions which are not acceptable to HCLP, but at the cost of re-introducing HCLP’s non-compositional behaviour. We also discuss the compositional aspects of PCSP.
The third part of this thesis presents Gocs, our system which allows the use of both the HCLP and PCSP approaches to problem relaxation and ordering. The Gocs integrated framework has HCLP and PCSP as special cases and also subsumes all of their separate advantages, when considering the characteristics of the ideal system for relaxing and resolving over-constrained problems. We present examples throughout the thesis, some of which are comparative and so may be used to substantiate our claims. Finally, we present conclusions and discuss further work.
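As a rough, hypothetical illustration of the PCSP side of this design space (encoding and names are our own): when the full constraint set is over-constrained, one keeps every required constraint and maximises the number of satisfied soft constraints:

```python
from itertools import product

# Required constraints must hold; soft constraints are satisfied when possible.
# relax() enumerates the finite search space and returns all assignments that
# satisfy every required constraint with a maximal count of soft constraints.

def relax(domains, required, soft):
    best, best_score = [], -1
    for vals in product(*domains.values()):
        asg = dict(zip(domains, vals))
        if not all(c(asg) for c in required):
            continue                       # hard constraints are never relaxed
        score = sum(c(asg) for c in soft)  # number of soft constraints satisfied
        if score > best_score:
            best, best_score = [asg], score
        elif score == best_score:
            best.append(asg)
    return best, best_score

domains = {'x': [0, 1, 2], 'y': [0, 1, 2]}
required = [lambda a: a['x'] != a['y']]
soft = [lambda a: a['x'] == 2,        # jointly unsatisfiable with the next one
        lambda a: a['x'] < a['y']]

# No assignment satisfies both soft constraints, so the relaxed solutions
# are exactly those satisfying the required constraint plus one soft one.
sols, score = relax(domains, required, soft)
```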
Grammar, Ontology, and the Unity of Meaning
Words have meaning. Sentences also have meaning, but their meaning is different in kind from any collection of the meanings of the words they contain. I discuss two puzzles related to this difference. The first is how the meanings of the parts of a sentence combine to give rise to a unified sentential meaning, as opposed to a mere collection of disparate meanings (UP1). The second is why the formal ontology of linguistic meaning changes when grammatical structure is built up (UP2). For example, the meaning of a sentence is a proposition evaluable for truth and falsity. In contrast, a collection of the meanings of its parts does not constitute a proposition and is not evaluable for truth. These two puzzles are closely related, since change in formal ontology is the clearest sign of the unity of meaning. The most popular strategy for answering them is taking the meanings of the parts as abstractions from primitive sentence meanings. However, I argue that, given plausible psychological constraints, sentence meanings cannot be taken as explanatory primitives.
Drawing on recent work in Generative Grammar and its philosophy, I suggest that the key to both unity questions is to distinguish strictly between lexical and grammatical meaning. The latter is irreducible and determines how lexical content is used in referential acts. I argue that these referential properties determine a formal ontology, which explains why and how formal ontology changes when grammatical structure is built up (UP2). As for UP1, I suggest that, strictly speaking, lexical meanings never combine. Instead, whenever grammar specifies a formal ontology for the lexical meanings entering a grammatical derivation, further lexical (or phrasal) meanings can only specify aspects of this recursive grammatical process. In this way, contemporary grammatical theory can be used to address old philosophical problems.
Towards CIAO-Prolog - A parallel concurrent constraint system
Abstract is not available