
    A Generic Module System for Web Rule Languages: Divide and Rule

    An essential feature in practically usable programming languages is the ability to encapsulate functionality in reusable modules. Modules make large-scale projects tractable by humans. For Web and Semantic Web programming, many rule-based languages, e.g. XSLT, CSS, Xcerpt, SWRL, SPARQL, and RIF Core, have evolved or are currently evolving. Rules are easy to comprehend and specify, even for non-technical users, e.g. business managers, hence easing contributions to the Web. Unfortunately, those contributions are arguably doomed to exist in isolation, as most rule languages are conceived without modularity, hence without an easy mechanism for integration and reuse. In this paper, a generic module system applicable to many rule languages is presented. We demonstrate and apply our generic module system to a Datalog-like rule language, close in spirit to RIF Core. The language is gently introduced using the EU-Rent use case. Using the Reuseware Composition Framework, the module system for a concrete language can be obtained almost for free, provided the language adheres to the formal notions introduced in this paper.
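
    The following sketch (Python, with invented names, unrelated to the paper's actual formalism or the Reuseware framework) illustrates the basic idea behind such a module system: rules are grouped into modules with an export interface, and composition renames private predicates apart so modules can be reused without name clashes.

    ```python
    # A minimal sketch of module composition for a Datalog-like rule
    # language. Each module exports some predicates; the rest are private
    # and get renamed apart on composition. All names are illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        head: tuple          # atom: (predicate, arg1, ..., argn)
        body: tuple = ()     # tuple of atoms

    @dataclass
    class Module:
        name: str
        rules: list
        exports: set         # predicate names visible to importers

    def compose(m1, m2):
        """Union of two modules, with private predicates renamed apart."""
        def qualify(module, atom):
            pred, *args = atom
            if pred not in module.exports:      # private: qualify the name
                pred = f"{module.name}.{pred}"
            return (pred, *args)

        def rename(module):
            return [Rule(qualify(module, r.head),
                         tuple(qualify(module, a) for a in r.body))
                    for r in module.rules]

        return Module(f"({m1.name}+{m2.name})",
                      rename(m1) + rename(m2),
                      m1.exports | m2.exports)

    # Example in an EU-Rent flavour: 'rental' reuses 'fleet' via its export.
    fleet = Module("fleet", [Rule(("available", "X"), (("car", "X"),))],
                   {"available"})
    rental = Module("rental", [Rule(("offer", "X"), (("available", "X"),))],
                    {"offer"})
    for rule in compose(fleet, rental).rules:
        print(rule)
    ```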

    An interactive semantics of logic programming

    We apply to logic programming some recently emerging ideas from the field of reduction-based communicating systems, with the aim of giving evidence of the hidden interactions and the coordination mechanisms that rule the operational machinery of such a programming paradigm. The semantic framework we have chosen for presenting our results is tile logic, which has the advantage of allowing a uniform treatment of goals and observations and of applying abstract categorical tools for proving the results. As main contributions, we mention the finitary presentation of abstract unification, and a concurrent and coordinated abstract semantics consistent with the most common semantics of logic programming. Moreover, the compositionality of the tile semantics is guaranteed by standard results, as it reduces to checking that the tile systems associated to logic programs enjoy the tile decomposition property. An extension of the approach for handling constraint systems is also discussed. Comment: 42 pages, 24 figures, 3 tables, to appear in the CUP journal of Theory and Practice of Logic Programming.
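
    As background for the abstract-unification contribution mentioned above, here is a minimal sketch of textbook syntactic unification in Python. The paper's categorical, tile-logic presentation is far more general; this is illustration only, and the term encoding is an assumption of the sketch.

    ```python
    # Syntactic first-order unification (occurs check omitted for brevity).
    # Variables are capitalised strings; compound terms are
    # (functor, arg1, ..., argn) tuples.

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t, subst):
        """Chase substitution bindings to the representative of t."""
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def unify(t1, t2, subst=None):
        """Return a most general unifier extending subst, or None on clash."""
        if subst is None:
            subst = {}
        t1, t2 = walk(t1, subst), walk(t2, subst)
        if t1 == t2:
            return subst
        if is_var(t1):
            return {**subst, t1: t2}
        if is_var(t2):
            return {**subst, t2: t1}
        if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
            for a, b in zip(t1, t2):
                subst = unify(a, b, subst)
                if subst is None:
                    return None
            return subst
        return None

    # p(X, f(Y)) unifies with p(a, f(b)) under {X: a, Y: b}.
    print(unify(("p", "X", ("f", "Y")), ("p", "a", ("f", "b"))))
    ```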

    αCheck: a mechanized metatheory model-checker

    The problem of mechanically formalizing and proving metatheoretic properties of programming language calculi, type systems, operational semantics, and related formal systems has received considerable attention recently. However, the dual problem of searching for errors in such formalizations has attracted comparatively little attention. In this article, we present αCheck, a bounded model-checker for metatheoretic properties of formal systems specified using nominal logic. In contrast to the current state of the art for metatheory verification, our approach is fully automatic, does not require expertise in theorem proving on the part of the user, and produces counterexamples in the case that a flaw is detected. We present two implementations of this technique, one based on negation-as-failure and one based on negation elimination, along with experimental results showing that these techniques are fast enough to be used interactively to debug systems as they are developed. Comment: Under consideration for publication in Theory and Practice of Logic Programming (TPLP).
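
    To illustrate the bounded-checking idea, here is a hedged sketch in Python: it enumerates all terms of a toy object language up to a depth bound and searches for a counterexample to a conjectured property, much as a bounded model-checker does. The toy language, the planted bug, and all names below are invented for illustration and are not αCheck's actual machinery, which works over nominal-logic specifications.

    ```python
    # Bounded counterexample search over a toy boolean-expression language.

    def terms(depth):
        """Yield all boolean expressions up to the given nesting depth."""
        if depth == 0:
            yield True
            yield False
            return
        for t in terms(depth - 1):
            yield t
            yield ("not", t)
        for a in terms(depth - 1):
            for b in terms(depth - 1):
                yield ("and", a, b)

    def evaluate(t):
        if isinstance(t, bool):
            return t
        if t[0] == "not":
            return not evaluate(t[1])
        return evaluate(t[1]) and evaluate(t[2])

    def buggy_simplify(t):
        """Planted bug: rewrites not(and(a, b)) to and(not a, not b)."""
        if isinstance(t, bool):
            return t
        if t[0] == "not" and isinstance(t[1], tuple) and t[1][0] == "and":
            return ("and", ("not", buggy_simplify(t[1][1])),
                           ("not", buggy_simplify(t[1][2])))
        if t[0] == "not":
            return ("not", buggy_simplify(t[1]))
        return ("and", buggy_simplify(t[1]), buggy_simplify(t[2]))

    def check(prop, depth):
        """Bounded search for a counterexample to prop; None if none found."""
        for t in terms(depth):
            if not prop(t):
                return t
        return None

    print(check(lambda t: evaluate(buggy_simplify(t)) == evaluate(t), depth=2))
    # -> ('not', ('and', True, False)): the planted De Morgan bug is exposed.
    ```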

    Logic programming and negation: a survey


    Integrity constraints in deductive databases

    A deductive database is a logic program that generalises the concept of a relational database. Integrity constraints are properties that the data of a database are required to satisfy, and in the context of logic programming they are expressed as closed formulae. It is desirable to check the integrity of a database at the end of each transaction which changes the database. The simplest approach to checking integrity involves the evaluation of each constraint whenever the database is updated. However, such an approach is too inefficient, especially for large databases, and does not make use of the fact that the database satisfies the constraints prior to the update. A method, called the path finding method, is proposed for checking integrity in definite deductive databases, treating constraints as closed first-order formulae. A comparative evaluation is made between previously described methods and the proposed one. Closed general formulae are used to express aggregate constraints, and Lloyd et al.'s simplification method is generalised to cope with these constraints. A new definition of constraint satisfiability is introduced for indefinite deductive databases, and the path finding method is generalised to check integrity in the presence of static constraints only. To evaluate a constraint in an indefinite deductive database while taking full advantage of the query evaluation mechanism underlying the database, a query evaluator is proposed which is based on a semantics, called negation as possible failure, for inferring negative information from an indefinite deductive database. Transitional constraints are expressed using action relations, and it is shown that transitional constraints can be handled in definite deductive databases in the same way as static constraints if the underlying database is suitably extended. The concept of implicit update is introduced, and the path finding method is extended to compute facts which are present in action relations. The extended method is capable of checking integrity in definite deductive databases in the presence of transitional constraints. Combining different generalisations of the path finding method to check integrity in deductive databases in the presence of arbitrary constraints is discussed. An extension of the data manipulation language of SQL is proposed to express a wider range of integrity constraints. This class of constraints can be maintained in a database with the tools provided in this thesis.
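
    The efficiency argument above, that re-evaluating every constraint after each update wastes the knowledge that the database satisfied the constraints beforehand, can be illustrated with a small Python sketch. The relations, constraint, and function names are invented; this shows the general simplification principle, not the thesis's path finding method itself.

    ```python
    # Naive vs. incremental integrity checking on insertion of a fact.

    db = {("employee", "ann"), ("employee", "bob"), ("manager", "ann")}

    def violations(facts):
        """Constraint: every manager must also be an employee."""
        return [who for (rel, who) in facts
                if rel == "manager" and ("employee", who) not in facts]

    def naive_check(fact):
        """Re-evaluate the whole constraint after the insertion."""
        return violations(db | {fact})

    def incremental_check(fact):
        """Check only what the inserted fact can affect, relying on the
        database having satisfied the constraint before the update.
        (Deletions would need the dual check and are omitted here.)"""
        rel, who = fact
        if rel == "manager" and ("employee", who) not in db | {fact}:
            return [who]      # the only violation this insert can create
        return []             # inserts into other relations are safe here

    print(naive_check(("manager", "eve")))        # ['eve'] -- scans all managers
    print(incremental_check(("manager", "eve")))  # ['eve'] -- one membership test
    ```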

    Normal Multimodal Logics: Automatic Deduction and Logic Programming Extension


    Modular Logic Programming: Full Compositionality and Conflict Handling for Practical Reasoning

    With the ubiquity of data and the profusion of available knowledge, there is nowadays a need to reason from multiple sources of often incomplete and uncertain knowledge. Our goal was to provide a way to combine declarative knowledge bases – represented as logic programming modules under the answer set semantics – as well as the individual results one has already inferred from them, without having to recalculate the results for their composition and without having to explicitly know the original logic programming encodings that produced such results. This posed many challenges, such as how to deal with fundamental problems of modular frameworks for logic programming, namely how to define a general compositional semantics that allows us to compose unrestricted modules. Building upon existing logic programming approaches, we devised a framework capable of composing generic logic programming modules while preserving the crucial property of compositionality, which informally means that the combinations of the models of individual modules are the models of the union of the modules. We are also still able to reason in the presence of knowledge containing incoherences, informally characterised by a logic program that has no answer set due to a cyclic dependency of an atom on its own default negation. In this thesis we also discuss how the same approach can be extended to deal with probabilistic knowledge in a modular and compositional way. We depart from the Modular Logic Programming approach of Oikarinen & Janhunen (2008) and Janhunen et al. (2009), which achieved a restricted form of compositionality of answer set programming modules. We aim at generalising this framework of modular logic programming, and start by lifting the restrictive conditions that were originally imposed, using alternative ways of combining these (as we call them) Generalised Modular Logic Programs. We then deal with conflicts arising in generalised modular logic programming and provide modular justifications and debugging for this setting, where justification models answer the question "Why is a given interpretation indeed an answer set?" and debugging models answer the question "Why is a given interpretation not an answer set?" In summary, our research deals with the problem of formally devising a generic modular logic programming framework, providing: operators for combining arbitrary modular logic programs together with a compositional semantics; a characterisation of the conflicts that occur when composing access control policies, which generalises to our setting, together with syntactic ways of dealing with them (a unified treatment of justification and debugging of logic programs) and semantic ones (a new semantics capable of dealing with incoherences); and an extension of modular logic programming to a probabilistic setting. These goals are covered by published work. A prototypical tool implementing the unification of justifications and debugging is available for download from http://cptkirk.sourceforge.net
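
    The compositionality property described above, combining already-computed answer sets instead of re-solving the union of the programs, can be illustrated with a small Python sketch. The join below, which keeps only pairs of answer sets that agree on the atoms the modules share, is in the spirit of the module theorem of Oikarinen & Janhunen cited above, not its exact statement; data and names are invented.

    ```python
    # Composing module-level answer sets without re-running a solver.
    from itertools import product

    def join_answer_sets(sets1, atoms1, sets2, atoms2):
        """Models of the composition = pairs of module models that agree
        on the shared atoms, merged together."""
        shared = atoms1 & atoms2
        return [s1 | s2
                for s1, s2 in product(sets1, sets2)
                if s1 & shared == s2 & shared]

    # Module 1 over atoms {a, b} has answer sets {a} and {b};
    # module 2 over atoms {b, c} has answer sets {b, c} and {}.
    m1 = [frozenset({"a"}), frozenset({"b"})]
    m2 = [frozenset({"b", "c"}), frozenset()]
    print(join_answer_sets(m1, {"a", "b"}, m2, {"b", "c"}))
    # -> [frozenset({'a'}), frozenset({'b', 'c'})]
    #    (pairs that disagree on the shared atom 'b' are discarded)
    ```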

    Implementation of Web Query Languages Reconsidered

    Visions of the next generation Web such as the "Semantic Web" or the "Web 2.0" have triggered the emergence of a multitude of data formats. These formats have different characteristics as far as the shape of data is concerned (for example tree- vs. graph-shaped). They are accompanied by a puzzlingly large number of query languages, each limited to one data format. Thus, a key feature of the Web, namely to make it possible to access anything published by anyone, is compromised. This thesis is devoted to versatile query languages capable of accessing data in a variety of Web formats. The issue is addressed from three angles: language design; common, yet uniform semantics; and common, yet uniform evaluation. Thus the thesis is divided into three parts: First, we consider the query language Xcerpt as an example of the advocated class of versatile Web query languages. Using this concrete exemplar allows us to clarify and discuss the vision of versatility in detail. Second, a number of query languages, XPath, XQuery, SPARQL, and Xcerpt, are translated into a common intermediary language, CIQLog. This language has a purely logical semantics, which makes it easily amenable to optimizations. As a side effect, this provides, to the best of our knowledge, the first logical semantics for XQuery and SPARQL. It is a very useful tool for understanding the commonalities and differences of the considered languages. Third, the intermediate logical language is translated into a query algebra, CIQCAG. The core feature of CIQCAG is that it scales from tree- to graph-shaped data and queries without efficiency losses when tree data and queries are considered: it is shown that, in these cases, optimal complexities are achieved. CIQCAG is also shown to evaluate each of the aforementioned query languages with a complexity at least as good as the best previously known evaluation methods. For example, navigational XPath is evaluated with space complexity O(q·d) and time complexity O(q·n), where q is the query size, n the data size, and d the depth of the (tree-shaped) data. CIQCAG is further shown to provide linear time and space evaluation of tree-shaped queries for a larger class of graph-shaped data than any previously proposed method. This larger class of graph-shaped data, called continuous-image graphs (CIGs for short), is introduced for the first time in this thesis. A (directed) graph is a CIG if its nodes can be totally ordered in such a manner that, for this order, the children of any node form a continuous interval. CIQCAG achieves these properties by employing a novel data structure, called the sequence map, that allows an efficient evaluation of tree-shaped queries, or of the tree-shaped cores of graph-shaped queries, on any graph-shaped data. While being ideally suited to trees and CIGs, the data structure gracefully degrades to unrestricted graphs, yielding a remarkably efficient evaluation on graph-shaped data that only a few edges prevent from being trees or CIGs.
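
    The CIG definition above lends itself to a small illustration: given a candidate total order on the nodes, checking the continuous-interval condition is straightforward. Finding such an order is the hard part, which the thesis's machinery addresses; the Python sketch below, with invented names, only verifies a given order.

    ```python
    # Verify the continuous-image-graph (CIG) condition for a given order:
    # every node's children must occupy a contiguous interval of the order.

    def is_cig_under_order(children, order):
        """children: node -> set of children; order: total order on nodes."""
        position = {node: i for i, node in enumerate(order)}
        for kids in children.values():
            if kids:
                spots = sorted(position[k] for k in kids)
                # Contiguous interval <=> span equals number of children.
                if spots[-1] - spots[0] + 1 != len(spots):
                    return False
        return True

    g = {"r": {"a", "b"}, "a": {"b", "c"}, "b": set(), "c": set()}
    print(is_cig_under_order(g, ["r", "a", "b", "c"]))  # True: both child sets contiguous
    print(is_cig_under_order(g, ["r", "b", "a", "c"]))  # False: a's children {b, c} split
    ```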