
    A Rule-Based Approach to Analyzing Database Schema Objects with Datalog

    Database schema elements such as tables, views, triggers, and functions are typically defined with many interrelationships. In order to support database users in understanding a given schema, a rule-based approach to analyzing the respective dependencies using Datalog expressions is proposed. We show that many interesting properties of schema elements can be systematically determined this way. The expressiveness of the proposed analysis is illustrated with the problem of computing induced functional dependencies for derived relations. The propagation of functional dependencies plays an important role in data integration and query optimization but is undecidable in general. Nevertheless, our rule-based analysis systematically covers all relational operators as well as linear recursive expressions, showing the depth of analysis our proposal makes possible. The analysis of functional dependencies is well integrated into a uniform approach to analyzing dependencies between schema elements in general. Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854).
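
    As a toy illustration of this rule-based style of analysis (not the paper's actual Datalog rules), the following Python sketch stores direct references between schema objects as facts and iterates a Datalog-style rule to a fixpoint to derive transitive dependencies; all object names are hypothetical.

        # Hypothetical facts: (dependent, referenced) pairs of schema objects.
        references = {
            ("view_orders", "table_orders"),
            ("trigger_audit", "table_orders"),
            ("view_summary", "view_orders"),
        }

        # depends_on(X, Y) :- references(X, Y).
        # depends_on(X, Z) :- references(X, Y), depends_on(Y, Z).
        def depends_on(refs):
            dep = set(refs)
            changed = True
            while changed:
                changed = False
                for (x, y) in refs:
                    for (y2, z) in list(dep):
                        if y == y2 and (x, z) not in dep:
                            dep.add((x, z))
                            changed = True
            return dep

        print(depends_on(references))
        # e.g. view_summary transitively depends on table_orders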

    Transforming specifications of observable behaviour into programs

    A methodology for deriving programs from specifications of observable behaviour is described. The class of processes to which this methodology applies includes those whose state changes are fully definable by labelled transition systems, for example, communicating processes without internal state changes. A logic program representation of such labelled transition systems is proposed, interpreters based on path-searching techniques are defined, and the use of partial evaluation techniques to derive the executable programs is described.
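
    A minimal sketch of the idea, with illustrative state and label names: the labelled transition system is a relation trans(State, Label, State'), and a breadth-first path-searching interpreter derives an observable trace from an initial state to a goal state.

        from collections import deque

        # Illustrative labelled transition system:
        # state -> list of (label, successor state).
        trans = {
            "idle":    [("req", "waiting")],
            "waiting": [("ack", "busy"), ("timeout", "idle")],
            "busy":    [("done", "idle")],
        }

        def find_trace(start, goal):
            """Return a list of labels taking `start` to `goal`, or None."""
            queue = deque([(start, [])])
            seen = {start}
            while queue:
                state, trace = queue.popleft()
                if state == goal:
                    return trace
                for label, nxt in trans.get(state, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, trace + [label]))
            return None

        print(find_trace("idle", "busy"))   # ['req', 'ack']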

    Query Evaluation in Deductive Databases

    It is desirable to answer queries posed to deductive databases by computing fixpoints, because such computations are directly amenable to set-oriented fact processing. However, the classical fixpoint procedures based on bottom-up processing (the naive and semi-naive methods) are rather primitive and often inefficient. In this article, we rely on bottom-up meta-interpretation for formalizing a new fixpoint procedure that performs a different kind of reasoning: we specify a top-down query answering method, which we call the Backward Fixpoint Procedure. Then, we reconsider query evaluation methods for recursive databases. First, we show that the methods based on rewriting, on the one hand, and the methods based on resolution, on the other, implement the Backward Fixpoint Procedure. Second, we interpret the rewritings of the Alexander and Magic Set methods as specializations of the Backward Fixpoint Procedure. Finally, we argue that such a rewriting is also needed in a database context for efficiently implementing the resolution-based methods. Thus, the methods based on rewriting and the methods based on resolution implement the same top-down evaluation of the original database rules by means of auxiliary rules processed bottom-up.
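
    For reference, here is a minimal Python sketch of the semi-naive bottom-up method mentioned above, applied to a transitive-closure program; the relation names are illustrative, and this is not the Backward Fixpoint Procedure itself.

        #   path(X, Y) :- edge(X, Y).
        #   path(X, Z) :- path(X, Y), edge(Y, Z).
        edge = {(1, 2), (2, 3), (3, 4)}

        def semi_naive_path(edge):
            path = set(edge)        # facts derived so far
            delta = set(edge)       # facts new in the last round
            while delta:
                # Only join the *new* facts against edge, instead of
                # re-deriving everything as the naive method would.
                new = {(x, z) for (x, y) in delta
                              for (y2, z) in edge if y == y2}
                delta = new - path
                path |= delta
            return path

        print(sorted(semi_naive_path(edge)))
        # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]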

    Whatever happened to meta-programming? (Invited talk)


    Confluence of CHR Revisited: Invariants and Modulo Equivalence

    Abstract simulation of one transition system by another is introduced as a means to simulate a potentially infinite class of similar transition sequences within a single transition sequence. This is useful for proving confluence under invariants of a given system, as it may reduce the number of proof cases to consider from infinitely many to finitely many. The classical confluence results for Constraint Handling Rules (CHR) can be explained in this way, using CHR as a simulation of itself. Using an abstract simulation based on a ground representation, we extend these results to confluence under invariants and confluence modulo equivalence, which had not previously been handled in a satisfactory way. Comment: Pre-proceedings paper presented at the 28th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2018), Frankfurt am Main, Germany, 4-6 September 2018 (arXiv:1808.03326).
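
    To give a flavour of the proof obligation involved (for a plain finite transition system, not CHR itself), this illustrative Python sketch checks local joinability: every one-step divergence s -> a, s -> b must be rejoinable at a common reachable state. By Newman's lemma, local confluence plus termination yields confluence; the transition relation here is hypothetical.

        from itertools import combinations

        steps = {            # illustrative transition relation
            "s": {"a", "b"},
            "a": {"c"},
            "b": {"c"},
        }

        def reachable(x):
            """All states reachable from x, including x itself."""
            seen, todo = {x}, [x]
            while todo:
                y = todo.pop()
                for z in steps.get(y, ()):
                    if z not in seen:
                        seen.add(z)
                        todo.append(z)
            return seen

        def locally_joinable(state):
            succs = steps.get(state, set())
            return all(reachable(a) & reachable(b)
                       for a, b in combinations(succs, 2))

        print(all(locally_joinable(s) for s in reachable("s")))   # True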

    Strong Completeness Results for Paraconsistent Logic Programming

    In [6], we introduced a means of allowing logic programs to contain negations in both the head and the body of a clause. Such programs were called generally Horn programs (GHPs, for short). The model-theoretic semantics of GHPs were defined in terms of four-valued Belnap lattices [5]. For a class of programs called well-behaved programs, an SLD-resolution-like proof procedure was introduced. This procedure was proven (under certain restrictions) to be sound for existential queries and complete for ground queries. In this paper, we remove the restriction that programs be well-behaved and extend our soundness and completeness results to arbitrary existential queries and arbitrary GHPs. This is the strongest possible completeness result for GHPs. The results reported here apply to the design of very large knowledge bases and to the processing of queries against knowledge bases that may contain erroneous information.
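
    The four-valued Belnap lattice underlying this semantics can be sketched concretely: a truth value records evidence for and against an atom, giving the four values None, True, False, and Both. This Python encoding is only illustrative, not the formulation of [5] or [6].

        from typing import NamedTuple

        class Four(NamedTuple):
            pro: bool   # some evidence that the atom is true
            con: bool   # some evidence that the atom is false

        NONE  = Four(False, False)
        TRUE  = Four(True,  False)
        FALSE = Four(False, True)
        BOTH  = Four(True,  True)

        def neg(a):      # negation swaps evidence for and against
            return Four(a.con, a.pro)

        def conj(a, b):  # meet in the truth ordering
            return Four(a.pro and b.pro, a.con or b.con)

        def disj(a, b):  # join in the truth ordering
            return Four(a.pro or b.pro, a.con and b.con)

        assert conj(TRUE, BOTH) == BOTH
        assert disj(FALSE, NONE) == NONE
        assert neg(BOTH) == BOTH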

    The s-semantics approach: theory and applications

    This paper is a general overview of an approach to the semantics of logic programs whose aim is to find notions of models that truly capture the operational semantics and are therefore useful for defining program equivalences and for semantics-based program analysis. The approach leads to the introduction of extended interpretations, which are more expressive than Herbrand interpretations. The semantics in terms of extended interpretations can be obtained as the result of both an operational (top-down) and a fixpoint (bottom-up) construction. It can also be characterized from the model-theoretic viewpoint by defining a set of extended models which contains the standard Herbrand models. We discuss the original construction modeling computed answer substitutions, its compositional version, and various semantics modeling more concrete observables. We then show how the approach can be applied to several extensions of positive logic programs. We finally consider some applications, mainly in the area of semantics-based program transformation and analysis.
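
    In the simplest (ground) setting, the bottom-up construction referred to above is the least fixpoint of the immediate consequence operator T_P; the s-semantics refines this by admitting non-ground atoms in interpretations, so that computed answer substitutions are captured. A minimal Python sketch of the ground case, with an illustrative program:

        # Each clause is (head, [body atoms]); atoms are ground here.
        program = [
            ("edge(a,b)", []),
            ("edge(b,c)", []),
            ("path(a,b)", ["edge(a,b)"]),
            ("path(b,c)", ["edge(b,c)"]),
            ("path(a,c)", ["edge(a,b)", "path(b,c)"]),
        ]

        def lfp(program):
            """Least fixpoint of the immediate consequence operator T_P."""
            interp = set()
            while True:
                derived = {head for head, body in program
                           if all(atom in interp for atom in body)}
                if derived <= interp:
                    return interp
                interp |= derived

        print(sorted(lfp(program)))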

    In Praise of Impredicativity: A Contribution to the Formalization of Meta-Programming

    Processing programs as data is one of the successes of functional and logic programming. Higher-order functions, as program-processing programs are called in functional programming, and meta-programs, as they are called in logic programming, are widespread declarative programming techniques. In logic programming, there is a gap between meta-programming practice and its theory: the formalizations of meta-programming do not explicitly address its impredicativity and are not fully adequate. This article aims to overcome this unsatisfactory situation by discussing the relevance of impredicativity to meta-programming, by revisiting former formalizations of meta-programming, and by defining Reflective Predicate Logic, a conservative extension of first-order logic, which provides a simple formalization of meta-programming.
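
    The meta-programming style at issue can be sketched in a few lines: the object program is data for a meta-program that defines provability. The propositional program and interpreter below are illustrative only; Reflective Predicate Logic itself is a logic, not an implementation.

        # Object program as data: head mapped to its clause bodies,
        # i.e. head :- body1, ..., bodyN.  A fact has an empty body.
        clauses = {
            "light_on":  [["power", "switch_on"]],
            "power":     [[]],
            "switch_on": [[]],
        }

        def solve(goal):
            """Meta-level provability: goal holds if some clause for it
            has a provable body.  (This naive interpreter would loop
            on cyclic object programs.)"""
            return any(all(solve(sub) for sub in body)
                       for body in clauses.get(goal, []))

        print(solve("light_on"))   # True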

    Common Metamodel of Component Diagram and Feature Diagram in Generative Programming

    Component-based software engineering and generative programming are common approaches in software engineering, each with its own benefits and domains of use. Component-based development is used to build autonomous components that can be combined in different ways, while generative programming is more suitable for building systems that come in different variants. Before a variable component-based system can be built, it needs to be modeled. In this article, a new common metamodel is introduced that enables modeling of systems combining both component-based development and generative programming. The proposed metamodel combines the component diagram, used to model systems in component-based development, with the feature diagram, employed to model systems in generative programming. The combined metamodel enables modeling of variable systems using components.
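
    A minimal sketch (with hypothetical names, not the paper's actual metamodel): components from the component-diagram view carry an optional binding to a feature from the feature-diagram view, so a concrete variant is derived by selecting features.

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Feature:                       # feature-diagram view
            name: str
            selected: bool = True
            children: List["Feature"] = field(default_factory=list)

        @dataclass
        class Component:                     # component-diagram view
            name: str
            provides: List[str] = field(default_factory=list)  # interfaces
            requires: List[str] = field(default_factory=list)
            feature: Optional[Feature] = None  # variability binding

        logging = Feature("logging", selected=False)
        system = [
            Component("core", provides=["api"]),
            Component("logger", requires=["api"], feature=logging),
        ]

        # Only components whose feature is selected (or that have no
        # variability binding) appear in a concrete variant.
        variant = [c for c in system
                   if c.feature is None or c.feature.selected]
        print([c.name for c in variant])   # ['core']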