
    Experiences with Some Benchmarks for Deductive Databases and Implementations of Bottom-Up Evaluation

    OpenRuleBench is a large benchmark suite for rule engines, including deductive databases. We previously proposed a translation of Datalog to C++ based on a method that "pushes" derived tuples immediately to the places where they are used. In this paper, we report performance results for several implementation variants of this method, compared with XSB, YAP and DLV. We study only a fraction of the OpenRuleBench problems, but we give a quite detailed analysis of each such task and of the factors that influence performance. The results not only show the potential of our method and implementation approach, but should also be valuable to anyone implementing systems that must execute tasks of the kinds discussed.

    Comment: In Proceedings WLP'15/'16/WFLP'16, arXiv:1701.0014
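
    To make the "push" idea concrete, here is a minimal Python sketch (not the paper's C++ translation) of push-based bottom-up evaluation for the transitive-closure program tc(X,Y) :- edge(X,Y) and tc(X,Z) :- edge(X,Y), tc(Y,Z): every derived tuple is handed directly to the rule occurrences that consume it, instead of being staged in delta relations between rounds.

```python
# Hedged sketch of push-based bottom-up evaluation in Python; the paper
# itself compiles Datalog to C++. Program:
#   tc(X,Y) :- edge(X,Y).
#   tc(X,Z) :- edge(X,Y), tc(Y,Z).

def transitive_closure(edges):
    # Index edge(X,Y) by Y, so a derived tc(Y,Z) can be pushed straight
    # into the recursive rule without scanning the edge relation.
    preds = {}
    for x, y in edges:
        preds.setdefault(y, []).append(x)

    tc = set()

    def push(y, z):
        # Hand a newly derived tuple directly to the rule bodies that
        # use it; the membership test makes each tuple flow only once.
        if (y, z) in tc:
            return
        tc.add((y, z))
        for x in preds.get(y, ()):
            push(x, z)   # join with edge(X,Y) via the index

    for x, y in edges:   # first rule: every edge is a tc fact
        push(x, y)
    return tc

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
```

    On long chains this recursive formulation hits Python's recursion limit; an explicit worklist avoids that while preserving the push discipline.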

    An Inflationary Fixed Point Operator in XQuery

    We introduce a controlled form of recursion in XQuery: inflationary fixed points, familiar from the context of relational databases. This imposes restrictions on the expressible types of recursion, but we show that inflationary fixed points are nevertheless sufficiently versatile to capture a wide range of interesting use cases, including the semantics of Regular XPath and its core transitive closure construct. While the optimization of general user-defined recursive functions in XQuery appears elusive, we describe how inflationary fixed points can be evaluated efficiently, provided that the recursive XQuery expressions exhibit a distributivity property. We show how distributivity can be assessed both syntactically and algebraically, and provide experimental evidence that XQuery processors can benefit substantially during inflationary fixed point evaluation.

    Comment: 11 pages, 10 figures, 2 tables
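
    The evaluation strategy can be sketched independently of XQuery. Below is a hedged Python illustration of the inflationary fixed point R := R ∪ f(R), together with the delta-based variant that becomes sound when f distributes over union, i.e. f(A ∪ B) = f(A) ∪ f(B); the function names are illustrative, not the paper's.

```python
# Inflationary fixed point: accumulate R := R union f(R) until stable.

def inflationary_fixpoint(f, seed):
    result = set(seed)
    while True:
        new = f(result) - result
        if not new:
            return result
        result |= new

def inflationary_fixpoint_delta(f, seed):
    # If f distributes over union, f only needs to see the tuples
    # added in the previous round, not the whole accumulated set.
    result, delta = set(seed), set(seed)
    while delta:
        delta = f(delta) - result
        result |= delta
    return result

# Toy use: transitive closure, where the step function is distributive.
edges = {(1, 2), (2, 3), (3, 4)}
step = lambda s: {(x, z) for (x, y) in s for (y2, z) in edges if y == y2}
print(inflationary_fixpoint(step, edges)
      == inflationary_fixpoint_delta(step, edges))   # True
```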

    Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality

    We survey diverse approaches to the notion of information, from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts, published in the same volume. Part II is dedicated to the relation between logic and information systems, within the scope of Kolmogorov's algorithmic information theory. We present a recent application of Kolmogorov complexity: classification using compression, an idea with provocative implementations by authors such as Bennett, Vitanyi and Cilibrasi. This stresses how Kolmogorov complexity, besides being a foundation for randomness, is also related to classification. Another approach to classification is also considered: the so-called "Google classification". It uses another original and attractive idea, connected both to classification using compression and, conceptually, to Kolmogorov complexity. We present and unify these different approaches to classification in terms of Bottom-Up versus Top-Down operational modes, for which we point out the fundamental principles and the underlying duality. We look at the way these two dual modes are used in different approaches to information systems, particularly the relational model for databases introduced by Codd in the 1970s. This allows us to point out diverse forms of a fundamental duality. These operational modes are also reinterpreted in the context of the comprehension schema of axiomatic set theory ZF. This leads us to develop how Kolmogorov complexity is linked to intensionality, abstraction, classification and information systems.

    Comment: 43 pages
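
    The compression-based classification mentioned above fits in a few lines. The following hedged Python snippet approximates the (uncomputable) Kolmogorov complexity C by the length of a zlib-compressed string and uses Cilibrasi and Vitanyi's normalized compression distance NCD(x,y) = (C(xy) - min(C(x),C(y))) / max(C(x),C(y)); the exemplar texts are made up for illustration.

```python
# Classification by compression in the spirit of Cilibrasi and Vitanyi;
# zlib stands in for the uncomputable Kolmogorov complexity C.
import zlib

def C(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(sample: bytes, labelled: dict) -> str:
    # Assign the label whose exemplar compresses best with the sample.
    return min(labelled, key=lambda label: ncd(sample, labelled[label]))

# Invented exemplars, purely for demonstration:
exemplars = {
    "english": b"the quick brown fox jumps over the lazy dog " * 20,
    "digits": b"3141592653589793238462643383279502884197 " * 20,
}
print(classify(b"a lazy dog sleeps while the quick fox jumps", exemplars))
```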

    Transformation of logic programs: Foundations and techniques

    We present an overview of some techniques which have been proposed for the transformation of logic programs. We consider the so-called "rules + strategies" approach, and we address the following two issues: the correctness of some basic transformation rules w.r.t. a given semantics, and the use of strategies for guiding the application of the rules and improving efficiency. We also show through some examples the use and the power of the transformational approach, and we briefly illustrate its relationship to other methodologies for program development.
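
    As a concrete taste of the rule part of the "rules + strategies" approach, here is a hedged Python sketch of the unfold rule on a toy clause representation (flat atoms, uppercase strings as variables). A real transformation system also needs folding, goal replacement and a strategy for choosing which atom to unfold; none of that is modelled here.

```python
# Toy unfold: clauses are (head, body) pairs; atoms are tuples
# ("pred", arg, ...); uppercase strings are variables.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Unify two flat atoms; return an extended substitution or None.
    if a[0] != b[0] or len(a) != len(b):
        return None
    s = dict(s)
    for x, y in zip(a[1:], b[1:]):
        x, y = walk(x, s), walk(y, s)
        if x == y:
            continue
        if is_var(x):
            s[x] = y
        elif is_var(y):
            s[y] = x
        else:
            return None
    return s

def subst(atom, s):
    return (atom[0],) + tuple(walk(t, s) for t in atom[1:])

def rename(clause, k):
    # Rename apart: suffix every variable of the definition clause.
    ren = lambda a: (a[0],) + tuple(t + "_" + str(k) if is_var(t) else t
                                    for t in a[1:])
    head, body = clause
    return ren(head), [ren(a) for a in body]

def unfold(clause, i, defs):
    # Replace the i-th body atom by the bodies of its defining clauses.
    head, body = clause
    out = []
    for k, d in enumerate(defs):
        h2, b2 = rename(d, k)
        s = unify(body[i], h2, {})
        if s is not None:
            out.append((subst(head, s),
                        [subst(a, s) for a in body[:i] + b2 + body[i + 1:]]))
    return out

# Unfold q(X) in  p(X) :- q(X), r(X)  against  q(a).  and  q(X) :- t(X).
defs_q = [(("q", "a"), []), (("q", "X"), [("t", "X")])]
clause = (("p", "X"), [("q", "X"), ("r", "X")])
for c in unfold(clause, 0, defs_q):
    print(c)
# (('p', 'a'), [('r', 'a')])
# (('p', 'X_1'), [('t', 'X_1'), ('r', 'X_1')])
```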

    Incorporating Stratified Negation into Query-Subquery Nets for Evaluating Queries to Stratified Deductive Databases

    Most of the previously known evaluation methods for deductive databases are either breadth-first or depth-first (and recursive). There are cases in which these strategies are not the best ones. It is desirable to have an evaluation framework for stratified Datalog¬ that is goal-driven, set-at-a-time (as opposed to tuple-at-a-time) and adjustable w.r.t. flow-of-control strategies. These properties are important for efficient query evaluation on large and complex deductive databases. In this paper, by incorporating stratified negation into so-called query-subquery nets, we develop an evaluation framework with these properties, called QSQN-STR, for evaluating queries to stratified Datalog¬ databases. A variety of flow-of-control strategies can be used with QSQN-STR. The generic QSQN-STR evaluation method for stratified Datalog¬ is sound and complete and has PTIME data complexity.
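
    QSQN-STR itself is goal-driven and net-based, which is beyond a short snippet, but the semantics it must agree with is easy to illustrate: evaluate the strata bottom-up in order, so that a negated atom is only ever tested against a fully computed lower stratum. A hedged Python sketch, with invented example relations:

```python
# Reference semantics for a stratified program (this is NOT QSQN-STR,
# which is goal-driven and set-at-a-time; this is the naive baseline).
# Stratum 1: reach(X) :- source(X).  reach(Y) :- reach(X), edge(X,Y).
# Stratum 2: unreach(X) :- node(X), not reach(X).

def eval_stratum(rules, facts):
    # Naive iteration to a fixed point inside one stratum.
    db = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new = rule(db) - db
            if new:
                db |= new
                changed = True
    return db

facts = ({("edge", x, y) for x, y in [(1, 2), (2, 3), (4, 5)]}
         | {("node", n) for n in range(1, 6)}
         | {("source", 1)})

stratum1 = [
    lambda db: {("reach", f[1]) for f in db if f[0] == "source"},
    lambda db: {("reach", e[2]) for r in db if r[0] == "reach"
                for e in db if e[0] == "edge" and e[1] == r[1]},
]
stratum2 = [
    # Safe negation: reach/1 is complete before this stratum runs.
    lambda db: {("unreach", f[1]) for f in db
                if f[0] == "node" and ("reach", f[1]) not in db},
]

db = eval_stratum(stratum2, eval_stratum(stratum1, facts))
print(sorted(f[1] for f in db if f[0] == "unreach"))   # [4, 5]
```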

    Emerging trends proceedings of the 17th International Conference on Theorem Proving in Higher Order Logics: TPHOLs 2004

    This volume constitutes the proceedings of the Emerging Trends track of the 17th International Conference on Theorem Proving in Higher Order Logics (TPHOLs 2004), held September 14-17, 2004 in Park City, Utah, USA. The TPHOLs conference covers all aspects of theorem proving in higher order logics, as well as related topics in theorem proving and verification. There were 42 papers submitted to TPHOLs 2004 in the full research category, each of which was refereed by at least 3 reviewers selected by the program committee. Of these submissions, 21 were accepted for presentation at the conference and publication in volume 3223 of Springer's Lecture Notes in Computer Science series. In keeping with longstanding tradition, TPHOLs 2004 also offered a venue for the presentation of work in progress, where researchers invite discussion by means of a brief introductory talk and then discuss their work at a poster session. The work-in-progress papers are contained in this volume, which is published as a 2004 technical report of the School of Computing at the University of Utah.

    Proof planning for logic program synthesis

    The area of logic program synthesis is attracting increased interest. Most efforts have concentrated on applying techniques from functional program synthesis to logic program synthesis. This thesis investigates a new approach: synthesizing logic programs automatically via middle-out reasoning in proof planning. [Bundy et al 90a] suggested middle-out reasoning in proof planning. Middle-out reasoning uses variables to represent unknown details of a proof. Unification instantiates the variables in the subsequent planning, while proof planning provides the necessary search control. Middle-out reasoning is used for synthesis by planning the verification of an unknown logic program: the program body is represented by a meta-variable. The planning results both in an instantiation of the program body and in a plan for the verification of that program. If the plan executes successfully, the synthesized program is partially correct and complete. Middle-out reasoning is also used to select induction schemes. Finding an appropriate induction scheme during synthesis is difficult, because the recursion in the program, which is unknown at the outset, determines the induction in the proof. In middle-out induction, we set up a schematic step case by representing the constructors applied to the induction variables with meta-variables. Once the step case is complete, the instantiated variables correspond to an induction appropriate to the recursion of the program. The results reported in this thesis are encouraging. The approach has been implemented as an extension to the proof planner CLAM [Bundy et al 90c], called Periwinkle, which has been used to synthesize a variety of programs fully automatically.
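
    The core middle-out move, representing the unknown program body by a meta-variable that unification later instantiates, can be illustrated in miniature. The toy Python sketch below is not Periwinkle: the "required" instance is written out by hand rather than derived by proof planning, and only the instantiation mechanism is shown, with the body ?B and output constructor ?C of a schematic step-case clause filled in by one unification step.

```python
# Toy middle-out instantiation: "?"-prefixed strings are meta-variables,
# lowercase strings are object-level variables, tuples are terms.

def occurs(v, t):
    return t == v or (isinstance(t, tuple) and any(occurs(v, s) for s in t))

def unify(a, b, s):
    # First-order unification over terms and meta-variables.
    a, b = s.get(a, a), s.get(b, b)
    if a == b:
        return s
    if isinstance(a, str) and a.startswith("?"):
        return None if occurs(a, b) else {**s, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return unify(b, a, s)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# Schematic step-case clause:  double(s(x), ?C) :- ?B
schematic = (("double", ("s", "x"), "?C"), "?B")
# The instance the verification step case demands, using the induction
# hypothesis double(x, y) (hand-written here, derived by the planner
# in the real system):
required = (("double", ("s", "x"), ("s", ("s", "y"))), ("double", "x", "y"))

print(unify(schematic, required, {}))
# {'?C': ('s', ('s', 'y')), '?B': ('double', 'x', 'y')}
```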