Experiences with Some Benchmarks for Deductive Databases and Implementations of Bottom-Up Evaluation
OpenRuleBench is a large benchmark suite for rule engines, which includes
deductive databases. We previously proposed a translation of Datalog to C++
based on a method that "pushes" derived tuples immediately to places where they
are used. In this paper, we report performance results of various
implementation variants of this method compared to XSB, YAP and DLV. We study
only a fraction of the OpenRuleBench problems, but we give a quite detailed
analysis of each such task and the factors which influence performance. The
results not only show the potential of our method and implementation approach,
but could also be valuable for anyone implementing systems that should be able
to execute tasks of the discussed types.
Comment: In Proceedings WLP'15/'16/WFLP'16, arXiv:1701.0014
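The push method this abstract refers to can be illustrated with a minimal sketch (in Python rather than the paper's Datalog-to-C++ translation; all names are illustrative): each derived tuple is propagated immediately to the rule occurrence that consumes it, instead of being accumulated in per-iteration delta relations.

```python
# Minimal sketch (not the paper's implementation) of push-based
# bottom-up evaluation for transitive closure:
#   path(X,Y) :- edge(X,Y).
#   path(X,Z) :- path(X,Y), edge(Y,Z).
# A newly derived path tuple is "pushed" straight into the
# recursive rule body that uses it.

def transitive_closure(edges):
    succ = {}                      # adjacency index: y -> {z : edge(y, z)}
    for y, z in edges:
        succ.setdefault(y, set()).add(z)

    path = set()

    def push(x, y):
        """Derive path(x, y) and immediately feed it to the recursive rule."""
        if (x, y) in path:
            return                 # duplicate: already derived earlier
        path.add((x, y))
        for z in succ.get(y, ()):  # path(X,Z) :- path(X,Y), edge(Y,Z)
            push(x, z)

    for x, y in edges:             # path(X,Y) :- edge(X,Y)
        push(x, y)
    return path

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 1)})))
```

On the cyclic edge relation above this derives all nine pairs; duplicate elimination in `push` is what guarantees termination.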
An Inflationary Fixed Point Operator in XQuery
We introduce a controlled form of recursion in XQuery, inflationary fixed
points, familiar in the context of relational databases. This imposes
restrictions on the expressible types of recursion, but we show that
inflationary fixed points nevertheless are sufficiently versatile to capture a
wide range of interesting use cases, including the semantics of Regular XPath
and its core transitive closure construct.
While the optimization of general user-defined recursive functions in XQuery
appears elusive, we will describe how inflationary fixed points can be
efficiently evaluated, provided that the recursive XQuery expressions exhibit a
distributivity property. We show how distributivity can be assessed both
syntactically and algebraically, and provide experimental evidence that XQuery
processors can substantially benefit during inflationary fixed point
evaluation.
Comment: 11 pages, 10 figures, 2 tables
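As a rough illustration of the idea (not the paper's XQuery formulation), an inflationary fixed point repeatedly adds f(X) to the accumulated set X until nothing new appears; over a finite domain this always terminates because X only grows, whether or not f is monotone. A minimal Python sketch, using the transitive-closure construct of Regular XPath mentioned above as the example step:

```python
# Hedged sketch of inflationary fixed-point iteration: starting from
# the empty set, keep adding f(X) to X until X stops changing.
# Termination over a finite domain follows because X only grows.

def inflationary_fixpoint(f, start=frozenset()):
    x = set(start)
    while True:
        new = set(f(x)) - x
        if not new:
            return x
        x |= new

# Example step: one relational composition with a fixed edge set,
# whose inflationary fixed point is the transitive closure.
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def step(x):
    return edges | {(p, r) for (p, q) in x for (q2, r) in edges if q == q2}

print(sorted(inflationary_fixpoint(step)))
```

Here `step` plays the role of the recursive expression; the distributivity property the abstract mentions is what would let a processor evaluate such a step over deltas only rather than the whole accumulated set.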
Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality
We survey diverse approaches to the notion of information: from Shannon
entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov
complexity are presented: randomness and classification. The survey is divided
into two parts published in the same volume. Part II is dedicated to the
relation between logic and information systems, within the scope of Kolmogorov
algorithmic information theory. We present a recent application of Kolmogorov
complexity: classification using compression, an idea with provocative
implementation by authors such as Bennett, Vitanyi and Cilibrasi. This stresses
how Kolmogorov complexity, besides being a foundation to randomness, is also
related to classification. Another approach to classification is also
considered: the so-called "Google classification". It uses another original and
attractive idea which is connected to the classification using compression and
to Kolmogorov complexity from a conceptual point of view. We present and unify
these different approaches to classification in terms of Bottom-Up versus
Top-Down operational modes, pointing out their fundamental principles and the
underlying duality. We look at the way these two dual modes are used in
different approaches to information systems, particularly the relational model
for databases introduced by Codd in the 1970s. This allows us to point out
diverse forms of a fundamental duality. These operational modes are also
reinterpreted in the context of the comprehension schema of axiomatic set
theory ZF. This leads us to show how Kolmogorov complexity is linked to
intensionality, abstraction, classification and information systems.
Comment: 43 pages
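The classification-by-compression idea credited above to Bennett, Vitanyi and Cilibrasi is commonly realized as the normalized compression distance (NCD), which approximates the uncomputable Kolmogorov-complexity-based information distance with a real compressor. A minimal sketch, assuming zlib as the stand-in compressor:

```python
# Sketch of classification by compression via the normalized
# compression distance (NCD):
#   NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
# where C is the compressed length under a real compressor,
# standing in for the uncomputable Kolmogorov complexity.
import zlib

def C(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

# Objects sharing regularities compress well together, so they come
# out "closer" than unrelated objects -- the basis for clustering
# and classification without any domain-specific features.
a = b"abcabcabc" * 50
b = b"abcabcabc" * 50 + b"xyz"
c = bytes(range(256)) * 2
print(ncd(a, b) < ncd(a, c))  # similar strings are closer
```

In practice the choice of compressor matters: it must be a "normal" compressor in the sense of Cilibrasi and Vitanyi for NCD to behave like a metric.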
Transformation of logic programs: Foundations and techniques
We present an overview of some techniques which have been proposed for the transformation of logic programs. We consider the so-called “rules + strategies” approach, and we address the following two issues: the correctness of some basic transformation rules w.r.t. a given semantics, and the use of strategies for guiding the application of the rules and improving efficiency. We will also show through some examples the use and the power of the transformational approach, and we will briefly illustrate its relationship to other methodologies for program development.
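As a toy illustration of one of the basic transformation rules discussed in such surveys, unfolding replaces a body atom by the bodies of its defining clauses. The sketch below is deliberately propositional (predicate names only, argument handling and unification elided) and the clause representation is our own:

```python
# Illustrative sketch of the "unfold" transformation rule from the
# rules + strategies approach. A clause is a (head, body) pair where
# the body is a list of predicate names; real unfolding would also
# unify arguments, which is elided here for brevity.

def unfold(clause, atom, program):
    """Replace `atom` in the clause body by each of its definitions."""
    head, body = clause
    i = body.index(atom)
    return [(head, body[:i] + def_body + body[i + 1:])
            for (def_head, def_body) in program if def_head == atom]

# Program:  q :- a.   q :- b.
# Unfolding q in  p :- q, r.  yields  p :- a, r.  and  p :- b, r.
program = [("q", ["a"]), ("q", ["b"])]
print(unfold(("p", ["q", "r"]), "q", program))
```

Correctness of such a rule w.r.t. a given semantics is exactly the kind of issue the overview addresses; a strategy then decides when unfolding (rather than, say, folding) actually improves efficiency.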
Incorporating Stratified Negation into Query-Subquery Nets for Evaluating Queries to Stratified Deductive Databases
Most of the previously known evaluation methods for deductive databases are either breadth-first or depth-first (and recursive). There are cases when these strategies are not the best ones. It is desirable to have an evaluation framework for stratified Datalog¬ that is goal-driven, set-at-a-time (as opposed to tuple-at-a-time) and adjustable w.r.t. flow-of-control strategies. These properties are important for efficient query evaluation on large and complex deductive databases. In this paper, by incorporating stratified negation into so-called query-subquery nets, we develop an evaluation framework with these properties, called QSQN-STR, for evaluating queries to stratified Datalog¬ databases. A variety of flow-of-control strategies can be used for QSQN-STR. The generic QSQN-STR evaluation method for stratified Datalog¬ is sound, complete and has PTIME data complexity.
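While the full framework described above is beyond a short sketch, the set-at-a-time evaluation of a stratified program can be illustrated in a few lines: each lower stratum is computed to completion with semi-naive iteration over whole delta sets, and negation in a higher stratum is then evaluated against the finished lower stratum. A hedged Python sketch with an illustrative two-stratum program (relation names are our own):

```python
# Hedged sketch of set-at-a-time evaluation of a stratified program
# (not the query-subquery-net framework itself). Negation may only
# refer to predicates fully computed in lower strata.
#   Stratum 1:  reach(X) :- source(X).
#               reach(Y) :- reach(X), edge(X,Y).
#   Stratum 2:  unreach(X) :- node(X), not reach(X).

edge = {(1, 2), (2, 3), (4, 5)}
node = {1, 2, 3, 4, 5}
source = {1}

# Stratum 1: semi-naive bottom-up, processing whole delta sets at a
# time rather than one tuple at a time.
reach, delta = set(source), set(source)
while delta:
    delta = {y for x in delta for (x2, y) in edge if x2 == x} - reach
    reach |= delta

# Stratum 2: negation evaluated against the completed lower stratum.
unreach = {x for x in node if x not in reach}

print(sorted(reach), sorted(unreach))
```

The framework in the paper additionally makes the evaluation goal-driven, so only the part of each stratum relevant to the query is computed.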
A unifying approach for queries and updates in deductive databases
This dissertation presents a unifying approach to process (recursive) queries and updates in a deductive database. To improve query performance, a combined top-down and bottom-up evaluation method is used to compile rules into iterative programs that contain relational algebra operators. This method is based on lemma resolution, which retains previous results to guarantee termination.

Due to locality in database processing, it is desirable to materialize frequently used queries against views of the database. Unfortunately, if updates are allowed, maintaining materialized view tables becomes a major problem. We propose to materialize views incrementally, as queries are being answered. Hence views in our approach are only partially materialized. For such views, we design algorithms to perform updates only when the underlying view tables are actually affected.

We compare our approach to two conventional methods for dealing with views: total materialization and query modification. The first method materializes the entire view when it is defined, while the second recomputes the view on the fly without maintaining any physical view tables. We demonstrate that our approach is a compromise between these two methods and performs better than either one in many situations.

It is also desirable to be able to update views just like base tables. However, view updates are inherently ambiguous, and the semantics of update propagation on recursively defined views were not well understood in the past. Using dynamic logic programming and lemma resolution, we are able to define the semantics of recursive view updates. These are expressed in the form of update translators specified by the database administrator when the view is defined. To guarantee completeness, we identify a subset of safe update translators. We prove that these translators always terminate and are complete.
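The partial-materialization idea in this abstract can be sketched roughly as a per-query cache that is maintained only for the entries actually materialized; the class and names below are our own illustration, not the dissertation's algorithms:

```python
# Hedged sketch of partial view materialization: answers are cached
# only for the view keys actually queried, and a base-table insert
# touches the cache only when a cached entry is affected.

class PartialView:
    def __init__(self, base_rows, predicate):
        self.base = set(base_rows)
        self.pred = predicate          # selection defining the view
        self.cache = {}                # key -> materialized answers

    def query(self, key):
        if key not in self.cache:      # materialize incrementally,
            self.cache[key] = {r for r in self.base       # on demand
                               if self.pred(r) and r[0] == key}
        return self.cache[key]

    def insert(self, row):
        self.base.add(row)
        # maintain only cached (partially materialized) entries
        if self.pred(row) and row[0] in self.cache:
            self.cache[row[0]].add(row)

v = PartialView({(1, "a"), (1, "b"), (2, "c")}, lambda r: r[1] != "c")
print(sorted(v.query(1)))  # materializes answers for key 1 on demand
v.insert((1, "d"))         # affects a cached entry -> maintained
v.insert((2, "d"))         # key 2 was never queried -> no view work
print(sorted(v.query(1)))
```

This sits between the two conventional extremes the abstract compares: nothing is recomputed from scratch per query (query modification), yet the full view is never materialized up front (total materialization).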
Emerging trends proceedings of the 17th International Conference on Theorem Proving in Higher Order Logics: TPHOLs 2004
Technical report. This volume constitutes the proceedings of the Emerging Trends track of the 17th International Conference on Theorem Proving in Higher Order Logics (TPHOLs 2004), held September 14-17, 2004 in Park City, Utah, USA. The TPHOLs conference covers all aspects of theorem proving in higher order logics as well as related topics in theorem proving and verification. There were 42 papers submitted to TPHOLs 2004 in the full research category, each of which was refereed by at least 3 reviewers selected by the program committee. Of these submissions, 21 were accepted for presentation at the conference and publication in volume 3223 of Springer's Lecture Notes in Computer Science series. In keeping with longstanding tradition, TPHOLs 2004 also offered a venue for the presentation of work in progress, where researchers invite discussion by means of a brief introductory talk and then discuss their work at a poster session. The work-in-progress papers are collected in this volume, which is published as a 2004 technical report of the School of Computing at the University of Utah.
Proof planning for logic program synthesis
The area of logic program synthesis is attracting increased interest. Most efforts
have concentrated on applying techniques from functional program synthesis to
logic program synthesis. This thesis investigates a new approach: synthesizing
logic programs automatically via middle-out reasoning in proof planning.

[Bundy et al 90a] suggested middle-out reasoning in proof planning. Middle-out
reasoning uses variables to represent unknown details of a proof. Unification
instantiates the variables in the subsequent planning, while proof planning
provides the necessary search control.

Middle-out reasoning is used for synthesis by planning the verification of an
unknown logic program: the program body is represented with a meta-variable.
The planning results both in an instantiation of the program body and in a plan
for the verification of that program. If the plan executes successfully, the
synthesized program is partially correct and complete.

Middle-out reasoning is also used to select induction schemes. Finding an
appropriate induction scheme in synthesis is difficult, because the recursion in
the program, which is unknown at the outset, determines the induction in the
proof. In middle-out induction, we set up a schematic step case by representing
the constructors applied to the induction variables with meta-variables. Once
the step case is complete, the instantiated variables correspond to an induction
appropriate to the recursion of the program.

The results reported in this thesis are encouraging. The approach has been
implemented as an extension to the proof planner CLAM [Bundy et al 90c], called
Periwinkle, which has been used to synthesize a variety of programs fully
automatically - …