LIPIcs, Volume 251, ITCS 2023, Complete Volume
A Differential Datalog Interpreter
The core reasoning task for datalog engines is materialization: the
evaluation of a datalog program over a database together with the physical
incorporation of the derived facts into the database itself. The de facto
method of computing it is the recursive application of inference rules.
Because materialization is a costly operation, datalog engines must provide
incremental materialization, that is, adjust the computation to new data
instead of restarting from scratch. A major caveat is that deleting data is
notoriously more involved than adding it, since one has to take into account
all data that has been entailed from what is being deleted. Differential
Dataflow is a computational model for iterative dataflows that provides
efficient incremental maintenance, notably with equal performance for
additions and deletions, as well as work distribution. In this paper we
investigate the performance of materialization with three reference datalog
implementations: one built on top of a lightweight relational engine, and the
other two being differential-dataflow and non-differential versions of the
same rewrite algorithm, with the same optimizations.
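The recursive rule application and the asymmetry between additions and deletions described above can be illustrated with a minimal semi-naive materialization sketch in Python (an illustration of standard Datalog evaluation, not of the paper's implementations):

```python
# Illustrative sketch (not from the paper): semi-naive materialization of the
# transitive-closure Datalog program
#   path(x, y) :- edge(x, y).
#   path(x, y) :- path(x, z), edge(z, y).
# Each round applies the recursive rule only to facts derived in the previous
# round (the "delta"), avoiding re-derivation of known facts.

def materialize(edges):
    path = set(edges)           # path(x, y) :- edge(x, y).
    delta = set(edges)          # facts that are new in the last round
    while delta:
        # path(x, y2) :- path(x, y), edge(y, y2), joining only on new paths
        derived = {(x, y2) for (x, y) in delta for (y1, y2) in edges if y == y1}
        delta = derived - path  # keep only genuinely new facts
        path |= delta
    return path

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(materialize(edges)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Deleting edge (2, 3) here cannot be handled by simply dropping path(2, 3): every fact entailed through it, such as path(1, 3), path(1, 4), and path(2, 4), must be reconsidered, which is exactly why incremental deletion is harder than insertion.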
Integrating Logic Rules with Everything Else, Seamlessly
This paper presents a language, Alda, that supports all of logic rules, sets,
functions, updates, and objects as seamlessly integrated built-ins. The key
idea is to support predicates in rules as set-valued variables that can be used
and updated in any scope, and support queries using rules as either explicit or
implicit automatic calls to an inference function.
We have defined a formal semantics of the language, implemented a prototype
compiler that builds on an object-oriented language that supports concurrent
and distributed programming and on an efficient logic rule system, and
successfully used the language and implementation on benchmarks and problems
from a wide variety of application domains. We describe the compilation method
and results of experimental evaluation.
Comment: To be published in Theory and Practice of Logic Programming, special
issue for selected papers from the 39th International Conference on Logic
Programming. arXiv admin note: substantial text overlap with arXiv:2205.1520
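The key idea of predicates as set-valued variables, with queries as explicit calls to an inference function, can be approximated in plain Python (hypothetical names; this is not Alda syntax, only an illustration of the integration the abstract describes):

```python
# Illustrative sketch: a predicate is an ordinary mutable set that can be
# updated in any scope; a query re-runs inference over its current contents.

def infer(edge):
    """Least fixpoint of: path(x,y) :- edge(x,y).  path(x,y) :- path(x,z), edge(z,y)."""
    path = set(edge)
    while True:
        new = {(x, z) for (x, y) in path for (y2, z) in edge if y == y2} - path
        if not new:
            return path
        path |= new

edge = {("a", "b")}
edge.add(("b", "c"))              # ordinary imperative update to the predicate
assert ("a", "c") in infer(edge)  # query = explicit call to the inference function
```

The point of the sketch is the seamless mixing: the same set participates in imperative updates, function calls, and rule-based inference.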
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Redacted by arXiv
Redacted by arXiv.
Comment: This article has been removed by arXiv due to a copyright claim by a
third party.
Meta-ontology fault detection
Ontology engineering is the field, within knowledge representation, concerned with using logic-based formalisms to represent knowledge, typically in moderately sized knowledge bases called ontologies. How best to develop, use, and maintain these ontologies has produced a relatively large body of formal, theoretical, and methodological research.
One subfield of ontology engineering is ontology debugging, which is concerned with preventing, detecting, and repairing errors (or, more generally, pitfalls, bad practices, or faults) in ontologies. Due to the logical nature of ontologies and, in particular, entailment, these faults are often both hard to prevent and detect and have far-reaching consequences. This makes ontology debugging one of the principal challenges to more widespread adoption of ontologies in applications.
Another important subfield of ontology engineering is ontology alignment: combining multiple ontologies to produce results more powerful than the simple sum of the parts. Ontology alignment compounds the difficulties of ontology debugging by introducing, propagating, and exacerbating faults in ontologies.
A relevant aspect of the field of ontology debugging is that, due to the challenges and difficulties, research within it is usually notably constrained in its scope, focusing on particular aspects of the problem or on the application to only certain subdomains or under specific methodologies. Similarly, the approaches are often ad hoc and only related to other approaches at a conceptual level. There are no well established and widely used formalisms, definitions or benchmarks that form a foundation of the field of ontology debugging.
In this thesis, I tackle the problem of ontology debugging from a more abstract point of view than usual, surveying existing literature in the field, attempting to extract common ideas, and especially focusing on formulating them in a common language and under a common approach. Meta-ontology fault detection is a framework for detecting faults in ontologies that uses semantic fault patterns to express, in a systematic way, schematic entailments that typically indicate faults. The formalism that I developed to represent these patterns is called existential second-order query logic (abbreviated ESQ logic). I further reformulated a large proportion of the ideas present in existing research into this framework, as patterns in ESQ logic, providing a pattern catalogue.
Most of the work during my PhD has been spent designing and implementing
an algorithm to automatically and effectively detect arbitrary ESQ patterns in arbitrary ontologies. The result is what we call minimal commitment resolution for ESQ logic, an extension of first-order resolution that draws on important ideas from higher-order unification and implements a novel approach to unification problems using dependency graphs. I have proven important theoretical properties of this algorithm, such as its soundness, its termination (in a certain sense and under certain conditions), and its fairness, or completeness, in the enumeration of infinite spaces of solutions.
Moreover, I have produced an implementation of minimal commitment resolution for ESQ logic in Haskell that has passed all unit tests and produces non-trivial results on small examples. However, attempts to apply this algorithm to examples of a more realistic size have proven unsuccessful, with computation times that exceed our tolerance levels.
In this thesis, I provide details of the challenges faced in this regard,
as well as other, successful forms of qualitative evaluation of the meta-ontology fault detection approach. I discuss what I believe are the main causes of the computational feasibility problems, ideas on how to overcome them, and other directions of future work that could use the results in this thesis to contribute foundational formalisms, ideas, and approaches to ontology debugging that can properly combine existing constrained research. It is unclear to me whether minimal commitment resolution for ESQ logic can, in its current shape, be implemented efficiently, but I believe that, at the very least, the theoretical and conceptual underpinnings presented in this thesis will be useful for producing more
foundational results in the field.
Preferential Query Answering in the Semantic Web with Possibilistic Networks
In this paper, we explore how ontological knowledge expressed via existential rules can be combined with possibilistic networks (i) to represent qualitative preferences along with domain knowledge, and (ii) to realize preference-based answering of conjunctive queries (CQs). We call these combinations ontological possibilistic networks (OP-nets). We define skyline and k-rank answers to CQs under preferences and provide complexity (including data tractability) results for deciding consistency and CQ skyline membership for OP-nets. We show that our formalism has a lower complexity than a similar existing formalism.
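The notion of a skyline answer can be illustrated with a minimal Python sketch (an illustration of the generic skyline idea over a preference order, not of the OP-net formalism itself):

```python
# Illustrative sketch: skyline answers are the query answers not dominated by
# any other answer. Here an answer is a tuple of attribute scores, and
# "a dominates b" means a is at least as preferred on every attribute and
# strictly more preferred on at least one.

def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(answers):
    return [a for a in answers if not any(dominates(b, a) for b in answers)]

answers = [(3, 1), (2, 2), (1, 3), (1, 1)]
print(skyline(answers))  # [(3, 1), (2, 2), (1, 3)] -- (1, 1) is dominated
```

A k-rank answer, by contrast, linearizes this partial order and returns the top k answers of the resulting ranking.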
Breaking the Negative Cycle: Exploring the Design Space of Stratification for First-Class Datalog Constraints
The λ_Dat calculus brings together the power of functional and declarative logic programming in one language. In λ_Dat, Datalog constraints are first-class values that can be constructed, passed around as arguments, returned, composed with other constraints, and solved.
A significant part of the expressive power of Datalog comes from the use of negation. Stratified negation is a particularly simple and practical form of negation accessible to ordinary programmers. Stratification requires that Datalog programs must not use recursion through negation.
For a Datalog program, this requirement is straightforward to check, but for a λ_Dat program, it is not so simple: a λ_Dat program constructs, composes, and solves Datalog programs at runtime. Hence stratification cannot readily be determined at compile-time.
In this paper, we explore the design space of stratification for λ_Dat. We investigate strategies to ensure, at compile-time, that programs constructed at runtime are guaranteed to be stratified, and we argue that previous design choices in the Flix programming language have been suboptimal.
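The stratification requirement itself, recursion must not pass through negation, can be checked with a classic fixpoint over the predicate dependency graph. A minimal Python sketch (of the standard check for plain Datalog, not of the paper's compile-time strategies for λ_Dat):

```python
# Illustrative sketch: the classic stratification test. A dependency
# (q, p, negated) means predicate p depends on q, possibly through negation.
# Strata must satisfy stratum[p] >= stratum[q], strictly greater when the
# dependency is negated; the assignment diverges exactly when there is
# recursion through negation, i.e. a cycle containing a negative edge.

def stratify(predicates, deps):
    stratum = {p: 0 for p in predicates}
    changed = True
    while changed:
        changed = False
        for q, p, negated in deps:
            required = stratum[q] + (1 if negated else 0)
            if stratum[p] < required:
                stratum[p] = required
                changed = True
                if stratum[p] > len(predicates):  # cycle through negation
                    return None
    return stratum

# edge/path plus: unreachable(x,y) :- node(x), node(y), not path(x,y).
deps = [("edge", "path", False), ("path", "path", False),
        ("node", "unreachable", False), ("path", "unreachable", True)]
print(sorted(stratify({"edge", "path", "node", "unreachable"}, deps).items()))
# [('edge', 0), ('node', 0), ('path', 0), ('unreachable', 1)]

# Recursion through negation: p :- not q.  q :- p.  Not stratifiable.
print(stratify({"p", "q"}, [("q", "p", True), ("p", "q", False)]))  # None
```

In λ_Dat the difficulty is that `deps` is only fully known once constraint values are composed at runtime, which is why the paper explores compile-time guarantees instead.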