Rewritability in Monadic Disjunctive Datalog, MMSNP, and Expressive Description Logics
We study rewritability of monadic disjunctive Datalog programs, (the
complements of) MMSNP sentences, and ontology-mediated queries (OMQs) based on
expressive description logics of the ALC family and on conjunctive queries. We
show that rewritability into FO and into monadic Datalog (MDLog) are decidable,
and that rewritability into Datalog is decidable when the original query
satisfies a certain condition related to equality. We establish
2NExpTime-completeness for all studied problems except rewritability into MDLog
for which there remains a gap between 2NExpTime and 3ExpTime. We also analyze
the shape of rewritings, which in the MMSNP case correspond to obstructions,
and give a new construction of canonical Datalog programs that is more
elementary than existing ones and also applies to formulas with free variables.
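To illustrate the notion of Datalog rewritability discussed above, the following sketch (a toy example with invented names, not the paper's construction) pairs the ontology axiom A ⊑ B with the query q(x) ← B(x); this OMQ rewrites into the Datalog program {B(x) ← A(x), q(x) ← B(x)}, which can then be evaluated directly over the data by naive fixpoint iteration:

```python
def naive_datalog(facts, rules):
    """Evaluate a monadic Datalog program by naive fixpoint iteration.

    facts: set of (predicate, constant) pairs, e.g. ("A", "a") for A(a).
    rules: list of (head, body) pairs encoding rules head(x) <- body(x).
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            for pred, const in list(derived):
                if pred == body and (head, const) not in derived:
                    derived.add((head, const))  # apply rule, record new fact
                    changed = True
    return derived

facts = {("A", "a"), ("B", "b")}   # database: A(a), B(b)
rules = [("B", "A"), ("q", "B")]   # Datalog rewriting of the OMQ
answers = {c for p, c in naive_datalog(facts, rules) if p == "q"}
print(sorted(answers))             # both a and b are certain answers
```

The point of the rewriting is that the ontology axiom disappears into an ordinary Datalog rule, so the query can be answered with standard database technology.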
Dichotomies in Ontology-Mediated Querying with the Guarded Fragment
We study the complexity of ontology-mediated querying when ontologies are
formulated in the guarded fragment of first-order logic (GF). Our general aim
is to classify the data complexity on the level of ontologies where query
evaluation w.r.t. an ontology O is considered to be in PTime if all (unions of
conjunctive) queries can be evaluated in PTime w.r.t. O and coNP-hard if at
least one query is coNP-hard w.r.t. O. We identify several large and relevant
fragments of GF that enjoy a dichotomy between PTime and coNP, some of them
additionally admitting a form of counting. In fact, almost all ontologies in
the BioPortal repository fall into these fragments or can easily be rewritten
to do so. We then establish a variation of Ladner's Theorem on the existence of
NP-intermediate problems and use this result to show that for other fragments,
there is provably no such dichotomy. Again for other fragments (such as full
GF), establishing a dichotomy implies the Feder-Vardi conjecture on the
complexity of constraint satisfaction problems. We also link these results to
Datalog-rewritability and study the decidability of whether a given ontology
enjoys PTime query evaluation, presenting both positive and negative results.
Axiom Pinpointing
Axiom pinpointing refers to the task of finding the specific axioms in an
ontology that are responsible for a given consequence. This task has been
studied, under different names, in many research areas, leading to a
reformulation and reinvention of techniques. In this work, we present a general
overview of axiom pinpointing, providing the basic notions, different
approaches for solving it, and some variations and applications which have been
considered in the literature. This should serve as a starting point for
researchers interested in related problems, with an ample bibliography for
delving deeper into the details.
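A minimal black-box sketch of the task described above (toy setting with invented names): axioms are propositional Horn implications (premise, conclusion), a consequence follows iff it is forward-chaining derivable from a fixed start symbol, and the justifications are the minimal axiom subsets from which the consequence still follows, found here by naive enumeration:

```python
from itertools import combinations

def entails(axioms, goal, start="start"):
    """Forward chaining over Horn implications (premise, conclusion)."""
    known = {start}
    changed = True
    while changed:
        changed = False
        for pre, post in axioms:
            if pre in known and post not in known:
                known.add(post)
                changed = True
    return goal in known

def justifications(axioms, goal):
    """All minimal subsets of axioms entailing the goal (naive enumeration,
    smallest subsets first, so supersets of found justifications are skipped)."""
    found = []
    for r in range(1, len(axioms) + 1):
        for subset in combinations(axioms, r):
            if entails(set(subset), goal) and \
               not any(set(j) <= set(subset) for j in found):
                found.append(subset)
    return found

axioms = [("start", "A"), ("A", "goal"), ("start", "goal"), ("B", "C")]
js = justifications(axioms, "goal")
for j in js:
    print(sorted(j))   # two justifications; ("B", "C") is irrelevant
```

Real pinpointing systems avoid this exponential enumeration (e.g. via hitting-set trees or glass-box methods), but the definition of a justification is exactly the one checked here.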
Approximate Assertional Reasoning Over Expressive Ontologies
This thesis provides approximate reasoning methods for scalable assertional reasoning whose computational properties can be established in a well-understood way, namely in terms of soundness and completeness, and whose quality can be analyzed in terms of the statistical measures of precision and recall. The basic idea of these approximate reasoning methods is to speed up reasoning by trading off the quality of reasoning results against increased speed.
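The quality measures mentioned above can be made concrete with a small sketch (toy data, names invented): compare the assertions derived by an approximate reasoner against those of a sound-and-complete reference reasoner, where precision measures soundness in practice and recall measures completeness in practice:

```python
# Reference (sound and complete) reasoner output vs. approximate output.
exact  = {"C(a)", "C(b)", "D(a)", "D(c)"}   # all entailed assertions
approx = {"C(a)", "C(b)", "D(c)", "E(b)"}   # approximate reasoner's output

true_pos  = approx & exact
precision = len(true_pos) / len(approx)     # fraction of derived facts correct
recall    = len(true_pos) / len(exact)      # fraction of correct facts derived
print(f"precision={precision:.2f} recall={recall:.2f}")
```

A sound-but-incomplete approximation has precision 1.0 and recall below 1.0; a complete-but-unsound one has the reverse, which is the trade-off the thesis exploits.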
When is Ontology-Mediated Querying Efficient?
In ontology-mediated querying, description logic (DL) ontologies are used to
enrich incomplete data with domain knowledge which results in more complete
answers to queries. However, the evaluation of ontology-mediated queries (OMQs)
over relational databases is computationally hard. This raises the question of
when OMQ evaluation is efficient, in the sense of being tractable in combined
complexity or fixed-parameter tractable. We study this question for a range of
ontology-mediated query languages based on several important and widely-used
DLs, using unions of conjunctive queries as the actual queries. For the DL ELHI
extended with the bottom concept, we provide a characterization of the classes
of OMQs that are fixed-parameter tractable. For its fragment EL extended with
domain and range restrictions and the bottom concept (which restricts the use
of inverse roles), we provide a characterization of the classes of OMQs that
are tractable in combined complexity. Both results are in terms of equivalence
to OMQs of bounded tree width and rest on a reasonable assumption from
parameterized complexity theory. They are similar in spirit to Grohe's seminal
characterization of the tractable classes of conjunctive queries over
relational databases. We further study the complexity of the meta problem of
deciding whether a given OMQ is equivalent to an OMQ of bounded tree width,
providing several completeness results that range from NP to 2ExpTime,
depending on the DL used. We also consider the DL-Lite family of DLs, including
members that admit functional roles.
Query Answering in Probabilistic Data and Knowledge Bases
Probabilistic data and knowledge bases are becoming increasingly important in academia and industry. They are continuously extended with new data, powered by modern information extraction tools that associate probabilities with knowledge base facts. The state of the art in storing and processing such data is founded on probabilistic database systems, which are widely and successfully employed. Beyond all the success stories, however, such systems still lack the fundamental machinery to convey some of the valuable knowledge hidden in them to the end user, which limits their potential applications in practice. In particular, in their classical form, such systems are typically based on strong, unrealistic limitations, such as the closed-world assumption, the closed-domain assumption, the tuple-independence assumption, and the lack of commonsense knowledge. These limitations not only lead to unwanted consequences, but also put such systems on weak footing in important tasks, query answering being a very central one. In this thesis, we enhance probabilistic data and knowledge bases with more realistic data models, thereby allowing for better means of querying them. Building on the long endeavor of unifying logic and probability, we develop different rigorous semantics for probabilistic data and knowledge bases, analyze their computational properties, identify sources of (in)tractability, and design practical, scalable query answering algorithms whenever possible. To achieve this, the current work brings together some recent paradigms from logics, probabilistic inference, and database theory.
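The tuple-independence assumption criticized above can be illustrated with a small sketch (toy relation, names invented): each fact holds with its own marginal probability, independently of all others, so the probability that the Boolean query ∃x R(x) holds is 1 minus the probability that every R-tuple is absent:

```python
from math import prod

# Probabilistic relation R under tuple independence: constant -> marginal.
R = {"a": 0.9, "b": 0.5, "c": 0.2}

# P(∃x R(x)) = 1 - ∏_t (1 - p_t), by independence of the tuples.
p_query = 1 - prod(1 - p for p in R.values())
print(round(p_query, 3))   # 1 - 0.1 * 0.5 * 0.8 = 0.96
```

The simplicity of this product formula is precisely what independence buys; dropping the assumption, as the thesis advocates, makes such computations substantially harder.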
Pseudo-contractions as Gentle Repairs
Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed in the literature in the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
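The contrast drawn above between deleting and weakening can be sketched in a toy Horn setting (all names invented): a classical repair drops the offending axiom entirely, while a gentle repair replaces it by a strictly weaker version that no longer fires, modelled here by guarding its premise with a fresh, unreachable symbol:

```python
def entails(axioms, goal, start="start"):
    """Forward chaining over Horn implications (premise, conclusion)."""
    known = {start}
    changed = True
    while changed:
        changed = False
        for pre, post in axioms:
            if pre in known and post not in known:
                known.add(post)
                changed = True
    return goal in known

axioms = {("start", "A"), ("A", "bad"), ("A", "useful")}

# Classical repair: delete the axiom responsible for the unwanted consequence.
classical = axioms - {("A", "bad")}

# Gentle repair: keep a weakened form whose guarded premise never holds here.
gentle = classical | {("A_and_guard", "bad")}

for kb in (classical, gentle):
    assert not entails(kb, "bad")   # unwanted consequence removed
    assert entails(kb, "useful")    # remaining knowledge preserved
print("both repairs succeed")
```

In this toy case the two repairs behave identically; the paper's point is that in richer logics the weakened axiom can preserve entailments that outright deletion would lose.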