
    Horn rewritability vs PTime query evaluation for description logic TBoxes

    We study the following question: if τ is a TBox formulated in an expressive DL L and all CQs can be evaluated in PTime w.r.t. τ, can τ be replaced by a TBox τ' that is formulated in the Horn fragment of L and such that, for all CQs and ABoxes, the answers w.r.t. τ and τ' coincide? Our main results are that this is indeed the case when L is ALCHI or ALCIF and τ has quantifier depth 1 (which covers the majority of such TBoxes), but not for ALCHIF and ALCQ TBoxes of depth 1.
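    As a hedged illustration of the link between Horn TBoxes and PTime query evaluation (a minimal sketch with a made-up toy vocabulary, not the paper's construction): a Horn TBox allows the ABox to be saturated into a single canonical set of facts by forward chaining in polynomial time, after which queries can be read off directly, whereas disjunctive axioms block such a single materialisation.

        # Minimal Python sketch: materialise an ABox under two Horn-style
        # inclusions (Student ⊑ Person and ∃attends.Course ⊑ Student) and
        # answer an atomic query from the saturation. Toy data only.
        concepts = {("Student", "anna"), ("Course", "db")}
        roles = {("attends", "anna", "db")}

        def saturate(concepts, roles):
            changed = True
            while changed:
                changed = False
                new = set()
                for (c, x) in concepts:
                    if c == "Student":
                        new.add(("Person", x))       # Student ⊑ Person
                for (r, x, y) in roles:
                    if r == "attends" and ("Course", y) in concepts:
                        new.add(("Student", x))      # ∃attends.Course ⊑ Student
                if not new <= concepts:
                    concepts |= new
                    changed = True
            return concepts

        saturated = saturate(set(concepts), roles)
        print(sorted(x for (c, x) in saturated if c == "Person"))  # ['anna']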

    The Data Complexity of Description Logic Ontologies

    We analyze the data complexity of ontology-mediated querying where the ontologies are formulated in a description logic (DL) of the ALC family and queries are conjunctive queries, positive existential queries, or acyclic conjunctive queries. Our approach is non-uniform in the sense that we aim to understand the complexity of each single ontology rather than of all ontologies formulated in a certain language. While doing so, we quantify over the queries and are interested, for example, in the question of whether all queries can be evaluated in polynomial time w.r.t. a given ontology. Our results include a PTime/coNP dichotomy for ontologies of depth one in the description logic ALCFI, the same dichotomy for ALC- and ALCI-ontologies of unrestricted depth, and the non-existence of such a dichotomy for ALCF-ontologies. For the latter DL, we additionally show that it is undecidable whether a given ontology admits PTime query evaluation. We also consider the connection between PTime query evaluation and rewritability into (monadic) Datalog.
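    A standard illustration of the non-Horn side of such dichotomies (drawn from the ontology-mediated querying literature in general, not from this paper): with the covering axiom below, certain-answer evaluation requires case analysis over all ways of distributing the ABox individuals between B1 and B2, and with a suitably chosen Boolean CQ this yields coNP-hardness in data complexity via a reduction from 2+2-SAT.

        \mathcal{T} \;=\; \{\, \top \sqsubseteq B_1 \sqcup B_2 \,\}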

    Guarded Ontology-Mediated Queries


    Saturation-based Boolean conjunctive query answering and rewriting for the guarded quantification fragments

    Query answering is an important problem in AI, databases and knowledge representation. In this paper, we develop saturation-based Boolean conjunctive query answering and rewriting procedures for the guarded, the loosely guarded and the clique-guarded fragments. Our query answering procedure improves on existing resolution-based decision procedures for the guarded and the loosely guarded fragments, and it solves Boolean conjunctive query answering for the guarded, the loosely guarded and the clique-guarded fragments. Based on this query answering procedure, we also introduce a novel saturation-based query rewriting procedure for these guarded fragments. Unlike mainstream query answering and rewriting methods, our procedures derive a compact and reusable saturation, namely a closure of formulas, to handle the challenge of querying distributed datasets. This paper lays the theoretical foundations for the first automated deduction decision procedures for Boolean conjunctive query answering and the first saturation-based Boolean conjunctive query rewriting in the guarded, the loosely guarded and the clique-guarded fragments.
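    For orientation, the general setup behind saturation-based BCQ answering (not this paper's specific calculus) is refutational: a Boolean conjunctive query is entailed exactly when the theory together with the negated query is unsatisfiable, and the negated query is a single clause of negative literals, so a saturation of the theory can be computed once and reused against many query clauses, which is what makes the closure compact and reusable for distributed datasets.

        \Phi \models q \quad\text{iff}\quad \Phi \cup \{\neg q\}\ \text{is unsatisfiable},
        \qquad\text{where } q = \exists \vec{x}\,(\alpha_1 \wedge \dots \wedge \alpha_n)
        \text{ and } \neg q = \forall \vec{x}\,(\neg\alpha_1 \vee \dots \vee \neg\alpha_n)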

    Federated knowledge base debugging in DL-Lite A

    Due to the continuously growing amount of data, the federation of different and distributed data sources has gained increasing attention. In order to tackle the challenge of federating heterogeneous sources, a variety of approaches have been proposed. Especially in the context of the Semantic Web, the application of Description Logics is one of the preferred methods to model federated knowledge based on a well-defined syntax and semantics. However, the more data are available from heterogeneous sources, the higher the risk of inconsistency, which is a serious obstacle for performing reasoning tasks and query answering over a federated knowledge base. For a single knowledge base, the process of knowledge base debugging, comprising the identification and resolution of conflicting statements, has been widely studied, while federated settings integrating a network of loosely coupled data sources (such as LOD sources) have mostly been neglected. In this thesis we tackle the challenging problem of debugging federated knowledge bases and focus on a lightweight Description Logic language, called DL-LiteA, that is aimed at applications requiring efficient and scalable reasoning. After introducing formal foundations such as Description Logics and Semantic Web technologies, we clarify the motivating context of this work and discuss the general problem of information integration based on Description Logics. The main part of this thesis is subdivided into three subjects. First, we discuss the specific characteristics of federated knowledge bases and provide an appropriate approach for detecting and explaining contradictive statements in a federated DL-LiteA knowledge base. Second, we study the representation of the identified conflicts and their relationships as a conflict graph and propose an approach for repair generation based on majority voting and statistical evidence. Third, in order to provide an alternative way of handling inconsistency in federated DL-LiteA knowledge bases, we propose an automated approach for assessing adequate trust values (i.e., probabilities) at different levels of granularity by leveraging probabilistic inference over a graphical model. In the last part of this thesis, we evaluate the previously developed algorithms against a set of large distributed LOD sources. In the course of discussing the experimental results, it turns out that the proposed approaches are sufficient, efficient and scalable with respect to real-world scenarios. Moreover, because our algorithms exploit the federated structure, it further becomes apparent that the number of identified wrong statements, the quality of the generated repair, and the granularity of the assessed trust values all benefit from an increasing number of integrated sources.
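    The following is a hedged sketch of repair generation by majority voting over detected conflicts (hypothetical statements and support counts, and a simplification of the approach described above rather than the thesis's actual algorithm): from each conflicting pair, the statement asserted by fewer federated sources is removed.

        # Python sketch: resolve pairwise conflicts by majority voting.
        # 'support' maps each statement to the number of sources asserting it.
        support = {"Teaches(bob,db)": 3, "Student(bob)": 1,
                   "hasAge(ann,25)": 2, "hasAge(ann,52)": 2}

        # Conflicts as detected by a DL-Lite-style reasoner, e.g. from a
        # disjointness axiom or from the functionality of hasAge.
        conflicts = [("Teaches(bob,db)", "Student(bob)"),
                     ("hasAge(ann,25)", "hasAge(ann,52)")]

        def repair_by_majority(conflicts, support):
            removed = set()
            for a, b in conflicts:
                if a in removed or b in removed:
                    continue  # conflict already resolved by an earlier removal
                # Drop the statement with weaker support; ties are broken
                # arbitrarily here (the thesis also uses statistical evidence).
                removed.add(a if support[a] < support[b] else b)
            return removed

        print(repair_by_majority(conflicts, support))
        # e.g. {'Student(bob)', 'hasAge(ann,52)'}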

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
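    A toy example of the shared idea (generic, not taken from the paper): to remove the unwanted consequence C(a) from the knowledge base below, a classical repair deletes the whole inclusion, whereas a gentle repair or pseudo-contraction replaces it with the weaker axiom A ⊑ B, which no longer entails C(a) but still preserves B(a).

        \mathcal{K} = \{\ A \sqsubseteq B \sqcap C,\ \ A(a)\ \} \models C(a)
        \qquad\leadsto\qquad
        \mathcal{K}' = \{\ A \sqsubseteq B,\ \ A(a)\ \} \not\models C(a)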

    Abstraction in ontology-based data management

    In many aspects of our society there is growing awareness and consent on the need for data-driven approaches that are resilient, transparent, and fully accountable. But in order to fulfil the promises and benefits of a data-driven society, it is necessary that the data services exposed by the organisations' information systems are well documented and their semantics is clearly specified. Effectively documenting data services is indeed a crucial issue for organisations, not only for governing their own data, but also for interoperation purposes. In this thesis, we propose a new approach to automatically associate formal semantic descriptions to data services, thus bringing them into compliance with the FAIR guiding principles, i.e., making data services automatically Findable, Accessible, Interoperable, and Reusable (FAIR). We base our proposal on the Ontology-based Data Management (OBDM) paradigm, where a domain ontology is used to provide a semantic layer mapped to the data sources of an organisation, thus abstracting from the technical details of the data layer implementation. The basic idea is to characterise or explain the semantics of a given data service, expressed as a query over the source schema, in terms of a query over the ontology. Thus, the query over the ontology represents an abstraction of the given data service in terms of the domain ontology through the mapping, and, together with the elements in the vocabulary of the ontology, such an abstraction forms a basis for annotating the given data service with suitable metadata expressing its semantics. We present a formal framework for the task of automatically producing a semantic characterisation of a given data service expressed as a query over the source schema. The framework is based on three semantically well-founded notions, namely perfect, sound, and complete source-to-ontology rewriting, and on two associated basic computational problems, namely verification and computation. The former verifies whether a given query over the ontology is a perfect (respectively, sound, complete) source-to-ontology rewriting of a given data service expressed as a query over the source schema, whereas the latter computes one such rewriting, provided it exists. We provide an in-depth complexity analysis of these two computational problems in a very general scenario which uses languages amongst the most popular considered in the literature on managing data through an ontology. Furthermore, since we also study cases where the target query language for expressing source-to-ontology rewritings allows inequality atoms, we also investigate the problem of answering queries with inequalities over lightweight ontologies, a problem that has rarely been addressed. In another direction, we study and advocate the use of a non-monotonic target query language for expressing source-to-ontology rewritings. Last but not least, we provide a detailed discussion of related work, which illustrates how the results achieved in this thesis contribute to new results in the Semantic Web context, in relational database theory, and in view-based query processing.
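    A small hedged example of a source-to-ontology rewriting (hypothetical source schema, mapping and queries, not taken from the thesis): given the mapping m below, the ontology query q_O abstracts the data service q_S; it is a perfect rewriting if, for every source database, the certain answers of q_O computed through the ontology and m coincide with the answers of q_S, while sound and complete rewritings each require only one of the two inclusions.

        m:\ \mathit{emp}(x, y) \;\rightsquigarrow\; \mathit{worksFor}(x, y)
        \qquad
        q_S(x) \leftarrow \mathit{emp}(x, y)
        \qquad
        q_O(x) \leftarrow \mathit{worksFor}(x, y)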

    Horn-Rewritability vs PTime Query Evaluation in Ontology-Mediated Querying

    In ontology-mediated querying with an expressive description logic L, two desirable properties of a TBox T are (1) being able to replace T with a TBox formulated in the Horn fragment of L without affecting the answers to conjunctive queries, and (2) that every conjunctive query can be evaluated in PTime w.r.t. T. We investigate in which cases (1) and (2) are equivalent, finding that the answer depends on whether the unique name assumption (UNA) is made, on the description logic under consideration, and on the nesting depth of quantifiers in the TBox. We also clarify the relationship between query evaluation with and without UNA and consider natural variations of property (1).
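    A standard example of the role played by the UNA (generic, not taken from the paper): for the TBox and ABox below, without the UNA the functionality assertion merely forces b and c to denote the same individual, whereas with the UNA the knowledge base becomes inconsistent, so every Boolean query is trivially entailed; this is the kind of divergence between the two settings that such an analysis has to account for.

        \mathcal{T} = \{\ (\mathsf{funct}\ r)\ \}, \qquad
        \mathcal{A} = \{\ r(a, b),\ r(a, c)\ \}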