44 research outputs found

    Ontology-Based Data Access Using Rewriting, OWL 2 RL Systems and Repairing

    Full text link
    Abstract. In previous work it has been shown how an OWL 2 DL ontology O can be 'repaired' for an OWL 2 RL system ans, that is, how we can compute a set of axioms R that is independent from the data and such that ans, which is generally incomplete for O, becomes complete for all SPARQL queries when used with O ∪ R. However, the initial implementation and experiments were very preliminary and hence it is currently unclear whether the approach can be applied to large and complex ontologies. Moreover, the approach so far can only support instance queries. In the current paper we thoroughly investigate repairing as an approach to scalable (and complete) ontology-based data access. First, we present several non-trivial optimisations to the first prototype. Second, we show how (arbitrary) conjunctive queries can be supported by integrating well-known query rewriting techniques with OWL 2 RL systems via repairing. Third, we perform an extensive experimental evaluation, obtaining encouraging results. In more detail, our results show that we can compute repairs even for very large real-world ontologies in a reasonable amount of time, that the performance overhead introduced by repairing is negligible in small to medium sized ontologies and noticeable but manageable in large and complex ones, and that the hybrid reasoning approach can very efficiently compute the correct answers for real-world challenging scenarios.
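    The repair computation itself is the paper's contribution and is not reproduced here. The sketch below only illustrates the surrounding pattern the abstract describes: assuming a repair R has already been computed, O ∪ R and the data are loaded into an off-the-shelf OWL 2 RL engine (here rdflib with the owlrl materializer) and SPARQL queries are answered over the materialized graph. File names, the namespace and the query are placeholder assumptions.

```python
# Hedged sketch: answer SPARQL queries with an OWL 2 RL engine over O plus a
# precomputed, data-independent repair R. Not the paper's system; rdflib and
# owlrl are real libraries, the file names and query are placeholders.
from rdflib import Graph
from owlrl import DeductiveClosure, OWLRL_Semantics

g = Graph()
g.parse("ontology_O.owl", format="xml")   # the OWL 2 DL ontology O (placeholder)
g.parse("repair_R.owl", format="xml")     # the repair R computed offline (placeholder)
g.parse("data.ttl")                       # the dataset / ABox (placeholder)

# Materialize the OWL 2 RL consequences of O ∪ R ∪ data in place; with the
# repair added, the RL engine is intended to return complete answers.
DeductiveClosure(OWLRL_Semantics).expand(g)

query = """
PREFIX : <http://example.org/>
SELECT ?x WHERE { ?x a :Employee }
"""
for row in g.query(query):
    print(row.x)
```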

    Completeness of ontology-based data access methods: approaches, properties and tools

    Get PDF
    We explore ontology-based data access (OBDA). An ontology provides a conceptual view of a relational data repository. The relational instance is represented as a relational database, while the database schema is expressed in the OWL 2 language. OWL 2 has three profiles that trade expressive power for the computational efficiency of reasoning operations, thereby compromising the completeness of reasoning. We will study how such representation capabilities can be extended to improve the completeness of OBDA methods. Track: Innovation in Software Systems. Red de Universidades con Carreras en Informática (RedUNCI)
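    As a toy illustration of the OBDA setting summarized above (not taken from the thesis), the sketch below compiles subclass axioms of the ontology into the user query, so that a plain SQL engine over the relational instance returns answers that are complete with respect to those axioms. Class and table names are invented.

```python
# Toy OBDA-style query rewriting: subclass axioms are folded into the query as
# a SQL UNION, so the relational engine alone returns ontology-complete answers.
# Illustrative only; class/table names are invented.
subclass_of = {            # A ⊑ B axioms: key is declared a subclass of value
    "Professor": "Employee",
    "Lecturer": "Employee",
}

def subclasses_of(cls):
    """All classes whose instances must also count as answers for `cls`."""
    closure = {cls}
    changed = True
    while changed:
        changed = False
        for sub, sup in subclass_of.items():
            if sup in closure and sub not in closure:
                closure.add(sub)
                changed = True
    return closure

def rewrite(cls):
    """Rewrite 'all instances of cls' into a UNION over one table per class."""
    parts = [f"SELECT id FROM {c.lower()}" for c in sorted(subclasses_of(cls))]
    return "\nUNION\n".join(parts)

print(rewrite("Employee"))
# Produces a three-branch UNION over the employee, lecturer and professor tables.
```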

    Building high-quality merged ontologies from multiple sources with requirements customization

    Get PDF
    Ontologies are the prime way of organizing data in the Semantic Web. Often, it is necessary to combine several, independently developed ontologies to obtain a knowledge graph fully representing a domain of interest. Existing approaches scale rather poorly to the merging of multiple ontologies because they use a binary merge strategy. We therefore investigate the extent to which an n-ary strategy can solve this scalability problem. This thesis contributes the following:
    1. Our n-ary merge strategy takes as input a set of source ontologies and their mappings and generates a merged ontology. For efficient processing, rather than successively merging complete ontologies pairwise, we group related concepts across ontologies into partitions and merge first within and then across those partitions (a sketch follows this abstract).
    2. We take a step towards parameterizable merge methods. We have identified a set of Generic Merge Requirements (GMRs) that merged ontologies might be expected to meet, and we investigate the compatibilities of the GMRs by means of a graph-based method.
    3. When multiple ontologies are merged, inconsistencies can occur due to the different world views encoded in the source ontologies. To this end, we propose a novel Subjective Logic-based method for handling the inconsistencies that arise while merging ontologies. We apply this logic to rank and estimate the trustworthiness of the conflicting axioms that cause inconsistencies within a merged ontology.
    4. To assess the quality of the merged ontologies systematically, we provide a comprehensive set of criteria in an evaluation framework. The proposed criteria cover a variety of characteristics of each individual aspect of the merged ontology in the structural, functional, and usability dimensions.
    5. The final contribution of this research is the development of the CoMerger tool, which implements all aforementioned aspects and is accessible via a unified interface
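    The partitioning step in point 1 can be pictured with a small sketch: concepts linked, directly or transitively, by cross-ontology mappings end up in one partition, and each partition is then merged on its own. This is an illustration with invented names, not the CoMerger implementation.

```python
# Group related concepts across source ontologies into partitions using a
# simple union-find over the mapping graph. Illustration only; not CoMerger.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving keeps the trees shallow
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# Concepts qualified by their source ontology, plus equivalence mappings.
mappings = [
    ("O1:Person", "O2:Human"),
    ("O2:Human", "O3:Individual"),
    ("O1:Paper", "O3:Article"),
]
for a, b in mappings:
    union(a, b)

partitions = {}
for concept in {c for pair in mappings for c in pair}:
    partitions.setdefault(find(concept), set()).add(concept)

for members in partitions.values():
    print(sorted(members))   # each partition is merged first internally, then across
# Prints the two partitions (order may vary):
# ['O1:Person', 'O2:Human', 'O3:Individual'] and ['O1:Paper', 'O3:Article']
```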

    Querying and Repairing Inconsistent Prioritized Knowledge Bases: Complexity Analysis and Links with Abstract Argumentation

    Get PDF
    In this paper, we explore the issue of inconsistency handling over prioritized knowledge bases (KBs), which consist of an ontology, a set of facts, and a priority relation between conflicting facts. In the database setting, a closely related scenario has been studied and led to the definition of three different notions of optimal repairs (global, Pareto, and completion) of a prioritized inconsistent database. After transferring the notions of globally-, Pareto- and completion-optimal repairs to our setting, we study the data complexity of the core reasoning tasks: query entailment under inconsistency-tolerant semantics based upon optimal repairs, existence of a unique optimal repair, and enumeration of all optimal repairs. Our results provide a nearly complete picture of the data complexity of these tasks for ontologies formulated in common DL-Lite dialects. The second contribution of our work is to clarify the relationship between optimal repairs and different notions of extensions for (set-based) argumentation frameworks. Among our results, we show that Pareto-optimal repairs correspond precisely to stable extensions (and often also to preferred extensions), and we propose a novel semantics for prioritized KBs which is inspired by grounded extensions and enjoys favourable computational properties. Our study also yields some results of independent interest concerning preference-based argumentation frameworks.
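    To make the basic objects concrete, the toy sketch below enumerates the repairs of a small inconsistent fact set, i.e. its subset-maximal conflict-free subsets. The priority relation and the globally-, Pareto- and completion-optimal refinements studied in the paper are deliberately left out; the facts and conflicts are invented.

```python
# Toy sketch: repairs of an inconsistent set of facts are its subset-maximal
# conflict-free subsets. Priorities and the optimality notions from the paper
# are omitted; the data below are invented.
from itertools import combinations

facts = ["teaches(a,c1)", "student(a)", "professor(a)"]
conflicts = [frozenset({"student(a)", "professor(a)"})]   # e.g. disjoint classes

def conflict_free(subset):
    return not any(c <= subset for c in conflicts)

candidates = [frozenset(s)
              for r in range(len(facts) + 1)
              for s in combinations(facts, r)
              if conflict_free(frozenset(s))]

# Keep only the subset-maximal conflict-free subsets: these are the repairs.
repairs = [s for s in candidates if not any(s < t for t in candidates)]

for r in repairs:
    print(sorted(r))
# ['student(a)', 'teaches(a,c1)']
# ['professor(a)', 'teaches(a,c1)']
```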

    Debugging and repair of description logic ontologies.

    Get PDF
    Thesis (M.Sc.), University of KwaZulu-Natal, Westville, 2010. In logic-based Knowledge Representation and Reasoning (KRR), ontologies are used to represent knowledge about a particular domain of interest in a precise way. The building blocks of ontologies include concepts, relations and objects. These can be combined to form logical sentences which explicitly describe the domain. With this explicit knowledge one can perform reasoning to derive knowledge that is implicit in the ontology. Description Logics (DLs) are a group of knowledge representation languages with such capabilities that are suitable to represent ontologies. The process of building ontologies has been greatly simplified with the advent of graphical ontology editors such as SWOOP, Protégé and OntoStudio. The result of this is that there is a growing number of ontology engineers attempting to build and develop ontologies. It is frequently the case that errors are introduced while constructing the ontology, resulting in undesirable pieces of implicit knowledge that follow from the ontology. As such there is a need to extend current ontology editors with tool support to aid these ontology engineers in correctly designing and debugging their ontologies. Errors such as unsatisfiable concepts and inconsistent ontologies frequently occur during ontology construction. Ontology Debugging and Repair is concerned with helping the ontology developer to eliminate these errors from the ontology. Much emphasis, in current tools, has been placed on giving explanations as to why these errors occur in the ontology. Less emphasis has been placed on using this information to suggest efficient ways to eliminate the errors. Furthermore, these tools focus mainly on the errors of unsatisfiable concepts and inconsistent ontologies. In this dissertation we fill an important gap in the area by contributing an alternative approach to ontology debugging and repair for the more general error of a list of unwanted sentences. Errors such as unsatisfiable concepts and inconsistent ontologies can be represented as unwanted sentences in the ontology. Our approach not only considers the explanation of the unwanted sentences but also the identification of repair strategies to eliminate these unwanted sentences from the ontology
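    The dissertation's setting can be pictured with the classical justification/hitting-set pattern: each unwanted sentence comes with its justifications (minimal sets of axioms that entail it), and removing at least one axiom from every justification eliminates all unwanted sentences. The sketch below, with invented axiom names, finds minimum-cardinality removal sets; it illustrates that general pattern, not the thesis' own repair strategies.

```python
# Find minimum-cardinality sets of axioms whose removal breaks every
# justification of every unwanted sentence (a minimal hitting set).
# Illustrative only; axiom names are invented.
from itertools import combinations

justifications = [
    {"ax1", "ax2"},        # a justification for unwanted sentence 1
    {"ax2", "ax3"},        # another justification for sentence 1
    {"ax4"},               # a justification for unwanted sentence 2
]
axioms = set().union(*justifications)

def hits_all(candidate):
    return all(candidate & j for j in justifications)

# Enumerate candidate removal sets by increasing size; the first hits found
# are the minimum-cardinality repair plans.
for size in range(1, len(axioms) + 1):
    plans = [set(c) for c in combinations(sorted(axioms), size) if hits_all(set(c))]
    if plans:
        print("remove one of:", plans)   # here: [{'ax2', 'ax4'}]
        break
```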

    Pseudo-contractions as Gentle Repairs

    Get PDF
    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed in the literature in the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas
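    A toy propositional example of the shared idea (my own construction, not the paper's formal operators): rather than deleting a rule that supports an unwanted consequence, replace it with a strictly weaker rule, so the consequence disappears while part of the original content is kept.

```python
# Pseudo-contraction / gentle-repair flavour on Horn rules: weaken a rule
# (by strengthening its body) instead of deleting it outright.
# Toy illustration only; it does not reproduce the paper's operators.
def consequences(facts, rules):
    """Forward chaining; rules are (body_set, head) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

facts = {"penguin"}
rules = [({"penguin"}, "bird"), ({"bird"}, "flies")]
print("flies" in consequences(facts, rules))         # True: the unwanted consequence

# Deleting ({"bird"}, "flies") would lose it entirely; weakening keeps some of it.
gentle = [({"penguin"}, "bird"), ({"bird", "not_penguin"}, "flies")]
print("flies" in consequences(facts, gentle))        # False: consequence removed
print("bird" in consequences(facts, gentle))         # True: other knowledge preserved
```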

    Minimal Definition Signatures: Computation and Application to Ontology Alignment

    Get PDF
    In computer science, ontologies define a domain to facilitate knowledge representation and sharing in a machine-processable way. Ontologies approximate an actual world representation, and thus ontologies will differ for many reasons. Therefore knowledge sharing, and in general semantic interoperability, is inherently hindered or even precluded between heterogeneous ontologies. Ontology matching addresses this fundamental issue by producing alignments, i.e. sets of correspondences that describe relations between semantically related entities of different ontologies. However, alignments are typically incomplete. In order to support and improve ontology alignment, and semantic interoperability in general, this thesis exploits the notion of implicit definability. Implicit definability is a semantic property of ontologies, signatures, and concepts (and roles) stating that whenever the signature is fixed under a given ontology then the definition of a particular concept (or role) is also fixed. This thesis introduces the notion of minimal definition signature (MDS) from which a given entity is implicitly definable, and presents a novel approach that provides an efficient way to compute in practice all MDSs of the definable entities. Furthermore, it investigates the application of MDSs in the context of alignment generation, evaluation, and negotiation (whereby agents cooperatively establish a mutually acceptable alignment to support opportunistic communication within open environments). As implicit definability permits defined entities to be removed without semantic loss, this thesis argues that, if the meaning of the defined entity is wholly fixed by the terms of its definition, only the terms in the definition are required to be mapped in order to map the defined entity itself; thus implicit definability entails a new type of correspondence, the definability-based correspondence. Therefore this thesis defines and explores the properties of definability-based correspondences, and extends several ontology alignment evaluation metrics in order to accommodate their assessment. As task signature coverage is a prerequisite of many knowledge-based tasks (e.g. service invocation), a definability-based, efficient approximation approach to obtaining minimal signature cover sets is presented. Moreover, this thesis outlines a specific alignment negotiation approach and shows that by considering definability, agents are better equipped to: (i) determine whether an alignment provides the necessary coverage to achieve a particular task (align the whole ontology, formulate a message or query); (ii) adhere to privacy and confidentiality constraints; and (iii) minimise the cardinality of the resulting mutual alignment
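    The practical consequence the thesis draws from definability can be sketched in a few lines: if every term in a defined entity's minimal definition signature (MDS) is already covered by an alignment, a definability-based correspondence for the entity itself can be derived by translating its definition terms. The MDS computation is not shown, and every name below is invented.

```python
# Derive a definability-based correspondence when the whole MDS of a defined
# entity is covered by the existing alignment. Illustrative only; the actual
# MDS computation from the thesis is not reproduced, and names are invented.
mds = {                       # defined entity -> one of its MDSs
    "O1:Mother": {"O1:Parent", "O1:Female"},
}
alignment = {                 # existing correspondences O1 -> O2
    "O1:Parent": "O2:Progenitor",
    "O1:Female": "O2:Woman",
}

def derivable_correspondences(mds, alignment):
    derived = {}
    for entity, signature in mds.items():
        if signature <= alignment.keys():            # the whole MDS is mapped
            derived[entity] = {alignment[t] for t in signature}
    return derived

print(derivable_correspondences(mds, alignment))
# {'O1:Mother': {'O2:Progenitor', 'O2:Woman'}}  -- the translated definition terms
```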

    Federated knowledge base debugging in DL-LiteA

    Full text link
    Due to the continuously growing amount of data, the federation of different and distributed data sources has gained increasing attention. In order to tackle the challenge of federating heterogeneous sources, a variety of approaches has been proposed. Especially in the context of the Semantic Web, the application of Description Logics is one of the preferred methods to model federated knowledge based on a well-defined syntax and semantics. However, the more data are available from heterogeneous sources, the higher the risk of inconsistency, which is a serious obstacle to performing reasoning tasks and query answering over a federated knowledge base. For a single knowledge base, the process of knowledge base debugging, comprising the identification and resolution of conflicting statements, has been widely studied, whereas federated settings that integrate a network of loosely coupled data sources (such as LOD sources) have mostly been neglected. In this thesis we tackle the challenging problem of debugging federated knowledge bases and focus on a lightweight Description Logic language, called DL-LiteA, that is aimed at applications requiring efficient and scalable reasoning. After introducing formal foundations such as Description Logics and Semantic Web technologies, we clarify the motivating context of this work and discuss the general problem of information integration based on Description Logics. The main part of this thesis is subdivided into three subjects. First, we discuss the specific characteristics of federated knowledge bases and provide an appropriate approach for detecting and explaining contradictory statements in a federated DL-LiteA knowledge base. Second, we study the representation of the identified conflicts and their relationships as a conflict graph and propose an approach for repair generation based on majority voting and statistical evidence. Third, in order to provide an alternative way of handling inconsistency in federated DL-LiteA knowledge bases, we propose an automated approach for assessing adequate trust values (i.e., probabilities) at different levels of granularity by leveraging probabilistic inference over a graphical model. In the last part of this thesis, we evaluate the previously developed algorithms against a set of large distributed LOD sources. The experimental results show that the proposed approaches are sufficient, efficient and scalable with respect to real-world scenarios. Moreover, because our algorithms exploit the federated structure, the number of identified wrong statements, the quality of the generated repair, and the granularity of the assessed trust values all benefit from an increasing number of integrated sources
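    The majority-voting step mentioned above can be pictured with a small sketch: for each edge of the conflict graph, the statement asserted by more federated sources is kept and the other is scheduled for removal. This is an illustration with invented data, not the thesis' repair-generation algorithm (which also uses statistical evidence).

```python
# Resolve pairwise conflicts by majority voting over the asserting sources.
# Illustration only; data are invented.
support = {                          # statement -> sources asserting it
    "worksFor(bob, ibm)": {"src1", "src2", "src3"},
    "worksFor(bob, sap)": {"src4"},
    "type(bob, Person)":  {"src1", "src2"},
}
conflict_edges = [("worksFor(bob, ibm)", "worksFor(bob, sap)")]  # e.g. functionality violation

to_remove = set()
for a, b in conflict_edges:
    # Keep the statement with more supporting sources; drop the other.
    loser = a if len(support[a]) < len(support[b]) else b
    to_remove.add(loser)

repair = sorted(s for s in support if s not in to_remove)
print(repair)
# ['type(bob, Person)', 'worksFor(bob, ibm)']
```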

    An Ontology-Driven Sociomedical Web 3.0 Framework

    Get PDF
    Web 3.0, the web of social and semantic cooperation, calls for a methodological multidisciplinary architecture in order to reach its mainstream objectives. Given the lack of such an architecture and the reliance of existing efforts on lightweight semantics and RDF graphs, this thesis proposes "Web3.OWL", an ontology-driven framework towards a Web 3.0 knowledge architecture. Meanwhile, online social parenting data and the users of the corresponding websites, known as "mommy bloggers", form one of the fastest-growing online demographics, yet the available literature reflects the very little attention this growth has so far received and the various deficiencies the parenting domain suffers from; these deficiencies all fall under the umbrella of the scarcity of parenting sociomedical analysis and decision-support systems. The Web3.OWL framework puts forward an approach that relies on the Meta-Object Facility for Semantics standard (SMOF) for the management of its expressive OWL (Web Ontology Language) domain ontologies on the one hand, and for the coordination of its various underlying Web 3.0 prerequisite disciplines on the other. Setting off with a holistic portrayal of Web3.OWL's components and workflow, the thesis progresses into a more analytic exploration of its main paradigms. Among its ontology-aware paradigms, it notably highlights both its methodology for handling expressiveness through modularization and projection techniques and algorithms, and its facilities for tagging inference, suggestion and processing. Web3.OWL, albeit generic by conception, proves its efficiency in solving the deficiencies and meeting the requirements of the sociomedical domain of interest. Its ontology for parenting analysis and surveillance, baptised "ParOnt", strongly contributes to the backbone metamodel and the various constituents of this ontology-driven framework. Accordingly, as the workflow revolves around Description Logics principles and OWL 2 profiles, along with standard and beyond-standard reasoning techniques, the conducted experiments and competency questions are illustrated, thus establishing the required Web 3.0 outcomes. In conclusion, the empirical results of the various preliminary decision-support and recommendation services targeting parenting public awareness, orientation and education ascertain the value and potential of the proposed conceptual framework