
    On the evolution of ontologies using probabilistic description logics

    Exceptions play an important role in conceptualizing data, especially when new knowledge is introduced or existing knowledge changes. Furthermore, real-world data is often contradictory and uncertain. Current formalisms for conceptualizing data, such as Description Logics, rely on first-order logic. As a consequence, they are ill-suited to handling exceptional, inconsistent, and uncertain data, particularly when the knowledge base evolves over time. This paper investigates the use of Probabilistic Description Logics as a formalism for the evolution of ontologies that conceptualize real-world data. Different scenarios are presented for the automatic handling of inconsistencies during ontology evolution.
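
    A minimal illustration of the underlying problem (the bird/penguin example is ours, not taken from the abstract): a classical TBox containing

        Bird ⊑ Flies
        Penguin ⊑ Bird
        Penguin ⊑ ¬Flies

    entails Penguin ⊑ ⊥, i.e. the concept Penguin is unsatisfiable, and a single assertion Penguin(tweety) renders the whole knowledge base inconsistent. Under first-order semantics there is no way to read the first axiom as "birds typically fly", which is exactly the reading a probabilistic or default formalism provides.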

    Structure preserving TBox repair using defaults

    Unsatisfiable concepts are a major cause of inconsistencies in Description Logics knowledge bases. Popular methods for repairing such concepts aim to remove or rewrite axioms so that the conflict is resolved within the original logic. Under certain conditions, however, the structure and intention of the original axioms must be preserved in the knowledge base. This, in turn, requires changing the underlying logic used for repair. In this paper, we show how Probabilistic Description Logics, a variant of Reiter's default logics with Lehmann's Lexicographical Entailment, can be used to resolve conflicts fully automatically and to obtain a consistent knowledge base from which inferences can be drawn again.
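
    A sketch of the repair idea on the toy TBox above (our own simplified notation, not the paper's): rather than deleting an axiom, the conflicting subsumption is weakened into a default while the remaining axioms stay strict,

        Bird ⊑d Flies        (default: birds typically fly)
        Penguin ⊑ Bird       (strict)
        Penguin ⊑ ¬Flies     (strict)

    Under Lexicographical Entailment the default is overridden for the more specific concept Penguin, so Penguin becomes satisfiable again while ordinary birds still inherit Flies. Every original axiom remains in the TBox; only its logical strength changes, which is what "structure preserving" refers to.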

    Unsupervised conflict-free ontology evolution without removing axioms

    In the early days of the Semantic Web, ontologies were usually constructed once by a single knowledge engineer and then used as a static conceptualization of some domain. Nowadays, knowledge bases evolve dynamically and incorporate new knowledge from heterogeneous domains -- some of it contributed by casual users (i.e., non-knowledge engineers) or even by software agents. Given that ontologies are based on the rather strict formalism of Description Logics and their inference procedures, conflicts are likely to occur during ontology evolution. Conflicts, in turn, may cause an ontological knowledge base to become inconsistent, making reasoning impossible. Hence, every formalism for ontology evolution should provide a mechanism for resolving conflicts. In this paper, we provide a general framework for conflict-free ontology evolution without changing the knowledge representation. Using a variant of Lehmann's Default Logics and Probabilistic Description Logics, we can invalidate unwanted implicit inferences without removing explicitly stated axioms. We show that this method outperforms classical ontology repair w.r.t. the amount of information lost, while allowing for automatic conflict solving when evolving ontologies.
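
    To see why this can lose less information than classical repair (again a toy example of ours, not the paper's): extend the TBox above with Sparrow ⊑ Bird. A classical repair that removes Bird ⊑ Flies also destroys the wanted inference Sparrow ⊑ Flies; the default-based approach instead invalidates only the single unwanted inference Penguin ⊑ Flies, and Sparrow ⊑ Flies continues to hold while all explicitly stated axioms remain in place.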

    Default logics for plausible reasoning with controversial axioms

    Using a variant of Lehmann's Default Logics and Probabilistic Description Logics, we recently presented a framework that invalidates those unwanted inferences that cause concept unsatisfiability, without the need to remove explicitly stated axioms. The solutions of this method were shown to outperform classical ontology repair w.r.t. the number of inferences invalidated. However, conflicts may still exist in the knowledge base and can make reasoning ambiguous. Furthermore, solutions with a minimal number of invalidated inferences do not necessarily minimize the number of conflicts. In this paper, we give an overview of how to find solutions that have a minimal number of conflicts while invalidating as few inferences as possible. Specifically, we propose to evaluate solutions w.r.t. the quantity of information they convey by resorting to the notion of entropy, and we discuss a possible approach towards computing the entropy w.r.t. an ABox.
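
    A minimal sketch of the entropy criterion as we read it (the probability model and function below are our own illustration, not the paper's ABox-based definition):

        import math

        def entropy(probabilities):
            # Shannon entropy H = -sum(p * log2(p)) of a discrete distribution,
            # the standard measure of information content.
            return -sum(p * math.log2(p) for p in probabilities if p > 0)

        # Hypothetical concept-membership probabilities that two candidate
        # solutions induce over an ABox individual (illustrative numbers only);
        # comparing these values is one way to rank solutions by the quantity
        # of information they convey.
        solution_a = [0.5, 0.5]
        solution_b = [0.9, 0.1]

        print(entropy(solution_a))  # 1.0 bit
        print(entropy(solution_b))  # ~0.47 bits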

    TripleWave: Spreading RDF streams on the Web

    Processing data streams is increasingly gaining momentum, given the need to process these flows of information in real time and at Web scale. In this context, RDF Stream Processing (RSP) and Stream Reasoning (SR) have emerged as solutions for combining semantic technologies with stream and event processing techniques. Research in these areas has produced an ecosystem of solutions to query, reason, and perform real-time processing over heterogeneous and distributed data streams on the Web. However, so far one basic building block has been missing: a mechanism to disseminate and exchange RDF streams on the Web. In this work, we close this gap by proposing TripleWave, a reusable and generic tool that enables the publication of RDF streams on the Web. The features of TripleWave were selected based on the requirements of real use cases and support a diverse set of scenarios, independent of any specific RSP implementation. TripleWave can be fed with existing Web streams (e.g., Twitter and Wikipedia streams) or with time-annotated RDF datasets (e.g., the Linked Sensor Data dataset). It can be invoked through both pull- and push-based mechanisms, thus enabling RSP engines to automatically register and receive data from TripleWave.
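
    As an illustration of what a time-annotated RDF stream element might look like (a minimal sketch; the vocabulary, URIs, and function below are our assumptions, not TripleWave's actual output format):

        import json
        import time

        def tweet_to_stream_item(tweet_id, text):
            # Wrap a tweet-like record as a named graph annotated with its
            # generation time, serialized as JSON-LD.
            return {
                "@context": {
                    "sioc": "http://rdfs.org/sioc/ns#",
                    "prov": "http://www.w3.org/ns/prov#",
                },
                "@id": "http://example.org/stream/" + str(tweet_id),
                "prov:generatedAtTime": time.strftime(
                    "%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "@graph": [{
                    "@id": "http://example.org/tweet/" + str(tweet_id),
                    "@type": "sioc:Post",
                    "sioc:content": text,
                }],
            }

        print(json.dumps(tweet_to_stream_item(42, "hello stream"), indent=2))

    A pull-based consumer could poll such items over HTTP, while a push-based one would receive them as they are produced; the abstract mentions both invocation styles.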