
    Syntactic vs. Semantic Locality: How Good Is a Cheap Approximation?

    Extracting a subset of a given OWL ontology that captures all of the ontology's knowledge about a specified set of terms is a well-understood task. This task can be based, for instance, on locality-based modules (LBMs). These come in two flavours, syntactic and semantic, and a syntactic LBM is known to contain the corresponding semantic LBM. For syntactic LBMs, polynomial extraction algorithms are known, implemented in the OWL API, and in active use. In contrast, extracting semantic LBMs involves reasoning, which is intractable for OWL 2 DL, and such extraction had not previously been implemented for expressive ontology languages. We present the first implementation of semantic LBMs and report on experiments that compare them with syntactic LBMs extracted from real-life ontologies. Our study reveals whether semantic LBMs are worth the additional extraction effort compared with syntactic LBMs.
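
    Since the abstract notes that syntactic LBM extraction ships with the OWL API, the following is a minimal sketch of how such a module is typically extracted (assuming OWL API 4/5; the ontology IRI, the signature terms, and the choice of module type are placeholder assumptions, not taken from the paper).

        import java.util.Set;

        import org.semanticweb.owlapi.apibinding.OWLManager;
        import org.semanticweb.owlapi.model.*;

        import uk.ac.manchester.cs.owlapi.modularity.ModuleType;
        import uk.ac.manchester.cs.owlapi.modularity.SyntacticLocalityModuleExtractor;

        public class SyntacticLbmSketch {
            public static void main(String[] args) throws OWLOntologyCreationException {
                OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
                // Placeholder ontology location; any loadable OWL document works here.
                OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
                        IRI.create("https://example.org/ontology.owl"));

                // The signature: the set of terms whose knowledge the module must capture.
                OWLDataFactory df = manager.getOWLDataFactory();
                Set<OWLEntity> signature = Set.of(
                        df.getOWLClass(IRI.create("https://example.org/ontology.owl#Heart")),
                        df.getOWLClass(IRI.create("https://example.org/ontology.owl#Organ")));

                // BOT (bottom-locality) module; TOP and STAR variants are also available.
                SyntacticLocalityModuleExtractor extractor =
                        new SyntacticLocalityModuleExtractor(manager, ontology, ModuleType.BOT);
                Set<OWLAxiom> module = extractor.extract(signature);
                System.out.println("Syntactic bottom-module size: " + module.size() + " axioms");
            }
        }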

    Deductive Module Extraction for Expressive Description Logics: Extended Version

    In deductive module extraction, we determine a small subset of an ontology for a given vocabulary that preserves all logical entailments expressible in that vocabulary. While stronger module notions have been discussed in the literature, we argue that for applications in ontology analysis and ontology reuse, deductive modules, which are decidable and potentially smaller, are often sufficient. We present methods based on uniform interpolation for extracting different variants of deductive modules, satisfying properties such as completeness, minimality, and robustness under replacements, the latter being particularly relevant for ontology reuse. An evaluation of our implementation shows that the modules computed by our method are often significantly smaller than those computed by existing methods. This is an extended version of the article in the proceedings of IJCAI 2020.
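
    For orientation, here is a sketch of the underlying notion as we read the abstract (the paper's concrete module variants add further conditions, such as minimality and robustness under replacements): a deductive module M of an ontology O for a vocabulary Σ preserves exactly the Σ-entailments of O.

        % Deductive Σ-module (sketch of the standard notion; details as in the paper):
        % M ⊆ O is a deductive module of O for vocabulary Σ iff every sentence α
        % built over Σ that follows from O already follows from M.
        \[
          \mathcal{M} \subseteq \mathcal{O}
          \quad\text{and}\quad
          \forall \alpha \in \mathcal{L}(\Sigma):\;
          \mathcal{O} \models \alpha \iff \mathcal{M} \models \alpha
        \]
        % Since M ⊆ O, the right-to-left direction is immediate; the substance lies
        % in showing that no Σ-entailment of O is lost.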

    Separating Data Examples by Description Logic Concepts with Restricted Signatures

    We study the separation of positive and negative data examples in terms of description logic concepts in the presence of an ontology. In contrast to previous work, we add a signature that specifies a subset of the symbols that can be used for separation, and we admit individual names in that signature. We consider weak and strong versions of the resulting problem that differ in how the negative examples are treated, and we distinguish between separation with and without helper symbols. Within this framework, we compare the separating power of different languages and investigate the complexity of deciding separability. While weak separability is shown to be closely related to conservative extensions, strongly separating concepts coincide with Craig interpolants, for suitably defined encodings of the data and ontology. This enables us to transfer known results from those fields to separability. Conversely, we obtain original results on separability that can be transferred backward. For example, rather surprisingly, conservative extensions and weak separability in ALCO are both 3ExpTime-complete.
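
    As a rough sketch of the weak/strong distinction drawn above (our reading, not the paper's exact definitions): given an ontology O, positive examples P, negative examples N, and a separating concept C over the restricted signature Σ, the two variants differ only in how the negative examples are treated.

        % Weak vs. strong separability (sketch; P = positive, N = negative examples):
        \[
          \text{weak:}\quad
          \mathcal{O} \models C(a) \text{ for all } a \in P
          \quad\text{and}\quad
          \mathcal{O} \not\models C(b) \text{ for all } b \in N
        \]
        \[
          \text{strong:}\quad
          \mathcal{O} \models C(a) \text{ for all } a \in P
          \quad\text{and}\quad
          \mathcal{O} \models \lnot C(b) \text{ for all } b \in N
        \]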

    Modularity Through Inseparability: Algorithms, Extensions, and Evaluation

    Module extraction is the task of computing, given a description logic ontology and a signature Σ of interest, a subset (called a module) such that, for certain applications that concern only Σ, the ontology can be equivalently replaced by the module. In most applications of module extraction it is desirable to compute a module which is as small as possible, and where possible a minimal one. In logic-based approaches to module extraction the most popular way to define modules is via inseparability relations, the strongest and most robust of these being model Σ-inseparability, where two ontologies are called Σ-inseparable iff the Σ-reducts of their models coincide. A Σ-module is then defined as a Σ-inseparable subset of the ontology. Unfortunately, deciding whether a subset of an ontology is a minimal Σ-module is of prohibitively high complexity, and often undecidable, even for ontologies formulated in moderately expressive logics, so approximation algorithms are required. Instead of computing a minimal Σ-module one computes some Σ-module, and the main research task is to minimise the size of these modules, that is, to compute an approximation of a minimal Σ-module. This thesis considers research surrounding approximations based on the model Σ-inseparability relation, including improving and extending existing approximation algorithms, providing highly optimised implementations, and introducing a new methodology to evaluate just how well approximations approximate minimal modules, all supported by a significant empirical investigation.
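
    The central definition used above, written out (a sketch consistent with the abstract's wording): two ontologies are model Σ-inseparable when their models, restricted to Σ, form the same class of interpretations, and a Σ-module is a subset that is model Σ-inseparable from the whole ontology.

        % Model Σ-inseparability and Σ-modules (as described in the abstract):
        \[
          \mathcal{O}_1 \equiv^{\mathrm{mod}}_{\Sigma} \mathcal{O}_2
          \;\;\text{iff}\;\;
          \{\, \mathcal{I}|_{\Sigma} : \mathcal{I} \models \mathcal{O}_1 \,\}
          =
          \{\, \mathcal{J}|_{\Sigma} : \mathcal{J} \models \mathcal{O}_2 \,\}
        \]
        \[
          \mathcal{M} \text{ is a } \Sigma\text{-module of } \mathcal{O}
          \;\;\text{iff}\;\;
          \mathcal{M} \subseteq \mathcal{O}
          \;\text{and}\;
          \mathcal{M} \equiv^{\mathrm{mod}}_{\Sigma} \mathcal{O}
        \]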

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades, with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
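
    A toy illustration of the weakening idea (hypothetical axioms, not drawn from the paper): to retract the unwanted consequence Penguin ⊑ Flies from {Penguin ⊑ Bird, Bird ⊑ Flies}, a classical contraction might delete Bird ⊑ Flies outright, whereas a pseudo-contraction or gentle repair replaces it with a weaker axiom that no longer yields the consequence.

        % Weakening instead of deleting (toy example; axiom names are hypothetical):
        \[
          \mathit{Bird} \sqsubseteq \mathit{Flies}
          \quad\rightsquigarrow\quad
          \mathit{Bird} \sqcap \lnot \mathit{Penguin} \sqsubseteq \mathit{Flies}
        \]
        % The weakened ontology no longer entails Penguin ⊑ Flies, but it retains
        % the information that non-penguin birds fly.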