
    Belief Revision, Minimal Change and Relaxation: A General Framework based on Satisfaction Systems, and Applications to Description Logics

    Belief revision of knowledge bases represented by a set of sentences in a given logic has been extensively studied, but only for specific logics, mainly propositional, and more recently Horn and description logics. Here, we propose to generalize this operation from a model-theoretic point of view, by defining revision in an abstract model theory known as satisfaction systems. In this framework, we generalize to arbitrary satisfaction systems the characterization of the well-known AGM postulates given by Katsuno and Mendelzon for propositional logic in terms of minimal change among interpretations. Moreover, we study how to define revision satisfying the AGM postulates from relaxation notions that were first introduced in description logics to define dissimilarity measures between concepts, and whose effect is to relax the set of models of the old belief until it becomes consistent with the new pieces of knowledge. We show how the proposed general framework can be instantiated in different logics such as propositional, first-order, description and Horn logics. In particular, for description logics we introduce several concrete relaxation operators tailored for the description logic ALC and its fragments EL and ELext, discuss their properties and provide some illustrative examples.
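
    To make the minimal-change idea concrete, below is a small, self-contained sketch of the classical propositional instance (in the spirit of the Katsuno-Mendelzon characterization, using Hamming distance between interpretations as in Dalal's operator), not the satisfaction-system framework of the paper; the function names and the encoding of formulas as Python predicates are illustrative assumptions.

```python
from itertools import product

def models(formula, variables):
    """Enumerate all truth assignments over `variables` that satisfy `formula`."""
    return [dict(zip(variables, values))
            for values in product([False, True], repeat=len(variables))
            if formula(dict(zip(variables, values)))]

def hamming(m1, m2):
    """Number of variables on which two assignments disagree."""
    return sum(m1[v] != m2[v] for v in m1)

def revise(psi, mu, variables):
    """Dalal-style revision: keep the models of the new information `mu`
    that are at minimal Hamming distance from the models of the old belief
    `psi` (a propositional instance of minimal change among interpretations)."""
    old_models = models(psi, variables)
    new_models = models(mu, variables)
    if not new_models:
        return []
    if not old_models:                      # old belief inconsistent: just take mu
        return new_models
    dist = {i: min(hamming(m, o) for o in old_models)
            for i, m in enumerate(new_models)}
    best = min(dist.values())
    return [new_models[i] for i, d in dist.items() if d == best]

# Example: psi = a AND b, mu = NOT a; the revised belief keeps b true.
variables = ["a", "b"]
psi = lambda m: m["a"] and m["b"]
mu = lambda m: not m["a"]
print(revise(psi, mu, variables))   # [{'a': False, 'b': True}]
```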

    Repairing Ontologies via Axiom Weakening.

    Ontology engineering is a hard and error-prone task, in which small changes may lead to errors, or even produce an inconsistent ontology. As ontologies grow in size, the need increases for automated methods that repair inconsistencies while preserving as much of the original knowledge as possible. Most previous approaches to this task are based on removing a few axioms from the ontology to regain consistency. We propose a new method based on weakening these axioms to make them less restrictive, employing refinement operators. We introduce the theoretical framework for weakening DL ontologies, propose algorithms to repair ontologies based on the framework, and provide an analysis of the computational complexity. Through an empirical analysis over real-life ontologies, we show that our approach preserves significantly more of the original knowledge of the ontology than removing axioms.
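
    As a rough illustration of the repair-by-weakening idea (a toy stand-in, not the paper's refinement operators or DL reasoning; the concept hierarchy, axiom encoding and incoherence test below are invented for the example), one can weaken the right-hand side of an offending subsumption step by step instead of deleting the axiom outright:

```python
# Toy sketch: TBox axioms are atomic subsumptions (A, B) meaning A <= B.
# A concept is incoherent if its superclass closure contains two concepts
# declared disjoint; weakening lifts the right-hand side to a more general
# concept from a fixed reference hierarchy (and drops the axiom at "Top").

PARENT = {"Dog": "Mammal", "Cat": "Mammal", "Mammal": "Animal",
          "Animal": "Top", "Plant": "Top"}            # hypothetical hierarchy

def generalize(concept):
    """One step of the (toy) refinement operator: move to the parent concept."""
    return PARENT.get(concept, "Top")

def superclasses(concept, tbox):
    """Transitive closure of `concept` under the subsumption axioms in `tbox`."""
    closure, frontier = {concept}, [concept]
    while frontier:
        c = frontier.pop()
        for lhs, rhs in tbox:
            if lhs == c and rhs not in closure:
                closure.add(rhs)
                frontier.append(rhs)
    return closure

def clash(tbox, disjoint):
    """Return an axiom whose left-hand side is incoherent, or None."""
    for lhs, rhs in tbox:
        closure = superclasses(lhs, tbox)
        for a, b in disjoint:
            if a in closure and b in closure:
                return (lhs, rhs)
    return None

def repair_by_weakening(tbox, disjoint):
    """Weaken (rather than remove) axioms until no concept is incoherent."""
    tbox = list(tbox)
    while (axiom := clash(tbox, disjoint)) is not None:
        tbox.remove(axiom)
        lhs, rhs = axiom
        weaker = generalize(rhs)
        if weaker != "Top":                 # A <= Top is vacuous, so drop it
            tbox.append((lhs, weaker))
    return tbox

# Whale <= Dog is weakened to Whale <= Mammal; Whale <= Plant ends up vacuous
# and is dropped, keeping more knowledge than plain axiom removal would.
tbox = [("Whale", "Dog"), ("Whale", "Plant"),
        ("Dog", "Mammal"), ("Mammal", "Animal")]
disjoint = [("Animal", "Plant")]
print(repair_by_weakening(tbox, disjoint))
```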

    Towards a non-monotonic description logics model

    In order to deal with the ontology change problem in an environment where Description Logics (DLs) are used to describe ontologies, the question of how to integrate distributed ontologies is closely related to belief revision, since DL terminologies may define the same concepts over world models that are not necessarily the same. A possible alternative for reasoning about these concepts is to generate unique concept descriptions in a different terminology. This new terminology needs to be created consistently, addressing the minimal-change problem, and moreover providing a non-monotonic layer for expressing ontological knowledge so that it can later be updated with new distributed ontologies.

    Ontological Analysis For Description Logics Knowledge Base Debugging

    Formal ontology provides axiomatizations of domain-independent principles which, among other applications, can be used to identify modeling errors within a knowledge base. The OntoClean methodology is probably the best-known illustration of this strategy, but its cost in terms of manual work is often considered dissuasive. This article investigates the applicability of such debugging strategies to Description Logics knowledge bases, showing that even a partial and shallow analysis rapidly performed with a top-level ontology can reveal the presence of violations of common sense, and that the bottleneck, if there is one, may instead reside in the resolution of the resulting inconsistency or incoherence.
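
    For a flavor of what such a shallow meta-property analysis looks like (a toy illustration, not the article's procedure; the class names, rigidity tags and subsumptions below are invented), one OntoClean-style check flags subsumptions in which a rigid class sits under an anti-rigid one:

```python
# Hypothetical rigidity annotations and subsumptions for illustration only.
RIGIDITY = {"Person": "rigid", "Student": "anti-rigid",
            "Organization": "rigid", "Customer": "anti-rigid"}

SUBSUMPTIONS = [("Person", "Student"),         # Person <= Student: suspicious
                ("Customer", "Organization")]  # Customer <= Organization: fine

def rigidity_violations(subsumptions, rigidity):
    """Flag subsumptions where a rigid subclass has an anti-rigid superclass."""
    return [(sub, sup) for sub, sup in subsumptions
            if rigidity.get(sub) == "rigid" and rigidity.get(sup) == "anti-rigid"]

print(rigidity_violations(SUBSUMPTIONS, RIGIDITY))  # [('Person', 'Student')]
```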

    Datalog± Ontology Consolidation

    Knowledge bases in the form of ontologies are receiving increasing attention, as they make it possible to clearly represent both the available knowledge itself and the constraints imposed on it by the domain or the users. In particular, Datalog± ontologies are attractive because of their decidability and their ability to deal with the massive amounts of data found in real-world environments; however, as with many other ontological languages, their use in collaborative environments often leads to inconsistency-related issues. In this paper we introduce the notion of incoherence for Datalog± ontologies, in terms of the satisfiability of sets of constraints, and show how under specific conditions incoherence leads to inconsistent Datalog± ontologies. The main contribution of this work is a novel approach to restoring both consistency and coherence in Datalog± ontologies. The proposed approach is based on kernel contraction, and restoration is performed by applying incision functions that select formulas to delete. However, instead of working over the minimal incoherent/inconsistent sets found in the ontologies, our operators produce incisions over non-minimal structures called clusters. We present a construction for consolidation operators, along with the properties they are expected to satisfy. Finally, we establish the relation between the construction and the properties by means of a representation theorem. Although this proposal is presented for the consolidation of Datalog± ontologies, the operators can be applied to other ontological languages, such as Description Logics, making them apt for use in collaborative environments like the Semantic Web.
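
    A minimal sketch of the kernel-contraction-with-clusters idea follows, using plain propositional literals as a stand-in for Datalog± formulas; the inconsistency test, the cluster construction and the incision choice are simplifications assumed for illustration, not the operators defined in the paper.

```python
from itertools import combinations

def inconsistent(kb):
    """Toy inconsistency test: a set of literals clashes if it contains
    some atom together with its negation (e.g. 'p' and '-p')."""
    return any(lit.startswith("-") and lit[1:] in kb for lit in kb)

def kernels(kb):
    """All minimal inconsistent subsets (kernels) of `kb`, by brute force."""
    found = []
    for size in range(1, len(kb) + 1):
        for subset in combinations(sorted(kb), size):
            s = set(subset)
            if inconsistent(s) and not any(k < s for k in found):
                found.append(s)
    return found

def clusters(kernel_sets):
    """Group kernels that share formulas into clusters (non-minimal structures)."""
    groups = []
    for k in kernel_sets:
        overlapping = [g for g in groups if g & k]
        merged = set(k).union(*overlapping) if overlapping else set(k)
        groups = [g for g in groups if not (g & k)] + [merged]
    return groups

def incision(cluster):
    """Toy incision function: keep cutting the lexicographically smallest
    formula until the cluster no longer clashes."""
    cut, remaining = set(), set(cluster)
    while inconsistent(remaining):
        victim = min(remaining)
        cut.add(victim)
        remaining.discard(victim)
    return cut

def consolidate(kb):
    """Kernel-style consolidation: remove the union of all incisions."""
    cut = set()
    for cluster in clusters(kernels(kb)):
        cut |= incision(cluster)
    return kb - cut

kb = {"p", "-p", "q", "-q", "r"}
print(consolidate(kb))   # {'p', 'q', 'r'} with this (arbitrary) incision choice
```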

    Prioritized Base Debugging in Description Logics

    The problem investigated is the identification, within an input knowledge base, of axioms that should preferably be discarded (or amended) in order to restore consistency or coherence, or to get rid of undesired consequences. Most existing strategies for this task in Description Logics rely on conflicts, either computing all minimal conflicts beforehand or generating conflicts on demand, using diagnosis. The article studies how prioritized base revision can be effectively applied in the former case. The first main contribution is the observation that for each axiom appearing in a minimal conflict, two bases can be obtained at negligible cost, representing the part of the input knowledge that must be preserved if this axiom is discarded or retained respectively, and which may serve as a basis for a semantically motivated preference relation over these axioms. The second main contribution is an algorithm which, assuming this preference relation is known, selects some of the maximal consistent/coherent subsets of the input knowledge base accordingly, without the need to compute all of them.
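
    The selection step can be illustrated with a small sketch (assuming the preference relation is given as numeric priorities; this is not the article's algorithm, which also handles coherence and reuses precomputed minimal conflicts): axioms are scanned from most to least preferred and kept whenever they do not break consistency, yielding one maximal consistent subset compatible with the preferences.

```python
def consistent(axioms):
    """Toy consistency test over literals: no atom occurs with its negation."""
    return not any(a.startswith("-") and a[1:] in axioms for a in axioms)

def prioritized_repair(base, preference):
    """Greedy prioritized selection: scan axioms from most to least preferred
    and keep each one that does not break consistency; the result is one
    maximal consistent subset compatible with `preference`."""
    kept = set()
    for axiom in sorted(base, key=preference, reverse=True):
        if consistent(kept | {axiom}):
            kept.add(axiom)
    return kept

# Hypothetical priorities: a higher score means a more entrenched axiom.
base = {"p", "-p", "q", "-q", "r"}
priority = {"p": 5, "-p": 1, "q": 2, "-q": 4, "r": 3}
print(prioritized_repair(base, priority.get))   # {'p', '-q', 'r'} (order may vary)
```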