
    On the Satisfiability of Quasi-Classical Description Logics

    Although quasi-classical description logic (QCDL) can tolerate inconsistency in description logic reasoning, a knowledge base in QCDL may still have no model. In this paper, we investigate the satisfiability of QCDL, namely QC-coherency and QC-consistency, and develop a tableau calculus, as a formal proof system, to determine whether a knowledge base in QCDL is QC-consistent. To do so, we adapt the standard tableau for DL by introducing several new expansion rules and defining a new closure condition. Finally, we prove that this calculus is sound and complete. Based on this calculus, we implement an OWL paraconsistent reasoner called QC-OWL. Preliminary experiments show that QC-OWL is highly efficient in checking QC-consistency.
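    To illustrate the kind of machinery involved: below is a minimal propositional tableau sketch showing the two ingredients the paper modifies for QCDL, expansion rules and a closure condition. It illustrates tableau calculi in general, not the QC-OWL calculus itself, and the NNF formula encoding is an assumption of the sketch.

```python
# A minimal propositional tableau: expansion rules plus a closure
# condition, the two ingredients the QCDL calculus modifies.
# Formulas are assumed to be in negation normal form, encoded as
# ('lit', name, polarity), ('and', a, b), or ('or', a, b).

def satisfiable(branch):
    """Return True iff the set of NNF formulas on this branch is satisfiable."""
    # Closure condition: the branch closes if it contains p and not-p.
    lits = {(f[1], f[2]) for f in branch if f[0] == 'lit'}
    if any((name, not pol) in lits for (name, pol) in lits):
        return False
    for f in branch:
        if f[0] == 'and':                 # alpha rule: extend the branch
            return satisfiable((branch - {f}) | {f[1], f[2]})
        if f[0] == 'or':                  # beta rule: split the branch
            rest = branch - {f}
            return satisfiable(rest | {f[1]}) or satisfiable(rest | {f[2]})
    return True                           # fully expanded open branch

# (p or q) and not-p is satisfiable (choose q):
phi = ('and', ('or', ('lit', 'p', True), ('lit', 'q', True)),
              ('lit', 'p', False))
print(satisfiable({phi}))  # True
```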

    Handling Inconsistency in Knowledge Bases

    Real-world automated reasoning systems based on classical logic face logically inconsistent information and must cope with it. Developing such systems is onerous because classical logic is explosive. Recently, progress has been made towards semantics that deal with logical inconsistency, but such semantics have never been analyzed from the perspective of an inconsistency-tolerant relational model. In our research work, we use an inconsistency- and incompleteness-tolerant relational model called the paraconsistent relational model. The paraconsistent relational model is an extension of the ordinary relational model that can store not only positive but also negative information. A piece of information in this model therefore has one of four truth values: true, false, both, and unknown. However, the paraconsistent relational model cannot represent disjunctive information (disjunctive tuples). We therefore introduce an extended model called the disjunctive paraconsistent relational model. Using both models, we handle inconsistency (in the spirit of quasi-classical logic or four-valued logic) in deductive databases (logic programs without function symbols). In addition to handling inconsistencies in extended databases, we also apply inconsistency-tolerant reasoning techniques to Semantic Web knowledge bases. Specifically, we handle inconsistency associated with closed predicates in the Semantic Web, again using the paraconsistent approach. We further extend the same idea to description logic programs (combinations of the Semantic Web and logic programs) and introduce dl-relations to represent inconsistency associated with description logic programs.
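    The four truth values named in the abstract correspond to Belnap's four-valued logic, in which a fact carries independent evidence for and against it. The sketch below shows these values and their connectives; the class name and operations are illustrative, not the model's actual definitions.

```python
# The four truth values of the paraconsistent relational model, in the
# style of Belnap's logic FOUR: each fact carries independent evidence
# for and against it. Names and operations here are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class FourValued:
    pro: bool   # evidence that the fact holds
    con: bool   # evidence that the fact fails

    def name(self):
        return {(True, False): 'true', (False, True): 'false',
                (True, True): 'both', (False, False): 'unknown'}[(self.pro, self.con)]

    def __and__(self, other):  # conjunction: meet in the truth ordering
        return FourValued(self.pro and other.pro, self.con or other.con)

    def __or__(self, other):   # disjunction: join in the truth ordering
        return FourValued(self.pro or other.pro, self.con and other.con)

    def neg(self):             # negation swaps the evidence flags
        return FourValued(self.con, self.pro)

TRUE, FALSE = FourValued(True, False), FourValued(False, True)
BOTH, UNKNOWN = FourValued(True, True), FourValued(False, False)

print((BOTH & TRUE).name())   # 'both'  (inconsistency survives conjunction with true)
print((BOTH & FALSE).name())  # 'false' (but is absorbed by false)
```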

    Knowledge base ontological debugging guided by linguistic evidence

    When they grow in size, knowledge bases (KBs) tend to include sets of axioms which are intuitively absurd but nonetheless logically consistent. This is particularly true of data expressed in OWL, as part of the Semantic Web framework, which favors the aggregation of sets of statements from multiple sources of knowledge with overlapping signatures. Identifying nonsense is essential if one wants to avoid undesired inferences, but the sparse usage of negation within these datasets generally prevents the detection of such cases on a strictly logical basis. And even if the KB is inconsistent, identifying the axioms responsible for the nonsense remains a non-trivial task. This thesis investigates the usage of automatically gathered linguistic evidence to detect and repair violations of common sense within such datasets. The main intuition consists in exploiting distributional similarity between named individuals of an input KB in order to identify consequences which are unlikely to hold if the rest of the KB does. The repair phase then consists in selecting axioms to be preferably discarded (or at least amended) in order to get rid of the nonsense. A second strategy is also presented, which consists in strengthening the input KB with a foundational ontology in order to obtain an inconsistency, before performing a form of knowledge base debugging/revision which incorporates this linguistic input. This last step may also be applied directly to an inconsistent input KB. These propositions are evaluated on different sets of statements drawn from the Linked Open Data cloud, as well as on datasets of higher quality which were automatically degraded for the evaluation. The results suggest that distributional evidence may indeed constitute a relevant common ground for deciding between conflicting axioms.
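    As a rough illustration of the core intuition, the sketch below flags a consequence relating two named individuals when their distributional representations are dissimilar. The embeddings, individual names, and threshold are all hypothetical; the thesis' actual scoring is more involved.

```python
# Sketch: flag a consequence (here an owl:sameAs-style equality) as
# suspicious when the two individuals are distributionally dissimilar.
# Vectors, names, and threshold are hypothetical placeholders.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical distributional vectors for named individuals of the KB.
embeddings = {
    'dbpedia:Paris':  np.array([0.9, 0.1, 0.0]),
    'dbpedia:France': np.array([0.8, 0.2, 0.1]),
    'dbpedia:Banana': np.array([0.0, 0.1, 0.95]),
}

def suspicious(a, b, threshold=0.5):
    """Flag a consequence whose two arguments are distributionally far apart."""
    return cosine(embeddings[a], embeddings[b]) < threshold

print(suspicious('dbpedia:Paris', 'dbpedia:France'))  # False: plausible
print(suspicious('dbpedia:Paris', 'dbpedia:Banana'))  # True: likely nonsense
```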

    A survey of large-scale reasoning on the Web of data

    As more and more data is generated by sensor networks, social media, and organizations, the Web interlinking this wealth of information becomes more complex. This is particularly true for the so-called Web of Data, in which data is semantically enriched and interlinked using ontologies. In this large and uncoordinated environment, reasoning can be used to check the consistency of the data and of associated ontologies, or to infer logical consequences which, in turn, can be used to obtain new insights from the data. However, reasoning approaches need to be scalable in order to enable reasoning over the entire Web of Data. To address this problem, several high-performance reasoning systems, which mainly implement distributed or parallel algorithms, have been proposed in the last few years. These systems differ significantly, for instance in terms of reasoning expressivity, computational properties such as completeness, or reasoning objectives. In order to provide a first complete overview of the field, this paper reports a systematic review of such scalable reasoning approaches over various ontological languages, detailing both the methods and the conducted experiments. We highlight the shortcomings of these approaches and discuss some of the open problems related to performing scalable reasoning.
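    As a toy example of the computation the surveyed systems scale up, the sketch below materialises the consequences of two RDFS rules (rdfs9 and rdfs11) by forward chaining to a fixpoint. The triples are hypothetical; real systems distribute or parallelise this loop over billions of triples.

```python
# Forward-chaining materialization of two RDFS rules until fixpoint,
# the kind of computation that scalable reasoners distribute.
# The triples below are hypothetical.

RDF_TYPE, SUBCLASS = 'rdf:type', 'rdfs:subClassOf'

triples = {
    ('ex:Rex', RDF_TYPE, 'ex:Dog'),
    ('ex:Dog', SUBCLASS, 'ex:Mammal'),
    ('ex:Mammal', SUBCLASS, 'ex:Animal'),
}

def materialize(kb):
    kb = set(kb)
    while True:
        inferred = set()
        for (s, p, o) in kb:
            for (s2, p2, o2) in kb:
                # rdfs11: subClassOf is transitive
                if p == p2 == SUBCLASS and o == s2:
                    inferred.add((s, SUBCLASS, o2))
                # rdfs9: instances inherit superclasses
                if p == RDF_TYPE and p2 == SUBCLASS and o == s2:
                    inferred.add((s, RDF_TYPE, o2))
        if inferred <= kb:        # fixpoint reached: nothing new
            return kb
        kb |= inferred

print(sorted(materialize(triples)))
# includes ('ex:Rex', 'rdf:type', 'ex:Animal') after two rounds
```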

    Inconsistent ontology diagnosis: framework and prototype

    Deliverable D3.6.1 (WP3.6). In this document, we present a framework for inconsistent ontology diagnosis and repair, defining a number of new non-standard reasoning services to explain inconsistencies through pinpointing. We developed two different types of algorithms for the framework, and we describe these algorithms in some detail. Both algorithms have been prototypically implemented, as the DION (Debugger of Inconsistent ONtologies) and MUPSter systems. The first implements a bottom-up approach that calculates pinpoints with the support of an external DL reasoner; the second uses a specialised tableau-based calculus.
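    The bottom-up, black-box style of the first algorithm can be sketched as follows: treat the external reasoner as an oracle and shrink an inconsistent axiom set to a minimal inconsistent subset by deletion. The oracle below is a stub over hypothetical axioms; DION's actual reasoner interface differs.

```python
# Black-box pinpointing in the spirit of a bottom-up algorithm: use the
# external reasoner as an oracle and shrink an inconsistent axiom set
# to a minimal inconsistent subset by deletion. The oracle is a stub
# over hypothetical axioms, not DION's actual interface.

def is_inconsistent(axioms):
    # Stub: pretend the reasoner reports inconsistency exactly when
    # both of these conflicting (hypothetical) axioms are present.
    return {'Bird subClassOf Flies',
            'Penguin(tweety) and not Flies(tweety)'} <= set(axioms)

def shrink_to_core(axioms):
    """Deletion-based minimisation: drop an axiom whenever the rest stays inconsistent."""
    assert is_inconsistent(axioms)
    core = list(axioms)
    for ax in list(core):
        trial = [a for a in core if a != ax]
        if is_inconsistent(trial):
            core = trial
    return core  # minimal: removing any remaining axiom restores consistency

kb = ['Bird subClassOf Flies',
      'Penguin subClassOf Bird',
      'Penguin(tweety) and not Flies(tweety)',
      'Dog subClassOf Mammal']
print(shrink_to_core(kb))
# ['Bird subClassOf Flies', 'Penguin(tweety) and not Flies(tweety)']
```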

    Axiom Pinpointing

    Axiom pinpointing refers to the task of finding the specific axioms in an ontology which are responsible for a consequence to follow. This task has been studied, under different names, in many research areas, leading to a reformulation and reinvention of techniques. In this work, we present a general overview of axiom pinpointing, providing the basic notions, different approaches for solving it, and some variations and applications which have been considered in the literature. This should serve as a starting point for researchers interested in related problems, with an ample bibliography for delving deeper into the details.
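    One classic application of pinpointing output is computing repairs: a repair must remove at least one axiom from every justification (every minimal axiom set entailing the unwanted consequence), i.e. it is a hitting set of the justifications. A brute-force sketch follows, with hypothetical justifications.

```python
# From justifications to repairs: a repair must hit (intersect) every
# justification of the unwanted consequence. Brute force over subsets;
# the justifications below are hypothetical placeholders.

from itertools import combinations

justifications = [frozenset({'ax1', 'ax2'}),
                  frozenset({'ax2', 'ax3'}),
                  frozenset({'ax4'})]

def minimum_repairs(justs):
    """Enumerate the smallest axiom sets that hit every justification."""
    axioms = sorted(set().union(*justs))
    for k in range(1, len(axioms) + 1):
        hits = [cand for cand in combinations(axioms, k)
                if all(set(cand) & j for j in justs)]
        if hits:
            return hits  # stop at the smallest cardinality that works
    return []

print(minimum_repairs(justifications))  # [('ax2', 'ax4')]
```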