Justification Patterns for OWL DL Ontologies
For debugging OWL-DL ontologies, natural language explanations of inconsistencies and undesirable entailments are of great help. From such explanations, ontology developers can learn why an ontology gives rise to specific entailments. Unfortunately, commonly used tableaux-based reasoning services do not provide a basis for such explanations, since they rely on a refutation proof strategy and normalising transformations that are difficult for human ontology editors to understand. For this reason, we investigate the use of automatically generated justifications for entailments (i.e., minimal sets of axioms from the ontology that cause entailments to hold). We show that such justifications fall into a manageable number of patterns, which can be used as a basis for generating natural language explanations by associating each justification pattern with a rhetorical pattern in natural language.
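The justifications referred to here are minimal axiom subsets that preserve an entailment. As a rough illustration (not the authors' implementation), the sketch below computes one such justification with the classic expand-shrink strategy, assuming a black-box entailment checker entails(axioms, entailment); all names are hypothetical.

```python
# Hypothetical sketch: computing one justification (a minimal axiom subset that
# preserves an entailment) via expand-shrink. The reasoner is abstracted as
# entails(axioms, entailment) -> bool; axiom representation is left open.

def single_justification(ontology_axioms, entailment, entails):
    """Return one minimal subset of `ontology_axioms` from which `entailment` follows."""
    # Expansion: grow a candidate set until it entails the target.
    candidate = []
    for axiom in ontology_axioms:
        candidate.append(axiom)
        if entails(candidate, entailment):
            break
    else:
        raise ValueError("the ontology does not entail the target")

    # Contraction: drop axioms that are not needed for the entailment.
    justification = list(candidate)
    for axiom in candidate:
        trimmed = [a for a in justification if a is not axiom]
        if entails(trimmed, entailment):
            justification = trimmed
    return justification
```

The contraction pass re-checks entailment after each tentative removal, so the result is minimal with respect to set inclusion, which is the sense of "minimal sets of axioms" used in the abstract.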
Measuring the understandability of deduction rules for OWL
Debugging OWL ontologies can be aided with automated reasoners that generate entailments, including undesirable ones. This information is, however, only useful if developers understand why the entailments hold. To support domain experts (with limited knowledge of OWL), we are developing a system that explains, in English, why an entailment follows from an ontology. In planning such explanations, our system starts from a justification of the entailment and constructs a proof tree including intermediate statements that link the justification to the entailment. Proof trees are constructed from a set of intuitively plausible deduction rules. We here report on a study in which we collected empirical frequency data on the understandability of the deduction rules, resulting in a facility index for each rule. This measure forms the basis for making a principled choice among alternative explanations, and identifying steps in the explanation that are likely to require extra elucidation.
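To make the planning step more concrete, here is a small sketch, under assumptions of this rewrite rather than taken from the paper, of forward-chaining from a justification to the entailment with deduction rules that each carry a facility index; the Rule and ProofStep structures and the scoring function are invented for illustration.

```python
# Illustrative sketch only: linking a justification to an entailment through
# intermediate statements, where each deduction rule carries an empirically
# measured facility index. Rule names, statements and indices are hypothetical.

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    premises: frozenset      # statements the rule consumes
    conclusion: str          # statement the rule produces
    facility: float          # proportion of subjects who understood this step

@dataclass
class ProofStep:
    rule: Rule
    inputs: tuple

def build_proof(justification, entailment, rules):
    """Forward-chain from the justification until the entailment is derived."""
    known = set(justification)
    steps = []
    progress = True
    while entailment not in known and progress:
        progress = False
        for rule in rules:
            if rule.premises <= known and rule.conclusion not in known:
                known.add(rule.conclusion)
                steps.append(ProofStep(rule, tuple(sorted(rule.premises))))
                progress = True
    return steps if entailment in known else None

def weakest_step(steps):
    """The least understandable step; a candidate for extra elucidation."""
    return min(steps, key=lambda s: s.rule.facility)
```

Given several alternative proof trees, one could, for instance, prefer the tree whose weakest step has the highest facility index; that is one way to read the "principled choice among alternative explanations" mentioned above.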
Explanation for defeasible entailment
Explanation facilities are an essential part of tools for knowledge representation and reasoning systems. Knowledge representation and reasoning systems allow users to capture information about the world and reason about it. They are useful for understanding entailments, which allow users to derive implicit knowledge that can be made explicit through inferences. Additionally, explanations assist users in debugging and repairing knowledge bases when conflicts arise. Understanding the conclusions drawn from logic-based systems is complex and requires expert knowledge, especially when defeasible knowledge bases are involved, for both expert and general users. A defeasible knowledge base represents statements that can be retracted because they refer to information for which there are exceptions to stated rules. That is, any defeasible statement is one that may be withdrawn upon learning of an exception. Explanations for classical logics, such as description logics, which are well-known formalisms for reasoning about information in a given domain, are provided through the notion of justifications. Simply listing the statements that are responsible for an entailment is enough to justify it in the classical case. However, in the defeasible case, where entailed statements can be retracted, this is not adequate, because entailment is performed in a more complicated way than in the classical case. In this dissertation, we combine explanations with a particular approach to dealing with defeasible reasoning. We provide an algorithm to compute justification-based explanations for defeasible knowledge bases. It is shown that, in order to accurately derive justifications for defeasible knowledge bases, we need to establish the point at which conflicts arise by using an algorithm that produces a ranking of defeasible statements. This means that only a portion of the knowledge is considered, because the statements that cause conflicts are discarded. The final algorithm consists of two parts: the first establishes the point at which the conflicts occur, and the second uses the information obtained from the first to compute justifications for defeasible knowledge bases.
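A minimal sketch of the ranking part of the two-part algorithm, assuming a rational-closure style notion of exceptionality: statements whose antecedent becomes unsatisfiable under the materialisation of the remaining statements are pushed to the next rank. The hypothetical classically_entails(statements, query) checker and the string encoding of statements stand in for a real reasoner and are not taken from the dissertation.

```python
# Sketch (assumed rational-closure style ranking, not the dissertation's code) of
# partitioning defeasible statements into ranks before computing justifications.
# classically_entails(statements, query) is a hypothetical classical reasoner.

def rank_defeasible(defeasible, classical, classically_entails):
    """Partition defeasible statements (antecedent, consequent) into ranks.

    A statement is pushed to the next rank when its antecedent is exceptional,
    i.e. its negation follows classically from the materialisation of the
    statements still under consideration together with the classical statements.
    """
    ranks = []
    current = list(defeasible)
    while current:
        materialised = [f"{a} -> {c}" for a, c in current] + list(classical)
        exceptional = [
            (a, c) for a, c in current
            if classically_entails(materialised, f"not {a}")
        ]
        if len(exceptional) == len(current):
            break  # the remaining statements are where the conflicts sit
        ranks.append([s for s in current if s not in exceptional])
        current = exceptional
    return ranks, current  # ordinary ranks, plus the conflicting residue
```

The second part of the algorithm described in the abstract would then compute justifications over the statements that survive this ranking, which is why only a portion of the knowledge base is considered.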