Generating Natural Language Explanations For Entailments In Ontologies
Building an error-free and high-quality ontology in OWL (Web Ontology Language)---the latest standard ontology language endorsed by the World Wide Web Consortium---is not an easy task for domain experts, who usually have limited knowledge of OWL and logic. One sign of an erroneous ontology is the occurrence of undesired inferences (or entailments), often caused by interactions among (apparently innocuous) axioms within the ontology. This suggests the need for a tool that allows developers to inspect why such an entailment follows from the ontology in order to debug and repair it.
This thesis aims to address the above problem by advancing knowledge and techniques in generating explanations for entailments in OWL ontologies. We build on earlier work on identifying minimal subsets of the ontology from which an entailment can be drawn---known technically as justifications. Our main focus is on planning (at a logical level) an explanation that links a justification (premises) to its entailment (conclusion); we also consider how best to express the explanation in English. Among other innovations, we propose a method for assessing the understandability of explanations, so that the easiest can be selected from a set of alternatives.
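The notion of a justification as a minimal entailing subset can be made concrete with a small sketch. Below is a minimal illustration (not the thesis' OWL machinery) of the classic deletion-based shrinking strategy: remove each axiom in turn and keep the removal whenever the entailment still follows. The toy `"X -> Y"` axiom format and the `entails` checker are illustrative assumptions.

```python
def entails(axioms, goal):
    """Toy entailment: forward-chain 'X -> Y' axioms over atomic facts."""
    facts = {a for a in axioms if "->" not in a}
    rules = [tuple(s.strip() for s in a.split("->")) for a in axioms if "->" in a]
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body in facts and head not in facts:
                facts.add(head)
                changed = True
    return goal in facts

def shrink_to_justification(axioms, goal):
    """Return one minimal subset of `axioms` that still entails `goal`."""
    assert entails(axioms, goal)
    just = list(axioms)
    for ax in list(just):
        without = [a for a in just if a != ax]
        if entails(without, goal):   # `ax` is not needed for this entailment
            just = without
    return just

ontology = ["Cat", "Cat -> Mammal", "Mammal -> Animal", "Dog -> Mammal"]
print(shrink_to_justification(ontology, "Animal"))
```

On this toy input, the irrelevant axiom `"Dog -> Mammal"` is discarded, leaving exactly the axioms from which the entailment is drawn.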
Our findings make a theoretical contribution to Natural Language Generation and Knowledge Representation. They could also play a practical role in improving the explanation facilities of ontology development tools, especially given the requirements of users who are not experts in OWL.
Planning accessible explanations for entailments in OWL ontologies
A useful enhancement of an NLG system for verbalising ontologies would be a module capable of explaining undesired entailments of the axioms encoded by the developer. This task raises interesting issues of content planning. One approach, useful as a baseline, is simply to list the subset of axioms relevant to inferring the entailment; however, in many cases it will still not be obvious, even to OWL experts, why the entailment follows. We suggest an approach in which further statements are added in order to construct a proof tree, with every step based on a relatively simple deduction rule of known difficulty; we also describe an empirical study through which the difficulty of these simple deduction patterns has been measured.
Measuring the understandability of deduction rules for OWL
Debugging OWL ontologies can be aided with automated reasoners that generate entailments, including undesirable ones. This information is, however, only useful if developers understand why the entailments hold. To support domain experts (with limited knowledge of OWL), we are developing a system that explains, in English, why an entailment follows from an ontology. In planning such explanations, our system starts from a justification of the entailment and constructs a proof tree including intermediate statements that link the justification to the entailment. Proof trees are constructed from a set of intuitively plausible deduction rules. We here report on a study in which we collected empirical frequency data on the understandability of the deduction rules, resulting in a facility index for each rule. This measure forms the basis for making a principled choice among alternative explanations, and identifying steps in the explanation that are likely to require extra elucidation.
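The principled choice among alternative explanations can be sketched as a search for the proof tree with the lowest summed step difficulty. The ground rules and their difficulty scores below are invented stand-ins for the empirically measured facility indexes, and the backward-chaining search is a simplification of the paper's proof-tree construction.

```python
RULES = [
    # (premises, conclusion, difficulty) -- toy subsumption-chaining rules
    ({"A sub B", "B sub C"}, "A sub C", 1.0),
    ({"A sub C", "C sub D"}, "A sub D", 1.0),
    ({"B sub C", "C sub D"}, "B sub D", 1.0),
    ({"A sub B", "B sub D"}, "A sub D", 3.0),  # a harder alternative step
]

def easiest_proof(goal, given, depth=5):
    """Return (total_difficulty, tree) for the cheapest proof of `goal`,
    or None if no proof exists within the depth bound."""
    if goal in given:
        return (0.0, goal)
    if depth == 0:
        return None
    best = None
    for premises, conclusion, difficulty in RULES:
        if conclusion != goal:
            continue
        cost, subtrees = difficulty, []
        for p in sorted(premises):
            sub = easiest_proof(p, given, depth - 1)
            if sub is None:
                break                 # this rule's premises are not provable
            cost += sub[0]
            subtrees.append(sub[1])
        else:                         # all premises proved
            if best is None or cost < best[0]:
                best = (cost, (goal, subtrees))
    return best

given = {"A sub B", "B sub C", "C sub D"}
cost, tree = easiest_proof("A sub D", given)
```

Here the two-step chain through the intermediate statement "A sub C" (total difficulty 2.0) is preferred over the alternative that uses the harder rule (total 4.0), mirroring how a facility index lets a planner select the easiest explanation.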
Algorithm for Adapting Cases Represented in a Tractable Description Logic
Case-based reasoning (CBR) based on description logics (DLs) has gained a lot of attention lately. Adaptation is a basic task in CBR inference that can be modeled as the knowledge base revision problem and solved in propositional logic. In DLs, however, it remains a challenging problem, since existing revision operators work well only for strictly restricted DLs of the DL-Lite family, and it is difficult to design a revision algorithm that is syntax-independent and fine-grained. In this paper, we present a new method for adaptation based on a tractable DL. Following the idea of adaptation as revision, we first extend the logical basis for describing cases from propositional logic to this DL, and present a formalism for adaptation based on it. We then present an adaptation algorithm for this formalism and demonstrate that it is syntax-independent and fine-grained. Our work provides a logical basis for adaptation in CBR systems where cases and domain knowledge are described by a tractable DL.
Comment: 21 pages. ICCBR 201
Axiom Pinpointing
Axiom pinpointing refers to the task of finding the specific axioms in an ontology that are responsible for a consequence to follow. This task has been studied, under different names, in many research areas, leading to a reformulation and reinvention of techniques. In this work, we present a general overview of axiom pinpointing, providing the basic notions, different approaches for solving it, and some variations and applications that have been considered in the literature. This should serve as a starting point for researchers interested in related problems, with an ample bibliography for delving deeper into the details.
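The simplest (and most expensive) approach to axiom pinpointing can be shown in a few lines: test subsets in increasing size and keep each minimal entailing subset (a MinA). This is a naive illustration only, exponential in the ontology size; the toy `entails` checker and the `"X -> Y"` axiom format are assumptions, and real systems use far better algorithms.

```python
from itertools import combinations

def entails(axioms, goal):
    """Toy checker: forward-chain 'X -> Y' axioms over atomic facts."""
    facts = {a for a in axioms if "->" not in a}
    rules = [tuple(s.strip() for s in a.split("->")) for a in axioms if "->" in a]
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body in facts and head not in facts:
                facts.add(head)
                changed = True
    return goal in facts

def all_justifications(axioms, goal):
    """Enumerate every minimal subset of `axioms` entailing `goal`."""
    found = []
    for k in range(1, len(axioms) + 1):
        for subset in map(set, combinations(axioms, k)):
            if any(mina <= subset for mina in found):
                continue      # a smaller justification is already inside it
            if entails(subset, goal):
                found.append(subset)
    return found

onto = ["Cat", "Cat -> Animal", "Cat -> Mammal", "Mammal -> Animal"]
print(all_justifications(onto, "Animal"))
```

On this toy ontology the consequence has two distinct justifications (a direct one and a two-step one), which is exactly the situation axiom pinpointing is designed to expose.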
Persuasive Explanation of Reasoning Inferences on Dietary Data
Explainable AI aims at building intelligent systems that are able to provide a clear, human-understandable justification of their decisions. This holds for both rule-based and data-driven methods. In the management of chronic diseases, the users of such systems are patients who follow strict dietary rules to manage their condition. After receiving the intake food as input, the system performs reasoning to determine whether the user is following an unhealthy behaviour. The system then has to communicate the result clearly and effectively; that is, the output message has to persuade the user to follow the right dietary rules. In this paper, we address the main challenges in building such systems: i) the natural language generation of messages that explain the reasoner's inconsistency; ii) the effectiveness of such messages at persuading users. Our results show that persuasive explanations are able to reduce users' unhealthy behaviours.
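A toy sketch in the spirit of this pipeline (not the paper's implementation): a dietary rule is checked against the day's intake, and the generated message names both the violated limit and the food that contributes most, making the explanation specific enough to be persuasive. All foods, sugar contents, and the 25 g limit are invented example values.

```python
DAILY_SUGAR_LIMIT_G = 25                       # hypothetical dietary rule
FOOD_SUGAR_G = {"soda": 35, "apple": 10, "chocolate bar": 24}

def explain_sugar_violation(intake):
    """Return a persuasive explanation message, or None if no violation."""
    total = sum(FOOD_SUGAR_G[food] for food in intake)
    if total <= DAILY_SUGAR_LIMIT_G:
        return None
    # Single out the largest contributor so the advice is actionable.
    worst = max(intake, key=FOOD_SUGAR_G.get)
    msg = (f"Today's sugar intake is {total} g, above the "
           f"{DAILY_SUGAR_LIMIT_G} g limit in your dietary plan; "
           f"'{worst}' alone contributes {FOOD_SUGAR_G[worst]} g.")
    if total - FOOD_SUGAR_G[worst] <= DAILY_SUGAR_LIMIT_G:
        msg += f" Skipping '{worst}' would keep you within the limit."
    return msg

print(explain_sugar_violation(["soda", "apple"]))
```

Pointing at the single rule that was violated, rather than just flagging the day as unhealthy, is the design choice that makes such a message an explanation rather than a bare alert.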