    Concise Justifications Versus Detailed Proofs for Description Logic Entailments

    We discuss explanations in Description Logics (DLs), a family of logics used for knowledge representation. Initial work on explaining consequences in DLs focused on justifications, which are minimal subsets of axioms that entail the consequence. More recently, it was proposed that proofs can provide more detailed information about why a consequence follows. Moreover, several measures have been proposed to estimate the comprehensibility of justifications and proofs, for example their size or the complexity of their logical expressions. In this paper, we analyze the connection between these measures, e.g. whether small justifications necessarily give rise to small proofs. We use a dataset of DL proofs that was constructed last year based on the ontologies of the OWL Reasoner Evaluation 2015. We find that, in general, less complex justifications indeed correspond to less complex proofs, and discuss some exceptions to this rule.
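
    As a concrete illustration of the notion of justification discussed above, the sketch below shows the standard "shrinking" step for computing one inclusion-minimal justification. The entailment check entails(axioms, goal) is an assumed black-box reasoner call, not something defined in the paper; a measure such as justification size would simply count the axioms in the returned set.

        # Minimal sketch (Python): compute one justification by shrinking.
        # `entails(axioms, goal)` is an assumed black-box reasoner call.
        def shrink_to_justification(axioms, goal, entails):
            """Return an inclusion-minimal subset of `axioms` that still entails `goal`."""
            justification = list(axioms)
            for axiom in list(justification):
                candidate = [a for a in justification if a != axiom]
                if entails(candidate, goal):   # this axiom is not needed for the consequence
                    justification = candidate
            return set(justification)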

    Optimisation of Terminological Reasoning

    When reasoning in description, modal or temporal logics it is often useful to consider axioms representing universal truths in the domain of discourse. Reasoning with respect to an arbitrary set of axioms is hard, even for relatively inexpressive logics, and it is essential to deal with such axioms in an efficient manner if implemented systems are to be effective in real applications. This is particularly relevant to Description Logics, where subsumption reasoning with respect to a terminology is a fundamental problem. Two optimisation techniques that have proved to be particularly effective in dealing with terminologies are lazy unfolding and absorption. In this paper we seek to improve our theoretical understanding of these important techniques. We define a formal framework that allows the techniques to be precisely described, establish conditions under which they can be safely applied, and prove that, provided these conditions are respected, subsumption testing algorithms will still function correctly. These results are used to show that the procedures used in the FaCT system are correct and, moreover, to show how efficiency can be significantly improved, while still retaining the guarantee of correctness, by relaxing the safety conditions for absorption. An extended abstract of this report was submitted to the Seventh International Conference on Principles of Knowledge Representation and Reasoning (KR2000).
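
    To make these two techniques more concrete, here is a rough sketch of the absorption idea and the lazy unfolding it enables. The axiom representation and helper names are illustrative assumptions and do not reflect the FaCT implementation; in particular, the safety conditions studied in the paper restrict which axioms may actually be absorbed.

        # Rough sketch (Python): move general axioms whose left-hand side is a plain
        # concept name into an "unfoldable" part of the terminology, so that a tableau
        # algorithm can apply them lazily instead of adding a disjunction to every node.
        def absorb(general_axioms):
            unfoldable = {}   # concept name -> absorbed right-hand sides
            remaining = []    # axioms that could not be absorbed
            for lhs, rhs in general_axioms:   # each axiom read as: lhs is subsumed by rhs
                if isinstance(lhs, str):      # left-hand side is a concept name: absorb it
                    unfoldable.setdefault(lhs, []).append(rhs)
                else:
                    remaining.append((lhs, rhs))
            return unfoldable, remaining

        def lazily_unfold(node_label, unfoldable):
            # Lazy unfolding: absorbed constraints are added only when the
            # corresponding concept name actually occurs in a node label.
            added = {rhs for name in node_label for rhs in unfoldable.get(name, [])}
            return set(node_label) | added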

    Explaining Query Answers under Inconsistency-Tolerant Semantics over Description Logic Knowledge Bases (Extended Abstract)

    The problem of querying description logic (DL) knowledge bases (KBs) using database-style queries (in particular, conjunctive queries) has been a major focus of recent DL research. Since scalability is a key concern, much of the work has focused on lightweight DLs for which query answering can be performed in polynomial time w.r.t. the size of the ABox. The DL-Lite family of lightweight DLs [10] is especially popular due to the fact that query answering can be reduced, via query rewriting, to the problem of standard database query evaluation. Since the TBox is usually developed by experts and subject to extensive debugging, it is often reasonable to assume that its contents are correct. By contrast, the ABox is typically substantially larger and subject to frequent modifications, making errors almost inevitable. As such errors may render the KB inconsistent, several inconsistency-tolerant semantics have been introduced in order to provide meaningful answers to queries posed over inconsistent KBs. Arguably the most well-known is the AR semantics [17], inspired by work on consistent query answering in databases (cf. [4] for a survey). Query answering under AR semantics amounts to considering those answers (w.r.t. standard semantics) that can be obtained from every repair, the latter being defined as an inclusion-maximal subset of the ABox that is consistent with the TBox. A more cautious semantics, called IAR semantics, instead considers only the answers obtainable from the intersection of all repairs. The need to equip reasoning systems with explanation services is widely acknowledged by the DL community. Indeed, there have been numerous works on axiom pinpointing, in which the objective is to identify (minimal) subsets of a KB that entail a given TBox axiom (or ABox assertion).
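
    The repair-based definitions above can be made concrete with a brute-force sketch; it is meant only to illustrate the semantics (practical systems rely on query rewriting rather than enumeration), and the consistent/entails reasoner calls are assumed placeholders.

        # Brute-force sketch (Python) of AR and IAR query answering over an inconsistent KB.
        # `consistent(tbox, abox)` and `entails(tbox, abox, query)` are assumed reasoner calls.
        from itertools import combinations

        def repairs(tbox, abox, consistent):
            """All inclusion-maximal subsets of `abox` that are consistent with `tbox`."""
            found = []
            for k in range(len(abox), -1, -1):            # largest subsets first
                for subset in combinations(abox, k):
                    s = set(subset)
                    if consistent(tbox, s) and not any(s < r for r in found):
                        found.append(s)
            return found

        def ar_entailed(tbox, abox, query, consistent, entails):
            # AR semantics: the answer must hold in every repair.
            return all(entails(tbox, r, query) for r in repairs(tbox, abox, consistent))

        def iar_entailed(tbox, abox, query, consistent, entails):
            # IAR semantics: the answer must hold in the intersection of all repairs.
            reps = repairs(tbox, abox, consistent)
            core = set.intersection(*reps) if reps else set()
            return entails(tbox, core, query)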

    Why Not? Explaining Missing Entailments with Evee (Technical Report)

    Understanding logical entailments derived by a description logic reasoner is not always straightforward for ontology users. For this reason, various methods for explaining entailments using justifications and proofs have been developed and implemented as plug-ins for the ontology editor Protégé. However, when the user expects a missing consequence to hold, it is equally important to explain why it does not follow from the ontology. In this paper, we describe a new version of Evee, a Protégé plugin that now also provides explanations for missing consequences, via existing and new techniques based on abduction and counterexamples.
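
    As a rough illustration of the abduction-based part of such explanations (the actual techniques implemented in Evee are not reproduced here), the sketch below searches a fixed pool of candidate axioms for a smallest hypothesis that would make the missing consequence follow; the reasoner calls and the candidate pool are assumptions made for the example.

        # Naive abduction sketch (Python): find a smallest set of candidate axioms that,
        # added to the ontology, yields the missing consequence while staying consistent.
        from itertools import combinations

        def abduce(ontology, goal, candidates, entails, consistent):
            for k in range(1, len(candidates) + 1):
                for hypothesis in combinations(candidates, k):
                    extended = set(ontology) | set(hypothesis)
                    if consistent(extended) and entails(extended, goal):
                        return set(hypothesis)   # one smallest explanation for the missing entailment
            return None                          # goal not obtainable from the candidate pool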

    Persuasive Explanation of Reasoning Inferences on Dietary Data

    Explainable AI aims at building intelligent systems that are able to provide a clear, human-understandable justification of their decisions. This holds for both rule-based and data-driven methods. In the management of chronic diseases, the users of such systems are patients who must follow strict dietary rules to manage their condition. After receiving the reported food intake as input, the system performs reasoning to determine whether the user is following an unhealthy behaviour. The system then has to communicate this result in a clear and effective way; that is, the output message has to persuade the user to follow the right dietary rules. In this paper, we address the main challenges of building such systems: i) the natural language generation of messages that explain the inconsistency detected by the reasoner; ii) the effectiveness of such messages at persuading the users. Results show that the persuasive explanations are able to reduce users' unhealthy behaviours.
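
    A toy sketch of the two steps described above: a rule check over the reported intake, followed by template-based generation of a persuasive message. The dietary rule, threshold and wording are invented placeholders, not the rule base or generation model used in the paper.

        # Toy sketch (Python): detect a rule violation and verbalise it persuasively.
        DAILY_SUGAR_LIMIT_G = 25   # hypothetical limit, for illustration only

        def check_intake(intake_log):
            """Flag a violation if the logged intake exceeds the sugar limit."""
            total = sum(item["sugar_g"] for item in intake_log)
            if total > DAILY_SUGAR_LIMIT_G:
                return {"rule": "sugar_limit", "value": total, "limit": DAILY_SUGAR_LIMIT_G}
            return None

        def explain(violation):
            """Turn a detected violation into a persuasive, human-readable message."""
            if violation is None:
                return "Well done: today's intake respects your dietary plan."
            return ("Today you had {value} g of sugar, above your {limit} g limit. "
                    "Swapping one sweet snack for fruit tomorrow would keep you "
                    "within your plan.").format(**violation)

        print(explain(check_intake([{"food": "soda", "sugar_g": 35}])))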