
Developing a computational framework for explanation generation in knowledge-based systems and its application in automated feature recognition

Abstract

A Knowledge-Based System (KBS) is essentially an intelligent computer system that explicitly or tacitly possesses a knowledge repository which helps the system solve problems. Research on building KBSs for industrial applications, to improve design quality and shorten research cycles, is attracting increasing interest. For early models, explainability was considered one of the major benefits of using KBSs, since most of them were rule-based systems and explanations could be generated from the rule traces of the reasoning process. As KBSs have developed, the notion of a knowledge base has become much more general than a set of rules, and the techniques used to solve problems in KBSs now go far beyond rule-based reasoning. Many Artificial Intelligence (AI) techniques have been introduced, such as neural networks and genetic algorithms. The effectiveness and efficiency of KBSs have thus improved, but as a trade-off their explainability has weakened. More and more KBSs are perceived as black-box systems that are not transparent to users, resulting in a loss of trust in them. Developing an explanation model for modern KBSs has a positive impact on user acceptance of the systems and the advice they provide. This thesis proposes a novel computational framework for explanation generation in KBSs. Unlike existing models, which are usually built inside a KBS and generate explanations based on the actual decision-making process, the explanation model in our framework stands outside the KBS and generates explanations by producing an alternative justification that is independent of the decision-making process actually used by the system. As a result, the knowledge and reasoning approaches in the explanation model can be optimized specifically for explanation generation, improving explanation quality. Another contribution of this study is that the system covers three types of explanation (where most existing models focus only on the first two): 1) decision explanation, which helps users understand how a KBS reached its conclusion; 2) domain explanation, which provides detailed descriptions of the concepts and relationships within the domain; 3) software diagnosis, which addresses user observations of unexpected system behaviour or relevant domain phenomena. The framework is demonstrated with a case of Automated Feature Recognition (AFR). The resulting explanatory system uses Semantic Web languages to implement a separate knowledge base solely for explanatory purposes, and integrates a novel reasoning approach for generating explanations. The system is tested with an industrial STEP file and delivers good-quality explanations for user queries about how a particular feature was recognized.
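To make the idea of a stand-alone explanatory knowledge base more concrete, the following minimal Python sketch (using rdflib) shows how a recognized feature could be linked to an explanatory justification held outside the recognizer's own reasoning trace, and how a decision explanation could be retrieved with a query. The vocabulary (ex:SlotFeature, ex:justifiedBy, ex:text) and the slot example are illustrative assumptions, not the ontology or reasoning approach developed in the thesis.

# Hypothetical sketch of an explanation-only knowledge base for AFR.
# Ontology terms and the slot example are assumptions for illustration.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/afr#")
g = Graph()
g.bind("ex", EX)

# Assert that a slot feature was recognized, and attach a justification
# produced independently of the recognizer's internal decision process.
g.add((EX.Slot_1, RDF.type, EX.SlotFeature))
g.add((EX.Slot_1, EX.justifiedBy, EX.Justification_1))
g.add((EX.Justification_1, EX.text,
       Literal("Two parallel planar faces joined by a perpendicular "
               "bottom face form a slot profile.")))

# Decision explanation: answer "how was this feature recognized?"
query = """
SELECT ?text WHERE {
  ex:Slot_1 ex:justifiedBy ?j .
  ?j ex:text ?text .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.text)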
