18 research outputs found

    On the decomposition of tabular knowledge systems.

    Recently there has been growing interest in the decomposition of knowledge-based systems and decision tables. Much work in this area has adopted an informal approach. In this paper, we first formalize the notion of decomposition, and then study some interesting classes of decompositions. The proposed classification can be used to formulate design goals for mastering the decomposition of large decision tables into smaller components. Importantly, carrying out a decomposition eliminates redundant information from the knowledge base, thereby removing, right from the beginning, a possible source of inconsistency. This, in turn, makes subsequent verification and validation proceed more smoothly.
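
    This kind of decomposition can be made concrete with a toy example. The sketch below is ours, not the paper's formalism: a rule is a (conditions, action) pair, and splitting a table on one condition lets a branch collapse when the remaining conditions turn out to be redundant, which is exactly the redundancy elimination described above.

```python
# A minimal sketch of decision-table decomposition (illustrative
# representation and data, not the paper's formal definitions).

# One flat table: each rule is ({condition: value}, action).
RULES = [
    ({"weather": "rain", "budget": "low"},  "stay_home"),
    ({"weather": "rain", "budget": "high"}, "stay_home"),
    ({"weather": "sun",  "budget": "low"},  "picnic"),
    ({"weather": "sun",  "budget": "high"}, "restaurant"),
]

def decompose(rules, condition):
    """Split a table on one condition and drop it from the subtables.

    If every rule in a branch concludes the same action, the remaining
    conditions are redundant there and the branch collapses to a single
    rule -- redundant information leaves the knowledge base.
    """
    branches = {}
    for conds, action in rules:
        rest = {k: v for k, v in conds.items() if k != condition}
        branches.setdefault(conds[condition], []).append((rest, action))
    for value, subtable in branches.items():
        actions = {action for _, action in subtable}
        if len(actions) == 1:  # all rows agree: collapse this branch
            branches[value] = [({}, actions.pop())]
    return branches

for value, subtable in decompose(RULES, "weather").items():
    print(value, "->", subtable)
# rain -> [({}, 'stay_home')]   (the 'budget' condition was redundant here)
# sun  -> [({'budget': 'low'}, 'picnic'), ({'budget': 'high'}, 'restaurant')]
```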

    A tool-supported approach to inter-tabular verification.

    The use of decision tables to verify KBS has been advocated several times in the V&V literature. However, one of the main drawbacks of those systems is that they fail to detect anomalies which occur over rule chains. In a decision-table-based context, this means that anomalies which occur due to interactions between tables are neglected. These anomalies are called inter-tabular anomalies. In this paper we investigate an approach that deals with inter-tabular anomalies. One of the prerequisites for the approach was that it could be used by the knowledge engineer during the development of the KBS. This requires that the anomaly check can be performed on-line. As a result, the approach partly uses heuristics where exhaustive checks would be too inefficient. All detection facilities that will be described have been implemented in a table-based development tool called PROLOGA. The use of this tool is briefly illustrated. In addition, some experiences in verifying large knowledge bases are discussed.
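
    One concrete flavour of inter-tabular anomaly is a downstream rule that can never fire because no upstream table ever produces the value it tests. The sketch below shows that check on a toy representation of our own; it is not PROLOGA's actual data model.

```python
# Hedged sketch of one inter-tabular check: flag conditions in a
# downstream table that no upstream action can ever satisfy.

# Each table maps a condition it tests to the fact it asserts.
TABLE_RISK = {                  # upstream: classifies an applicant
    "low_income":  "risk=high",
    "high_income": "risk=low",
}
TABLE_LOAN = {                  # downstream: consumes the asserted fact
    "risk=low":    "approve",
    "risk=high":   "review",
    "risk=medium": "reject",    # anomalous: nothing asserts risk=medium
}

def unfirable_conditions(upstream, downstream):
    """Return downstream conditions unreachable via the upstream table."""
    producible = set(upstream.values())
    return [cond for cond in downstream if cond not in producible]

print(unfirable_conditions(TABLE_RISK, TABLE_LOAN))   # ['risk=medium']
```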

    The Verification of Temporal KBS: SPARSE - A Case Study in Power Systems

    In this paper we present VERITAS, a tool that focuses on knowledge maintenance, one of the most important processes during the development of KBS. The verification and validation (V&V) process is part of this wider maintenance process, in which an enterprise systematically gathers, organizes, shares, and analyzes knowledge to accomplish its goals and mission. The V&V process establishes whether the software requirements specifications have been correctly and completely fulfilled. The methodologies proposed in software engineering have proved inadequate for Knowledge Based Systems (KBS) validation and verification, since KBS present some particular characteristics. VERITAS is an automatic tool developed for KBS verification which is able to detect a large number of knowledge anomalies. It addresses many relevant aspects encountered in real applications, such as the use of rule-triggering selection mechanisms and temporal reasoning.
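
    As an illustration of why temporal reasoning complicates verification, consider rule chains whose time windows can never overlap. The sketch below is our own guess at one such anomaly class, not VERITAS's actual representation or checks.

```python
# Hedged sketch: a chain rule_a -> rule_b is dead if the window in which
# rule_a's conclusion holds never overlaps the window in which rule_b
# listens for it. (Toy power-system facts; illustrative only.)

RULES = {
    # name: (triggering fact, concluded fact, (valid_from, valid_to) seconds)
    "r1": ("breaker_trip", "line_fault", (0, 30)),
    "r2": ("line_fault",   "isolate",    (10, 60)),  # overlaps r1: fine
    "r3": ("line_fault",   "page_crew",  (40, 90)),  # never overlaps r1
}

def dead_chains(rules):
    """Find rule pairs chained by a fact whose time windows are disjoint."""
    anomalies = []
    for a, (_, fact_a, (start_a, end_a)) in rules.items():
        for b, (trigger_b, _, (start_b, end_b)) in rules.items():
            chained = fact_a == trigger_b
            disjoint = end_a < start_b or end_b < start_a
            if chained and disjoint:
                anomalies.append((a, b))
    return anomalies

print(dead_chains(RULES))   # [('r1', 'r3')]
```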

    A formal method for analyzing and integrating the rule-sets of multiple experts

    Although there has been a movement toward the use of multiple sources of knowledge for expert systems development, there are no formal methods to guide knowledge engineers in integrating these sources. Furthermore, approaches for dealing with problems of inaccuracy, inconsistency, and incompleteness are not widely discussed in the literature. This paper discusses a formal method for documenting, integrating and normalizing knowledge bases derived from different knowledge sources. A case study is used to demonstrate the effectiveness of the method.
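
    A first step such a method has to support is mechanical comparison of the experts' rule sets. The fragment below sketches that step under an illustrative representation of ours (condition string mapped to conclusion); the paper's documentation and normalization formalism is richer than this.

```python
# Hedged sketch: surface conflicts (same condition, different conclusion)
# and gaps (conditions only one expert covers) between two rule sets.

EXPERT_A = {"fever & rash": "measles", "fever & cough": "flu"}
EXPERT_B = {"fever & rash": "allergy", "headache":      "migraine"}

def compare(a, b):
    """Return (conflicts, one-sided coverage) for two expert rule sets."""
    conflicts = {c: (a[c], b[c]) for c in a.keys() & b.keys() if a[c] != b[c]}
    gaps = a.keys() ^ b.keys()          # covered by only one expert
    return conflicts, gaps

conflicts, gaps = compare(EXPERT_A, EXPERT_B)
print(conflicts)  # {'fever & rash': ('measles', 'allergy')} -> reconcile
print(gaps)       # {'fever & cough', 'headache'}            -> review
```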

    Verification and validation of knowledge-based systems with an example from site selection.

    In this paper, the verification and validation of Knowledge-Based Systems (KBS) using decision tables (DTs) is one of the central issues. It is illustrated using real-market data taken from industrial site selection problems. One of the main problems of KBS is that often a lot of anomalies remain after the knowledge has been elicited; as a consequence, the quality of the KBS degrades, and the system must be evaluated thoroughly. This evaluation consists mainly of two parts: verification and validation (V&V). To make the distinction between the two, the following phrase is regularly used: verification deals with 'building the system right', while validation involves 'building the right system'. In the context of DTs, it has been claimed from the early years of DT research onwards that DTs are very suited for V&V purposes. Therefore, it is explained how V&V of the modelled knowledge can be performed. In this respect, use is made of stated response modelling design techniques to select decision rules from a DT. The approach is illustrated using a case study dealing with the locational problem of a (petro)chemical company in a port environment. The KBS developed has been named Matisse, an acronym for Matching Algorithm, a Technique for Industrial Site Selection and Evaluation.
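
    The intra-table checks that make DTs so suited for verification are easy to state: every combination of condition values should be matched by exactly one rule (completeness and exclusivity). Below is a minimal sketch of both checks, with site-selection condition names of our own invention rather than Matisse's actual knowledge.

```python
# Hedged sketch of two classic decision-table verification checks:
# completeness (every case is covered) and exclusivity (no overlap).

from itertools import product

CONDITIONS = {"zoned_industrial": (True, False), "near_port": (True, False)}

# Each rule: ({condition: required value}, action); omitted = don't care.
RULES = [
    ({"zoned_industrial": True, "near_port": True}, "shortlist_site"),
    ({"zoned_industrial": False},                   "reject_site"),
    ({"near_port": True},                           "check_port_fees"),
]

def matches(rule_conds, case):
    return all(case[c] == v for c, v in rule_conds.items())

def verify(conditions, rules):
    names = list(conditions)
    for values in product(*(conditions[n] for n in names)):
        case = dict(zip(names, values))
        hits = [action for conds, action in rules if matches(conds, case)]
        if not hits:
            print("incomplete: no rule covers", case)
        elif len(hits) > 1:
            print("ambiguous:", case, "->", hits)

verify(CONDITIONS, RULES)
# ambiguous: {'zoned_industrial': True, 'near_port': True} -> ['shortlist_site', 'check_port_fees']
# incomplete: no rule covers {'zoned_industrial': True, 'near_port': False}
# ambiguous: {'zoned_industrial': False, 'near_port': True} -> ['reject_site', 'check_port_fees']
```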

    Fuzzy systems evaluation: The inference error approach

    Legal Means of Providing the Principle of Transparency of Artificial Intelligence

    Objective: to analyze current technological and legal theories in order to define the content of the transparency principle of artificial intelligence functioning from the viewpoint of legal regulation, the choice of applicable means of legal regulation, and the establishment of objective limits to legal intervention in the technological sphere through regulatory impact.
    Methods: the methodological basis of the research is the set of general scientific (analysis, synthesis, induction, deduction) and specific legal (historical-legal, formal-legal, comparative-legal) methods of scientific cognition.
    Results: the author critically analyzed the norms and proposals for normative formalization of the artificial intelligence transparency principle, given the impossibility of obtaining full technological transparency of artificial intelligence. Variants of managing algorithmic transparency and accountability are proposed for discussion, based on an analysis of the social, technical and regulatory problems created by algorithmic systems of artificial intelligence. It is argued that transparency is an indispensable condition for recognizing artificial intelligence as trustworthy, and that transparency and explainability of artificial intelligence technology are essential not only for personal data protection, but also in other situations of automated data processing, when, in order to make a decision, the technological data lacking in the input information are taken from open sources, including those not having the status of a personal data storage. It is proposed to legislatively stipulate an obligatory audit and to introduce a standard stipulating a compromise between the abilities and advantages of the technology, the accuracy and explainability of its results, and the rights of the participants in civil relations. Introducing certification of artificial intelligence models, obligatory for application, will resolve the questions of liability of the subjects obliged to apply such systems. In the context of the professional liability of professional subjects, such as doctors, military personnel, or corporate executives of a juridical person, the obligatory application of artificial intelligence should be restricted if sufficient transparency is not provided.
    Scientific novelty: the interdisciplinary character of the research made it possible to show the impossibility and groundlessness of requirements to completely disclose the source code or architecture of artificial intelligence models. The principle of artificial intelligence transparency may instead be satisfied through elaboration and provision of the right of the data subject, and of the subject to whom a decision made by automated data processing is addressed, to refuse automated data processing in decision-making, and the right to object to decisions made in such a way.
    Practical significance: the research responds to the actual absence of sufficient regulation of the principle of transparency of artificial intelligence and the results of its functioning, as well as of the content and features of implementing the right to explanation and the right to objection of the decision subject. The most fruitful way to establish trust in artificial intelligence is to recognize this technology as part of a complex sociotechnical system which mediates trust, and to improve the reliability of these systems. The main provisions and conclusions of the research can be used to improve the legal mechanism for providing transparency of artificial intelligence models applied in state governance and business.

    An overview of decision table literature.

    The present report contains an overview of the literature on decision tables since their origin. The goal is to analyze the dissemination of decision tables across different areas of knowledge, countries and languages, especially highlighting those that show the most interest in decision table use. In the first part, a description of the scope of the overview is given. Next, the classification results by topic are explained. An abstract and some keywords are included for each reference, normally provided by the authors; in some cases our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. Other examined topics are the theoretical or practical character of each document, as well as its country of origin and language. Finally, the main body of the paper consists of the ordered list of publications with abstract, classification and comments.

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords and a classification are provided; in some cases our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication) and the language of the document. After a description of the scope of the overview, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.