    Enhanced Maintenance and Explanation of Expert Systems Through Explicit Models of Their Development

    Using fuzzy logic to integrate neural networks and knowledge-based systems

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used to integrate and interpret the outputs of neural networks. The symbolic system captures meta-level information about the neural networks and defines its interaction with them through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, in turn, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at the system level through fuzzy logic, the author's approach alleviates current difficulties in reconciling the differences between the low-level data processing mechanisms of neural nets and those of symbolic artificial intelligence systems.
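    The sketch below is not from the paper; it only illustrates the kind of integration the abstract describes, under the assumption that fuzzy membership functions interpret a neural network's output confidence and a fuzzy action rule decides when the symbolic controller should request retraining. The function names, thresholds, and action labels are illustrative.

        # Illustrative sketch only: fuzzy rules interpreting a neural network's
        # output confidence and triggering a symbolic control action.

        def triangular(x, a, b, c):
            """Triangular fuzzy membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def confidence_is_low(conf):
            # Degree to which a network confidence score counts as "low".
            return triangular(conf, -0.01, 0.0, 0.6)

        def confidence_is_high(conf):
            return triangular(conf, 0.4, 1.0, 1.01)

        def control_action(network_confidence):
            """Fuzzy action rule: IF confidence is low THEN request retraining."""
            low = confidence_is_low(network_confidence)
            high = confidence_is_high(network_confidence)
            if low > high:
                return ("request_retraining", low)
            return ("accept_classification", high)

        print(control_action(0.35))  # weak output  -> ('request_retraining', ...)
        print(control_action(0.90))  # strong output -> ('accept_classification', ...)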

    An analysis of the requirements traceability problem

    In this paper, we investigate and discuss the underlying nature of the requirements traceability problem. Our work is based on empirical studies involving over 100 practitioners and an evaluation of current support. We introduce the distinction between pre-requirements specification (pre-RS) traceability and post-requirements specification (post-RS) traceability to demonstrate why an all-encompassing solution to the problem is unlikely, and to provide a framework through which to understand its multifaceted nature. We report how the majority of the problems attributed to poor requirements traceability are due to inadequate pre-RS traceability and show the fundamental need for improvements here. In the remainder of the paper, we present an analysis of the main barriers confronting such improvements in practice, identify relevant areas in which advances have been (or can be) made, and make recommendations for research.
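    As a reading aid (not from the paper), the pre-RS/post-RS distinction can be pictured as two separate sets of traceability links kept for each requirement: links back to its sources and links forward to its realisations. The field names and sample data below are illustrative assumptions.

        # Illustrative sketch only: pre-RS links point back to a requirement's
        # origins; post-RS links point forward to design, code, and tests.

        from dataclasses import dataclass, field

        @dataclass
        class Requirement:
            rid: str
            text: str
            sources: list = field(default_factory=list)       # pre-RS traceability
            realisations: list = field(default_factory=list)  # post-RS traceability

        req = Requirement(
            rid="R-042",
            text="The system shall log every failed login attempt.",
            sources=["interview: security officer", "standard: ISO 27001 A.9"],
            realisations=["design: AuthAuditModule", "test: TC-117"],
        )

        # Inadequate pre-RS traceability shows up as requirements with no sources.
        print([r.rid for r in [req] if not r.sources])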

    'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions

    Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles matter to justice perceptions primarily when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.
    Comment: 14 pages, 3 figures, ACM Conference on Human Factors in Computing Systems (CHI '18), April 21-26, Montreal, Canada

    Explainable expert systems: A research program in information processing

    Our work in Explainable Expert Systems (EES) had two goals: to extend and enhance the range of explanations that expert systems can offer, and to ease their maintenance and evolution. As suggested in our proposal, these goals are complementary because they place similar demands on the underlying architecture of the expert system: both require the knowledge contained in a system to be explicitly represented, in a high-level declarative language and in a modular fashion. With these two goals in mind, the Explainable Expert Systems (EES) framework was designed to remedy the limitations on explainability and evolvability that stem from related, fundamental flaws in the underlying architecture of current expert systems.
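    To make the architectural point concrete, the sketch below (not taken from the EES system itself) shows a rule held in an explicitly declarative, modular form, so that the same representation can drive both inference-style output and explanation. The rule content, field names, and rationale text are illustrative assumptions.

        # Illustrative sketch only: a declarative rule representation from which
        # an explanation can be generated directly, rather than being hand-coded.

        RULES = [
            {
                "name": "high-risk-patient",
                "if": ["age > 65", "smoker"],
                "then": "risk = high",
                # Design rationale stored alongside the rule so the explanation
                # facility can say why the rule exists, not just that it fired.
                "rationale": "clinical guidance links age and smoking to elevated risk",
            },
        ]

        def explain(rule_name):
            """Build a simple explanation from the declarative representation."""
            for rule in RULES:
                if rule["name"] == rule_name:
                    conditions = " and ".join(rule["if"])
                    return (f"Concluded '{rule['then']}' because {conditions}; "
                            f"rationale: {rule['rationale']}.")
            return "No such rule."

        print(explain("high-risk-patient"))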

    The Need for User Models in Generating Expert System Explanations

    An explanation facility is an important component of an expert system, but current systems have for the most part neglected the importance of tailoring a system's explanations to the user. This paper explores the role of user modeling in generating expert system explanations, making the claim that individualized user models are essential for producing good explanations when the system's users vary in their knowledge of the domain or in their goals, plans, and preferences. To make this argument, the paper first characterizes what an explanation, and in particular a good explanation, is, and then presents how knowledge about the user affects the various aspects of a good explanation. Individualized user models are not only important; they are also practical to obtain. A method for acquiring a model of the user's beliefs implicitly, by eavesdropping on the interaction between user and system, is presented, along with examples of how this information can be used to tailor an explanation.
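    The sketch below is not from the paper; it only illustrates the flavour of implicit user-model acquisition the abstract describes, under the assumption that terms the user employs in their questions are recorded as known and that a later explanation expands only the concepts the model does not yet contain. The domain terms, data structures, and tailoring rule are illustrative.

        # Illustrative sketch only: build a user model by observing the dialogue,
        # then tailor an explanation to what the user appears to know already.

        KNOWN_TERMS = ("gram stain", "culture site", "bacteremia")
        user_model = {"knows": set()}

        def observe(utterance):
            """Implicit acquisition: terms the user employs are assumed known."""
            for term in KNOWN_TERMS:
                if term in utterance.lower():
                    user_model["knows"].add(term)

        def explain(conclusion, supporting_concepts):
            """Expand only the concepts the user model does not yet contain."""
            parts = [conclusion]
            for concept, gloss in supporting_concepts.items():
                if concept not in user_model["knows"]:
                    parts.append(f"({concept}: {gloss})")
            return " ".join(parts)

        observe("Why does the gram stain matter here?")
        print(explain(
            "The organism is likely E. coli.",
            {"gram stain": "a test that classifies bacteria by cell-wall type",
             "culture site": "where the specimen was taken from"},
        ))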