
    Twin‐engined diagnosis of discrete‐event systems

    Diagnosis of discrete-event systems (DESs) is computationally complex. For this reason, a variety of knowledge compilation techniques have been proposed, the most notable of which rely on a diagnoser. However, the construction of a diagnoser requires the generation of the whole system space, which makes the approach impractical even for DESs of moderate size. To avoid total knowledge compilation while preserving efficiency, a twin-engined diagnosis technique is proposed in this paper, inspired by the two operational modes of the human mind. If the symptom of the DES is part of the knowledge or experience of the diagnosis engine, then Engine 1 provides efficient diagnosis. If, instead, the symptom is unknown, then Engine 2 comes into play, which is far less efficient than Engine 1. Still, the experience acquired by Engine 2 is then integrated into the symptom dictionary of the DES. This way, if the same diagnosis problem arises anew, it will be solved by Engine 1 in linear time. The symptom dictionary can also be extended with specialized knowledge coming from scenarios, namely the most critical or probable behavioral patterns of the DES, which need to be diagnosed quickly.
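
    A minimal sketch of the twin-engine dispatch described in this abstract, assuming a Python encoding in which a symptom is a tuple of observed events and a diagnosis is a set of candidate faults; the class and method names (TwinEngineDiagnoser, preload_scenarios) and the plain-dictionary cache are illustrative choices, not the paper's actual data structures or complexity guarantees.

```python
from typing import Callable, Dict, FrozenSet, Tuple

Symptom = Tuple[str, ...]    # observed event sequence of the DES (assumed encoding)
Diagnosis = FrozenSet[str]   # set of candidate faults (assumed encoding)


class TwinEngineDiagnoser:
    def __init__(self, slow_engine: Callable[[Symptom], Diagnosis]):
        # Engine 2: expensive model-based diagnosis over the DES behavior space.
        self.slow_engine = slow_engine
        # Engine 1: the symptom dictionary built from experience and scenarios.
        self.symptom_dictionary: Dict[Symptom, Diagnosis] = {}

    def preload_scenarios(self, scenarios: Dict[Symptom, Diagnosis]) -> None:
        # Specialized knowledge: critical/probable behavioral patterns compiled
        # offline so that they are answered by Engine 1 immediately.
        self.symptom_dictionary.update(scenarios)

    def diagnose(self, symptom: Symptom) -> Diagnosis:
        # Engine 1: known symptom, answered directly from the dictionary.
        if symptom in self.symptom_dictionary:
            return self.symptom_dictionary[symptom]
        # Engine 2: unknown symptom, run the expensive engine, then integrate
        # the new experience into the dictionary for future problems.
        diagnosis = self.slow_engine(symptom)
        self.symptom_dictionary[symptom] = diagnosis
        return diagnosis


if __name__ == "__main__":
    # Toy Engine 2: pretend any symptom containing "timeout" implicates fault f1.
    def toy_engine2(symptom: Symptom) -> Diagnosis:
        return frozenset({"f1"}) if "timeout" in symptom else frozenset()

    diagnoser = TwinEngineDiagnoser(toy_engine2)
    print(diagnoser.diagnose(("open", "timeout")))  # first time: Engine 2, then cached
    print(diagnoser.diagnose(("open", "timeout")))  # second time: Engine 1 lookup
```

    Engine 1 here is an ordinary hash-map lookup (average constant time; the paper reports linear-time resolution via its symptom dictionary), while Engine 2 stands in for the expensive model-based diagnosis whose results are folded back into the dictionary, as the abstract describes.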

    Supporting High-Uncertainty Decisions through AI and Logic-Style Explanations

    A common criterion for Explainable AI (XAI) is to support users in establishing appropriate trust in the AI - rejecting advice when it is incorrect, and accepting advice when it is correct. Previous findings suggest that explanations can cause over-reliance on the AI (overly accepting advice). Explanations that evoke appropriate trust are even more challenging for decision-making tasks that are difficult for both humans and AI. For this reason, we study decision-making by non-experts in the high-uncertainty domain of stock trading. We compare the effectiveness of three explanation styles (influenced by inductive, abductive, and deductive reasoning) and the role of AI confidence in terms of a) the users' reliance on the XAI interface elements (charts with indicators, AI prediction, explanation), b) the correctness of the decision (task performance), and c) the agreement with the AI's prediction. In contrast to previous work, we look at interactions between different aspects of decision-making, including AI correctness, and the combined effects of AI confidence and explanation styles. Our results show that specific explanation styles (abductive and deductive) improve the user's task performance under high AI confidence compared to inductive explanations. In other words, these explanation styles were able to elicit correct decisions (for both positive and negative decisions) when the system was certain. In such a condition, the agreement between the user's decision and the AI prediction confirms this finding, highlighting a significant increase in agreement when the AI is correct. This suggests that both explanation styles are suitable for evoking appropriate trust in a confident AI. Our findings further indicate a need to consider AI confidence as a criterion for including or excluding explanations from AI interfaces. In addition, this paper highlights the importance of carefully selecting an explanation style according to the characteristics of the task and data.

    Fair Feature Importance Scores for Interpreting Tree-Based Methods and Surrogates

    Across various sectors such as healthcare, criminal justice, national security, finance, and technology, large-scale machine learning (ML) and artificial intelligence (AI) systems are being deployed to make critical data-driven decisions. Many have asked whether we can, and should, trust these ML systems to be making these decisions. Two critical components are prerequisites for trust in ML systems: interpretability, or the ability to understand why the ML system makes the decisions it does, and fairness, which ensures that ML systems do not exhibit bias against certain individuals or groups. Both interpretability and fairness are important and have separately received abundant attention in the ML literature, but so far, very few methods have been developed to directly interpret models with regard to their fairness. In this paper, we focus on arguably the most popular type of ML interpretation: feature importance scores. Inspired by the use of decision trees in knowledge distillation, we propose to leverage trees as interpretable surrogates for complex black-box ML models. Specifically, we develop a novel fair feature importance score for trees that can be used to interpret how each feature contributes to fairness or bias in trees, tree-based ensembles, or tree-based surrogates of any complex ML system. Like the popular mean decrease in impurity for trees, our Fair Feature Importance Score is defined based on the mean decrease (or increase) in group bias. Through simulations as well as real examples on benchmark fairness datasets, we demonstrate that our Fair Feature Importance Score offers valid interpretations for both tree-based ensembles and tree-based surrogates of other ML systems.
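
    A minimal sketch of the "mean decrease (or increase) in group bias" idea for a single fitted scikit-learn decision tree, assuming binary labels and a binary protected group. The bias measure (demographic-parity gap), the collapse-a-node-into-a-leaf bookkeeping, and the names parity_gap and fair_feature_importance are illustrative assumptions, not the authors' exact definition, which also covers tree ensembles and tree-based surrogates.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def parity_gap(y_pred, group):
    # |P(pred = 1 | group = 0) - P(pred = 1 | group = 1)|; 0 if a group is empty.
    g0, g1 = y_pred[group == 0], y_pred[group == 1]
    if len(g0) == 0 or len(g1) == 0:
        return 0.0
    return abs(g0.mean() - g1.mean())


def fair_feature_importance(tree, X, group):
    # For each internal node, compare the parity gap of the tree's predictions
    # with the gap obtained if that node were collapsed into a leaf, and credit
    # the signed difference to the node's splitting feature.
    t = tree.tree_
    node_pred = tree.classes_[np.argmax(t.value[:, 0, :], axis=1)]  # majority class per node
    reach = tree.decision_path(X).toarray().astype(bool)            # samples reaching each node
    full_pred = tree.predict(X)
    bias_with_split = parity_gap(full_pred, group)
    scores = np.zeros(X.shape[1])
    for node in range(t.node_count):
        if t.children_left[node] == -1:            # leaf: nothing to attribute
            continue
        collapsed = full_pred.copy()
        collapsed[reach[:, node]] = node_pred[node]  # pretend the node is a leaf
        bias_without_split = parity_gap(collapsed, group)
        # Positive = the split reduces group bias; negative = it adds bias.
        scores[t.feature[node]] += bias_without_split - bias_with_split
    return scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    group = (rng.random(500) < 0.5).astype(int)
    X[:, 1] += group                               # feature 1 proxies the protected group
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
    print(fair_feature_importance(clf, X, group))  # signed per-feature bias change
```

    The design mirrors mean decrease in impurity: each split's contribution is credited to its splitting feature, except that the quantity being decreased (or increased) is a group-bias measure rather than node impurity.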

    Synthesis report
