
    Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization

    One obstacle that has so far prevented the introduction of machine learning models, particularly in critical areas, is the lack of explainability. In this work, a practicable approach to gaining explainability of deep artificial neural networks (NN) using an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually leads to unsatisfactory results in terms of accuracy and fidelity. Using L1-orthogonal regularization during training, however, preserves the accuracy of the NN while allowing it to be closely approximated by small decision trees. Tests with different data sets confirm that L1-orthogonal regularization yields models of lower complexity and, at the same time, higher fidelity compared to other regularizers. Comment: 8 pages, 18th IEEE International Conference on Machine Learning and Applications (ICMLA) 2019
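
    As a rough illustration of the technique described above, the sketch below trains a small network with an L1 penalty plus an orthogonality penalty on its weight matrices, then fits a shallow decision-tree surrogate to the network's predictions and reports fidelity as the fraction of matching predictions. The architecture, loss weights, and data are placeholders and do not reproduce the paper's setup.

        # Minimal sketch (not the paper's exact setup): L1 + orthogonality
        # regularization of a small NN, followed by a decision-tree surrogate.
        import torch
        import torch.nn as nn
        from sklearn.datasets import make_classification
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_t = torch.tensor(X, dtype=torch.float32)
        y_t = torch.tensor(y, dtype=torch.long)

        model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        ce = nn.CrossEntropyLoss()
        lambda_l1, lambda_ortho = 1e-4, 1e-3  # assumed weights, not from the paper

        def regularizer(net):
            """L1 norm plus deviation of W^T W from the identity, per linear layer."""
            reg = 0.0
            for layer in net:
                if isinstance(layer, nn.Linear):
                    W = layer.weight
                    reg = reg + lambda_l1 * W.abs().sum()
                    eye = torch.eye(W.shape[1])
                    reg = reg + lambda_ortho * ((W.t() @ W - eye) ** 2).sum()
            return reg

        for _ in range(200):
            opt.zero_grad()
            loss = ce(model(X_t), y_t) + regularizer(model)
            loss.backward()
            opt.step()

        # Surrogate: a small decision tree fitted to the NN's predictions;
        # fidelity measures how closely the tree mimics the NN.
        with torch.no_grad():
            nn_pred = model(X_t).argmax(dim=1).numpy()
        tree = DecisionTreeClassifier(max_depth=4).fit(X, nn_pred)
        fidelity = (tree.predict(X) == nn_pred).mean()
        print(f"surrogate fidelity: {fidelity:.3f}")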

    Artificial Intelligence and Patient-Centered Decision-Making

    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim, black-box medicine is not conducive to supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.

    Empowering recommender systems using automatically generated Knowledge Graphs and Reinforcement Learning

    Personalized recommendations have a growing importance in direct marketing, which motivates research on enhancing customer experiences with knowledge graph (KG) applications. For example, in financial services, companies may benefit from providing relevant financial articles to their customers to cultivate relationships, foster client engagement and promote informed financial decisions. While several approaches center on KG-based recommender systems for improved content, in this study we focus on interpretable KG-based recommender systems for decision making. To this end, we present two knowledge graph-based approaches for personalized article recommendations for a set of customers of a large multinational financial services company. The first approach employs Reinforcement Learning and the second approach uses the XGBoost algorithm for recommending articles to the customers. Both approaches make use of a KG generated from both structured (tabular data) and unstructured data (a large body of text data). Using the Reinforcement Learning-based recommender system, we could leverage the graph traversal path leading to the recommendation as a way to generate interpretations (Path Directed Reasoning (PDR)). In the XGBoost-based approach, one can also provide explainable results using post-hoc methods such as SHAP (SHapley Additive exPlanations) and ELI5 (Explain Like I am Five). Importantly, our approach offers explainable results, promoting better decision-making. This study underscores the potential of combining advanced machine learning techniques with KG-driven insights to bolster experience in customer relationship management. Comment: Accepted at KDD (OARS) 2023 [https://oars-workshop.github.io/]
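
    The XGBoost branch with post-hoc SHAP explanations can be sketched roughly as below. The feature names and the synthetic training data are hypothetical stand-ins for the KG-derived customer and article features the study actually uses.

        # Hypothetical sketch: rank articles with a gradient-boosted model over
        # KG-derived features, then explain one recommendation with SHAP.
        import numpy as np
        import xgboost as xgb
        import shap

        rng = np.random.default_rng(0)
        feature_names = ["customer_segment", "topic_overlap", "past_clicks",
                         "article_recency", "kg_path_count"]  # assumed features
        X = rng.random((500, len(feature_names)))
        y = (X[:, 1] + 0.5 * X[:, 4] + 0.1 * rng.standard_normal(500) > 0.8).astype(int)

        model = xgb.XGBClassifier(n_estimators=100, max_depth=4)
        model.fit(X, y)

        # Post-hoc explanation for a single customer-article pair.
        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X[:1])
        for name, contrib in zip(feature_names, np.ravel(shap_values)):
            print(f"{name:>18s}: {contrib:+.3f}")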

    Reinforced Path Reasoning for Counterfactual Explainable Recommendation

    Counterfactual explanations interpret the recommendation mechanism by exploring how minimal alterations to items or users affect the recommendation decisions. Existing counterfactual explainable approaches face a huge search space, and their explanations are either action-based (e.g., user clicks) or aspect-based (i.e., item descriptions). We believe item attribute-based explanations are more intuitive and persuasive for users since they explain through fine-grained item attributes (e.g., brand). Moreover, counterfactual explanations can enhance recommendations by filtering out negative items. In this work, we propose a novel Counterfactual Explainable Recommendation framework (CERec) that generates item attribute-based counterfactual explanations while boosting recommendation performance. CERec optimizes an explanation policy by uniformly searching candidate counterfactuals within a reinforcement learning environment. We reduce the huge search space with an adaptive path sampler that uses the rich context information of a given knowledge graph. We also deploy the explanation policy in a recommendation model to enhance the recommendation. Extensive explainability and recommendation evaluations demonstrate CERec's ability to provide explanations consistent with user preferences and to maintain improved recommendations. We release our code at https://github.com/Chrystalii/CERec
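
    The attribute-based counterfactual idea can be illustrated with a deliberately simplified sketch: for a toy scoring model, find the single item attribute whose removal most reduces the item's score, i.e. "had the item lacked this attribute, it would not have been recommended". This is not the CERec algorithm itself, which learns a reinforcement-learning policy over a knowledge graph instead of brute-forcing attribute flips.

        # Toy item attribute-based counterfactual (illustration only, not CERec).
        import numpy as np

        rng = np.random.default_rng(1)
        n_attrs = 8                                    # e.g. brand, category, price band
        user_pref = rng.normal(size=n_attrs)           # toy user preference vector
        item_attrs = rng.integers(0, 2, size=n_attrs)  # binary item attributes

        def score(user, attrs):
            """Toy relevance score: dot product of preferences and attributes."""
            return float(user @ attrs)

        base = score(user_pref, item_attrs)
        best_attr, best_drop = None, 0.0
        for a in np.flatnonzero(item_attrs):           # only attributes the item has
            altered = item_attrs.copy()
            altered[a] = 0                             # counterfactual: drop attribute a
            drop = base - score(user_pref, altered)
            if drop > best_drop:
                best_attr, best_drop = a, drop

        print(f"attribute {best_attr} explains the recommendation "
              f"(score drops by {best_drop:.2f} without it)")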

    KnAC: an approach for enhancing cluster analysis with background knowledge and explanations

    Pattern discovery in multidimensional data sets has been the subject of research for decades. There exists a wide spectrum of clustering algorithms that can be used for this purpose. However, their practical applications share a common post-clustering phase, which concerns expert-based interpretation and analysis of the obtained results. We argue that this can be the bottleneck in the process, especially in cases where domain knowledge exists prior to clustering. Such a situation requires not only a proper analysis of automatically discovered clusters but also conformance checking with existing knowledge. In this work, we present Knowledge Augmented Clustering (KnAC). Its main goal is to confront expert-based labelling with automated clustering for the sake of updating and refining the former. Our solution is not restricted to any existing clustering algorithm. Instead, KnAC can serve as an augmentation of an arbitrary clustering algorithm, making the approach robust and a model-agnostic improvement of any state-of-the-art clustering method. We demonstrate the feasibility of our method on artificial, reproducible examples and in a real-life use-case scenario. In both cases, we achieved better results than classic clustering algorithms without augmentation. Comment: Accepted to Applied Intelligence
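
    The kind of conformance check that KnAC builds on can be sketched as below (this is not the KnAC method itself): cross-tabulating expert labels against automatically discovered clusters, together with an agreement score, highlights expert classes that the clustering suggests splitting or merging. The data and cluster counts are toy choices.

        # Toy conformance check between expert labels and discovered clusters.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs
        from sklearn.metrics import adjusted_rand_score

        # 4 true groups, but the "expert" only distinguishes 3 classes.
        X, true_groups = make_blobs(n_samples=600, centers=4, random_state=0)
        expert_labels = np.minimum(true_groups, 2)  # expert merges groups 2 and 3

        clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
        print("agreement (ARI):", round(adjusted_rand_score(expert_labels, clusters), 3))

        # Contingency table: rows = expert classes, columns = discovered clusters.
        table = np.zeros((expert_labels.max() + 1, clusters.max() + 1), dtype=int)
        for e, c in zip(expert_labels, clusters):
            table[e, c] += 1
        print(table)  # an expert class spread over two columns suggests a split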