    Performance is not enough: the story told by a Rashomon quartet

    Predictive modelling is often reduced to finding the best model that optimizes a selected performance measure. But what if the second-best model describes the data in a completely different way? What about the third-best? Is it possible that equally effective models describe different relationships in the data? Inspired by Anscombe's quartet, this paper introduces the Rashomon quartet: four models built on a synthetic dataset that have practically identical predictive performance. However, visualizing them reveals distinct explanations of the relation between the input variables and the target variable. This illustrative example aims to encourage the use of visualization to compare predictive models beyond their performance.
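    The Rashomon effect described above can be reproduced in a few lines. The sketch below (my own minimal illustration, not the paper's quartet or its dataset) fits two linear models on strongly correlated features: their mean squared errors are practically identical, yet each model attributes the signal to a different variable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# two strongly correlated features; only x1 truly drives y
x1 = rng.normal(size=n)
x2 = x1 + 0.02 * rng.normal(size=n)
y = x1 + 0.1 * rng.normal(size=n)

# model A explains y via x1, model B via x2 (ordinary least squares)
X_a = np.column_stack([x1, np.ones(n)])
X_b = np.column_stack([x2, np.ones(n)])
coef_a, *_ = np.linalg.lstsq(X_a, y, rcond=None)
coef_b, *_ = np.linalg.lstsq(X_b, y, rcond=None)

mse_a = np.mean((X_a @ coef_a - y) ** 2)
mse_b = np.mean((X_b @ coef_b - y) ** 2)

# near-identical performance, but two different "stories" about the data
print(f"MSE A: {mse_a:.4f}  MSE B: {mse_b:.4f}")
print(f"A says x1 matters ({coef_a[0]:.2f}); B says x2 matters ({coef_b[0]:.2f})")
```

    Performance alone cannot distinguish the two models; only inspecting (or visualizing) what each model actually relies on reveals the disagreement, which is the point the quartet makes at a larger scale.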

    Explainable AI with counterfactual paths

    Explainable AI (XAI) is an increasingly important area of research in machine learning, which in principle aims to make black-box models transparent and interpretable. In this paper, we propose a novel approach to XAI that uses counterfactual paths generated by conditional permutations. Our method provides counterfactual explanations by identifying alternative paths that could have led to different outcomes. The proposed method is particularly suitable for generating explanations based on counterfactual paths in knowledge graphs. By examining hypothetical changes to the input data in the knowledge graph, we can systematically validate the behaviour of the model and examine the features or combinations of features that are most important to the model's predictions. Our approach provides a more intuitive and interpretable explanation of the model's behaviour than traditional feature-weighting methods and can help identify and mitigate biases in the model.
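    To make the path idea concrete, here is a minimal sketch of permutation-based path importance: a "path" is an ordered sequence of features, each permuted in turn, and the cumulative rise in error after each step attributes importance along the path. This is my own simplified illustration, not the authors' implementation; because the toy features below are independent, plain permutation stands in for the conditional permutations the paper uses.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "black-box" model: depends strongly on feature 0, weakly on 1, not at all on 2
def model(X):
    return 2.0 * X[:, 0] + 0.3 * X[:, 1]

n = 1000
X = rng.normal(size=(n, 3))
y = model(X)

def path_errors(X, y, path, rng):
    """Permute features along `path` in order; return the cumulative
    mean squared error of the model after each permutation step."""
    Xp = X.copy()
    errors = []
    for j in path:
        Xp[:, j] = rng.permutation(Xp[:, j])
        errors.append(np.mean((model(Xp) - y) ** 2))
    return errors

# walking the path [0, 1, 2]: most error appears after breaking feature 0,
# a little after feature 1, and none after the irrelevant feature 2
errs = path_errors(X, y, [0, 1, 2], rng)
print([f"{e:.3f}" for e in errs])
```

    Comparing different path orderings (e.g. `[1, 0, 2]` versus `[0, 1, 2]`) is what turns this into a path-based explanation rather than a single per-feature score.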