
    Counterfactual explanations for student outcome prediction with Moodle footprints.

    Counterfactual explanations focus on “actionable knowledge” to help end-users understand how a machine learning outcome could be changed to one that is more desirable. For this purpose, a counterfactual explainer needs to be able to reason with similarity knowledge in order to discover input dependencies that relate to outcome changes. Identifying the minimum subset of feature changes needed to bring about a change in the decision is an interesting challenge for counterfactual explainers. In this paper we show how feature relevance-based explainers (such as LIME) can be combined with a counterfactual explainer to identify the minimum subset of “actionable features”. We demonstrate our hybrid approach on a real-world use case of student outcome prediction using data from the Campus Moodle Virtual Learning Environment. Our preliminary results demonstrate that counterfactual feature weighting is a viable strategy for minimising the number of actionable changes.
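
    A minimal sketch of the hybrid idea described above, assuming feature-relevance weights (for example from LIME) and a nearest-unlike neighbour (NUN) are already available; the greedy substitution loop and function names are illustrative assumptions rather than the paper's exact algorithm:

    ```python
    import numpy as np

    def minimal_actionable_changes(query, nun, feature_weights, predict_fn):
        """Greedily copy feature values from the NUN into the query, most
        relevant feature first, until the model's decision flips; return
        the indices of the features that were changed."""
        candidate = query.copy()
        target = predict_fn(nun)                              # the desired outcome
        changed = []
        for i in np.argsort(-np.abs(feature_weights)):        # descending relevance
            candidate[i] = nun[i]
            changed.append(int(i))
            if predict_fn(candidate) == target:               # decision has flipped
                break
        return changed

    # Toy usage with a linear threshold "model" standing in for the real classifier
    rng = np.random.default_rng(0)
    w = rng.normal(size=5)
    predict = lambda x: int(x @ w > 0)
    query, nun = rng.normal(size=5), rng.normal(size=5)
    if predict(query) != predict(nun):                        # only meaningful for an unlike pair
        print(minimal_actionable_changes(query, nun, w, predict))
    ```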

    iSee: intelligent sharing of explanation experiences.

    The right to an explanation of the decision reached by a machine learning (ML) model is now an EU regulation. However, different system stakeholders may have different background knowledge, competencies and goals, thus requiring different kinds of explanations. There is a growing armoury of explainable AI (XAI) methods, interpreting ML models and explaining their predictions, recommendations and diagnoses. We refer to these collectively as "explanation strategies". As these explanation strategies mature, practitioners gain experience in understanding which strategies to deploy in different circumstances. What is lacking, and what the iSee project will address, is the science and technology for capturing, sharing and re-using explanation strategies based on similar user experiences, along with a much-needed route to XAI compliance. Our vision is to improve every user's experience of AI by harnessing experiences of best practice in XAI and providing an interactive environment where personalised explanation experiences are accessible to everyone. Video Link: https://youtu.be/81O6-q_yx0

    Mitigating gradient inversion attacks in federated learning with frequency transformation.

    Centralised machine learning approaches have raised concerns regarding the privacy of client data. To address this issue, privacy-preserving techniques such as Federated Learning (FL) have emerged, where only updated gradients are communicated instead of the raw client data. However, recent advances in security research have revealed vulnerabilities in this approach, demonstrating that gradients can be targeted and reconstructed, compromising the privacy of local instances. Such attacks, known as gradient inversion attacks, include techniques like Deep Leakage from Gradients (DLG). In this work, we explore the implications of gradient inversion attacks in FL and propose a novel defence mechanism, called Pruned Frequency-based Gradient Defence (pFGD), to mitigate these risks. Our defence strategy combines frequency transformation, using techniques such as the Discrete Cosine Transform (DCT), with pruning of the gradients to enhance privacy preservation. In this study, we perform a series of experiments on the MNIST dataset to evaluate the effectiveness of pFGD in defending against gradient inversion attacks. Our results clearly demonstrate the resilience and robustness of pFGD to gradient inversion attacks. The findings stress the need for strong privacy-preserving techniques to counter such attacks and protect client data.
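
    A rough sketch in the spirit of the frequency-based defence described above; the pruning criterion (dropping the smallest-magnitude DCT coefficients) and the pruning ratio are assumptions made for illustration, not necessarily the exact pFGD procedure:

    ```python
    import numpy as np
    from scipy.fft import dct, idct

    def frequency_pruned_gradient(grad, prune_ratio=0.8):
        """Transform a gradient tensor with the DCT, zero out the
        smallest-magnitude frequency coefficients, and invert the
        transform before the gradient is shared with the server."""
        coeffs = dct(grad.ravel(), norm="ortho")
        k = int(len(coeffs) * prune_ratio)                 # number of coefficients to drop
        if k > 0:
            coeffs[np.argsort(np.abs(coeffs))[:k]] = 0.0   # prune low-magnitude frequencies
        return idct(coeffs, norm="ortho").reshape(grad.shape)

    # Example: defend one layer's (simulated) gradient before communication
    layer_grad = np.random.default_rng(1).normal(size=(32, 16))
    shared_grad = frequency_pruned_gradient(layer_grad, prune_ratio=0.8)
    ```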

    FedSim: similarity guided model aggregation for federated learning.

    Federated Learning (FL) is a distributed machine learning approach in which clients contribute to learning a global model in a privacy-preserving manner. Effective aggregation of client models is essential to create a generalised global model. The extent to which a client is generalisable and contributes to this aggregation can be ascertained by analysing inter-client relationships. We use similarity between clients to model such relationships. We explore how similarity knowledge can be inferred by comparing client gradients, instead of inferring similarity from client data, which would violate the privacy-preserving constraint in FL. The similarity-guided FedSim algorithm, introduced in this paper, decomposes FL aggregation into local and global steps. Clients with similar gradients are clustered to provide local aggregations, which are thereafter aggregated globally to ensure better coverage whilst reducing variance. Our comparative study investigates the applicability of FedSim both on real-world datasets and on synthetic datasets where statistical heterogeneity can be controlled and studied systematically. Comparison with the state-of-the-art FL baselines FedAvg and FedProx clearly shows significant performance gains. Our findings confirm that by exploiting latent inter-client similarities, FedSim’s performance is significantly better and more stable than both these baselines.
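
    A simplified sketch of similarity-guided aggregation in the spirit of FedSim; the use of k-means over flattened gradients and the fixed cluster count are assumptions made for illustration, not the paper's exact clustering scheme:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def similarity_guided_aggregate(client_grads, n_clusters=3):
        """Cluster clients by their flattened gradients, average within each
        cluster (local aggregation), then average the cluster means (global
        aggregation) to form the update applied to the global model."""
        X = np.stack([g.ravel() for g in client_grads])
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
        local = [X[labels == c].mean(axis=0) for c in range(n_clusters)]  # local step
        return np.mean(local, axis=0).reshape(client_grads[0].shape)      # global step

    # Example with ten synthetic client gradients for a single 4x4 weight matrix
    grads = [np.random.default_rng(i).normal(size=(4, 4)) for i in range(10)]
    global_update = similarity_guided_aggregate(grads)
    ```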

    Failure-driven transformational case reuse of explanation strategies in CloodCBR.

    In this paper, we propose a novel approach to improve problem-solving efficiency through the reuse of case solutions. Specifically, we introduce the concept of failure-driven transformational case reuse of explanation strategies, which involves transforming suboptimal solutions using relevant components from nearest neighbours in sparse case bases. To represent these explanation strategies, we use behaviour trees and demonstrate their usefulness in solving similar problems. Our approach uses failures as a starting point for generating new solutions, analysing the causes of, and contributing factors to, the failure. From this analysis, new solutions are generated through a nearest-neighbour-based transformation of previous solutions, resulting in solutions that address the failure. We compare different approaches for reusing the solutions of nearest neighbours and empirically evaluate whether the transformed solutions meet the required explanation intents. Our proposed approach has the potential to significantly improve problem-solving efficiency in sparse case bases with complex case solutions.
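
    A toy sketch of the transformational reuse step described above; the dictionary representation of an explanation strategy, the similarity callback and the helper names are assumptions made for illustration, standing in for the paper's behaviour-tree machinery:

    ```python
    def transform_failed_solution(failed_case, neighbours, failed_step, similarity):
        """Replace the sub-strategy identified as the failure point with the
        corresponding sub-strategy from the most similar neighbouring case.
        A solution is modelled here as a dict mapping explanation-step names
        to behaviour-tree fragments (any serialisable representation works)."""
        donors = [n for n in neighbours if failed_step in n["solution"]]
        if not donors:
            return dict(failed_case["solution"])               # nothing suitable to reuse
        best = max(donors, key=lambda n: similarity(failed_case["problem"], n["problem"]))
        repaired = dict(failed_case["solution"])
        repaired[failed_step] = best["solution"][failed_step]  # swap in the neighbour's subtree
        return repaired

    # Toy usage: Jaccard similarity over sets of problem descriptors
    jaccard = lambda a, b: len(a & b) / len(a | b)
    failed = {"problem": {"image", "novice"}, "solution": {"explain": "LIME-text-tree"}}
    nbrs = [{"problem": {"image", "expert"}, "solution": {"explain": "saliency-map-tree"}}]
    print(transform_failed_solution(failed, nbrs, "explain", jaccard))
    ```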

    How close is too close? Role of feature attributions in discovering counterfactual explanations.

    Counterfactual explanations describe how an outcome can be changed to a more desirable one. In XAI, counterfactuals are "actionable" explanations that help users to understand how model decisions can be changed by adapting features of an input. A case-based approach to counterfactual discovery harnesses Nearest-Unlike Neighbours (NUNs) as the basis for identifying the minimal adaptations needed for outcome change. This paper presents the DisCERN algorithm, which uses the query, its NUN and substitution-based adaptation operations to create a counterfactual explanation case. DisCERN uses feature attribution as adaptation knowledge to order substitution operations and to bring about the desired outcome with as few changes as possible. We find that our novel approach, which uses the NUN as the baseline against which Integrated Gradients feature attributions are calculated, outperforms other techniques such as LIME and SHAP. DisCERN also uses feature attributions to bring the NUN closer, which further minimises the total change needed, although the number of feature changes can increase. Overall, DisCERN outperforms other counterfactual algorithms such as DiCE and NICE in generating valid counterfactuals with fewer adaptations.
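
    A small sketch of the attribution step described above, computing Integrated Gradients with the NUN as the baseline; the closed-form gradient callback in the usage example is an assumption made so the snippet runs without a trained model:

    ```python
    import numpy as np

    def integrated_gradients_with_nun(grad_fn, query, nun, steps=50):
        """Approximate Integrated Gradients along the straight-line path from
        the NUN (used as the baseline) to the query: attribution_i equals
        (query_i - nun_i) times the mean of df/dx_i evaluated along the path."""
        alphas = np.linspace(0.0, 1.0, steps)
        path_grads = np.stack([grad_fn(nun + a * (query - nun)) for a in alphas])
        return (query - nun) * path_grads.mean(axis=0)

    # Toy usage with a logistic "model" whose gradient is known in closed form
    w = np.array([0.5, -1.2, 0.8])
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    grad_fn = lambda x: sigmoid(x @ w) * (1.0 - sigmoid(x @ w)) * w   # dp/dx
    query, nun = np.array([1.0, 0.2, -0.5]), np.array([-0.3, 1.0, 0.4])
    attributions = integrated_gradients_with_nun(grad_fn, query, nun)
    ```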