24,884 research outputs found

    The driving factors of corporate carbon emissions: An application of the LASSO model with survey data

    Corporate carbon performance is a key driver of corporate sustainability, and identifying the factors that influence corporate carbon emissions is fundamental to improving carbon performance. Based on the Carbon Disclosure Project (CDP) database, we integrate the least absolute shrinkage and selection operator (LASSO) regression model with the fixed-effect model to identify the determinants of carbon emissions, and we rank the determining factors according to their importance. We find that Capx enters the models under all carbon contexts. For Scope 1 and Scope 2, financial-level factors play the greater role; for Scope 3, corporate internal incentive policies and emission-reduction behaviors are important. Unlike absolute carbon emissions, for relative carbon emissions the debt-paying ability among the financial-level factors is a vital reference indicator of corporate carbon emissions.
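
    A minimal sketch of the two-stage pipeline described in this abstract, LASSO-based variable selection followed by a fixed-effect regression on the selected drivers, is shown below. The file name, column names (scope1, capx, leverage, ...), and the clustering choice are illustrative assumptions, not the CDP variables or exact specification used in the paper.

```python
# Hypothetical firm-year panel; column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("cdp_panel.csv")
candidates = ["capx", "leverage", "roa", "cash_ratio", "incentive_policy"]

# Stage 1: LASSO with a cross-validated penalty shrinks the candidate set.
X = StandardScaler().fit_transform(df[candidates])
lasso = LassoCV(cv=5).fit(X, df["scope1"])
selected = [c for c, b in zip(candidates, lasso.coef_) if abs(b) > 1e-6]

# Stage 2: fixed-effect regression on the selected drivers; firm dummies
# absorb time-invariant heterogeneity, with errors clustered by firm.
formula = "scope1 ~ " + " + ".join(selected) + " + C(firm_id)"
fe_model = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(fe_model.params[selected])
```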

    A Comparison of Explanations Given by Explainable Artificial Intelligence Methods on Analysing Electronic Health Records

    eXplainable Artificial Intelligence (XAI) aims to provide intelligible explanations to users. XAI algorithms such as SHAP, LIME and Scoped Rules compute feature importance for machine learning predictions. Although XAI has attracted much research attention, applying XAI techniques in healthcare to inform clinical decision making remains challenging. In this paper, we compare the explanations given by XAI methods as a tertiary extension in analysing complex Electronic Health Records (EHRs). Using a large-scale EHR dataset, we compare EHR features in terms of the prediction importance estimated by the XAI models. Our experimental results show that the studied XAI methods circumstantially generate different top features; the discrepancies in their shared feature importance merit further exploration by domain experts to evaluate human trust in XAI.
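
    As a rough illustration of how such a comparison can be set up, the sketch below ranks features by mean absolute SHAP value and by LIME weights for the same classifier and contrasts the top-5 lists. The synthetic data, the placeholder feature names, and the random-forest model are assumptions that stand in for the EHR dataset and models studied in the paper.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for an EHR cohort; feature names are placeholders.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
feature_names = [f"ehr_feature_{i}" for i in range(X.shape[1])]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global SHAP importance: mean absolute attribution per feature (class 1).
sv = shap.TreeExplainer(model).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv[:, :, 1]
shap_rank = np.argsort(np.abs(sv).mean(axis=0))[::-1]

# LIME importance for a single record (could be averaged over records).
lime_exp = LimeTabularExplainer(X, feature_names=feature_names,
                                mode="classification")
top_lime = lime_exp.explain_instance(X[0], model.predict_proba,
                                     num_features=5).as_list()

print("SHAP top-5:", [feature_names[i] for i in shap_rank[:5]])
print("LIME top-5:", [name for name, weight in top_lime])
```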

    Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark

    In recent years, Explainable AI (xAI) has attracted a lot of attention as various countries have turned explanations into a legal right. xAI allows for improving models beyond the accuracy metric by, e.g., debugging the learned pattern and demystifying the AI's behavior. The widespread use of xAI has brought new challenges. On the one hand, the number of published xAI algorithms has boomed, and it has become difficult for practitioners to select the right tool. On the other hand, some experiments have highlighted how easily data scientists can misuse xAI algorithms and misinterpret their results. To tackle the issue of comparing and correctly using feature importance xAI algorithms, we propose Compare-xAI, a benchmark that unifies all exclusive functional testing methods applied to xAI algorithms. We propose a selection protocol to shortlist non-redundant functional tests from the literature, i.e., each targeting a specific end-user requirement in explaining a model. The benchmark encapsulates the complexity of evaluating xAI methods into a hierarchical scoring of three levels, targeting three end-user groups: researchers, practitioners, and laymen in xAI. The most detailed level provides one score per test. The second level regroups tests into five categories (fidelity, fragility, stability, simplicity, and stress tests). The last level is the aggregated comprehensibility score, which encapsulates the ease of correctly interpreting the algorithm's output in one easy-to-compare value. Compare-xAI's interactive user interface helps mitigate errors in interpreting xAI results by quickly listing the recommended xAI solutions for each ML task and their current limitations. The benchmark is available at https://karim-53.github.io/cxai.
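
    The three-level scoring described above can be pictured with a small aggregation sketch: one score per functional test, category averages over the five test groups, and a single aggregated comprehensibility score. The test names, the scores, and the unweighted averaging are invented for illustration; the actual tests and weighting are defined by the benchmark itself.

```python
from statistics import mean

per_test_scores = {                      # level 1: one score per test (invented)
    ("fidelity", "placeholder_fidelity_test_a"): 1.0,
    ("fidelity", "placeholder_fidelity_test_b"): 0.5,
    ("fragility", "placeholder_fragility_test"): 0.0,
    ("stability", "placeholder_stability_test"): 1.0,
    ("simplicity", "placeholder_simplicity_test"): 0.5,
    ("stress", "placeholder_stress_test"): 0.0,
}
categories = ["fidelity", "fragility", "stability", "simplicity", "stress"]

# Level 2: regroup the per-test scores into the five categories.
category_scores = {
    cat: mean(s for (c, _), s in per_test_scores.items() if c == cat)
    for cat in categories
}

# Level 3: aggregate into one easy-to-compare comprehensibility score
# (an unweighted mean is assumed here purely for illustration).
comprehensibility = mean(category_scores.values())
print(category_scores, round(comprehensibility, 2))
```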

    Decision support for efficient XAI services - A morphological analysis, business model archetypes, and a decision tree

    The black-box nature of Artificial Intelligence (AI) models and their associated explainability limitations create a major adoption barrier. Explainable Artificial Intelligence (XAI) aims to make AI models more transparent to address this challenge. Researchers and practitioners apply XAI services to explore relationships in data, improve AI methods, justify AI decisions, and control AI technologies, with the goals of improving knowledge about AI and addressing user needs. The market volume of XAI services has grown significantly. As a result, trustworthiness, reliability, transferability, fairness, and accessibility are required capabilities of XAI for a range of relevant stakeholders, including managers, regulators, users of XAI models, developers, and consumers. We contribute to theory and practice by deducing XAI archetypes and developing a user-centric decision support framework to identify the XAI services most suitable for the requirements of relevant stakeholders. Our decision tree is founded on a literature-based morphological box and a classification of real-world XAI services. Finally, we discuss archetypal business models of XAI services and exemplary use cases.
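
    As a toy illustration of a user-centric decision tree of this kind, the sketch below walks through a few stakeholder questions and returns an archetype label. The questions, their ordering, and the archetype names are illustrative assumptions, not the decision tree or archetypes derived in the paper.

```python
def recommend_xai_archetype(regulated_domain: bool,
                            needs_model_internals: bool,
                            audience_is_technical: bool) -> str:
    """Walk a few stakeholder questions and return an archetype label
    (all labels and branching are hypothetical)."""
    if regulated_domain:
        return "audit-oriented XAI service (traceable, documented explanations)"
    if needs_model_internals:
        return "developer-oriented XAI service (global, model-specific methods)"
    if audience_is_technical:
        return "analyst-oriented XAI service (post-hoc, model-agnostic methods)"
    return "consumer-oriented XAI service (simple, example-based explanations)"

print(recommend_xai_archetype(regulated_domain=False,
                              needs_model_internals=False,
                              audience_is_technical=True))
```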

    Why Should I Choose You? AutoXAI: A Framework for Selecting and Tuning eXplainable AI Solutions

    In recent years, a large number of XAI (eXplainable Artificial Intelligence) solutions have been proposed to explain existing ML (Machine Learning) models or to create interpretable ML models. Evaluation measures have recently been proposed, and it is now possible to compare these XAI solutions. However, selecting the most relevant XAI solution among all this diversity is still a tedious task, especially when meeting specific needs and constraints. In this paper, we propose AutoXAI, a framework that recommends the best XAI solution and its hyperparameters according to specific XAI evaluation metrics while considering the user's context (dataset, ML model, XAI needs and constraints). It adapts approaches from context-aware recommender systems and strategies of optimization and evaluation from AutoML (Automated Machine Learning). We apply AutoXAI to two use cases and show that it recommends XAI solutions adapted to the user's needs with the best hyperparameters matching the user's constraints.
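
    A minimal sketch of the kind of search such a framework automates is shown below: enumerate candidate XAI solutions and hyperparameter settings, drop those that violate the user's constraints, and keep the configuration that scores best on an XAI evaluation metric. The candidate list, the runtime constraint, and the scoring stub are illustrative assumptions, not AutoXAI's actual components or search strategy.

```python
import itertools
import random

def evaluate_explainer(name: str, params: dict) -> float:
    """Stand-in for a real XAI evaluation metric (e.g., fidelity on a
    held-out set); returns a random score purely for illustration."""
    return random.random()

# Candidate solutions with hyperparameter grids and rough runtimes (invented).
candidates = {
    "shap_kernel":  {"grid": {"nsamples": [100, 500]},     "runtime_s": 40},
    "lime_tabular": {"grid": {"num_samples": [500, 1000]}, "runtime_s": 5},
}
max_runtime_s = 10   # user constraint: explanations must be fast

best = None
for name, spec in candidates.items():
    if spec["runtime_s"] > max_runtime_s:
        continue                                   # violates the constraint
    keys, values = zip(*spec["grid"].items())
    for combo in itertools.product(*values):
        params = dict(zip(keys, combo))
        score = evaluate_explainer(name, params)
        if best is None or score > best[0]:
            best = (score, name, params)

print("Recommended XAI solution and hyperparameters:", best)
```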

    Designing Gamification Concepts for Expert Explainable Artificial Intelligence Evaluation Tasks: A Problem Space Exploration

    Artificial intelligence (AI) models are often complex and require additional explanations for use in high-stakes decision-making contexts like healthcare. To this end, explainable AI (XAI) developers must evaluate their explanations with domain experts to ensure understandability. As these evaluations are tedious and repetitive, we look at gamification as a means to motivate and engage experts in XAI evaluation tasks. We explore the problem space associated with gamified expert XAI evaluation. Based on a literature review of 22 relevant studies and seven interviews with experts in XAI evaluation, we elicit knowledge about the affected stakeholders, eight needs, eight goals, and seven requirements. Our results help us better understand the problems associated with expert XAI evaluation and point to a broad application potential for gamification to improve expert XAI evaluations. In doing so, we lay the foundation for the design of successful gamification concepts for expert XAI evaluation.

    Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study

    While the emerging research field of explainable artificial intelligence (XAI) claims to address the lack of explainability in high-performance machine learning models, in practice XAI targets developers rather than actual end-users. Unsurprisingly, end-users are often unwilling to use XAI-based decision support systems. Similarly, there is limited interdisciplinary research on end-users' behavior when using XAI explanations, so it remains unknown how explanations affect cognitive load and, in turn, end-user performance. Therefore, we conducted an empirical study with 271 prospective physicians, measuring their cognitive load, task performance, and task time for distinct implementation-independent XAI explanation types using a COVID-19 use case. We found that these explanation types strongly influence end-users' cognitive load, task performance, and task time. Further, we contextualized a mental efficiency metric, ranking local XAI explanation types best, to provide recommendations for future applications and implications for sociotechnical XAI research.
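
    One common way to contextualize a mental efficiency metric is the standardized-difference formulation of Paas and van Merriënboer, E = (z_performance − z_load) / √2; whether the study uses exactly this operationalization is an assumption here, and the sketch below uses invented numbers purely to show the computation.

```python
import numpy as np

# Invented per-condition measurements (e.g., one value per explanation type).
performance = np.array([0.8, 0.6, 0.9, 0.7])     # e.g., mean task accuracy
cognitive_load = np.array([3.2, 4.5, 2.8, 3.9])  # e.g., self-reported load

# Standardize both measures, then combine: higher performance at lower
# load yields higher mental efficiency.
z_p = (performance - performance.mean()) / performance.std(ddof=1)
z_l = (cognitive_load - cognitive_load.mean()) / cognitive_load.std(ddof=1)
efficiency = (z_p - z_l) / np.sqrt(2)

ranking = np.argsort(efficiency)[::-1]           # rank explanation types
print(efficiency.round(2), ranking)
```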