    A data-driven decision support framework for DEA target setting: an explainable AI approach

    The intention of target setting for Decision-Making Units (DMUs) in Data Envelopment Analysis (DEA) is to perform better than their peers or to reach a reference efficiency level. However, the targets are usually derived from mathematical models and are often not achievable in practice. Moreover, these models prescribe decreases or increases in inputs and outputs that may not be feasible given a DMU's real-world potential. We propose a data-driven decision support framework that sets actionable and feasible targets based on the vital inputs and outputs. To do so, DMUs are classified into their corresponding Efficiency Frontier (EF) levels using a multiple-EFs approach and a machine learning classifier. Then, the vital inputs and outputs are identified using an Explainable Artificial Intelligence (XAI) method. Finally, a Multi-Objective Counterfactual Explanation based on DEA (MOCE-DEA) is developed to guide each DMU toward the reference EF by adjusting actionable and feasible inputs and outputs. We studied Iranian hospitals to evaluate the proposed framework and present two cases to demonstrate its mechanism. The results show that the performance of the studied DMUs improves to reach the reference EF. A validation with the primal DEA model was then conducted, showing that adjusting the DMUs' original values according to the solutions generated by the framework yields robust improvements: the adjusted values also improve the DMUs' performance in the primal DEA model.
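    For orientation, the sketch below shows a generic input-oriented CCR model, the primal DEA formulation the abstract uses for validation, solved as a linear program. It is an illustrative sketch rather than the authors' code; the toy hospital data, the function name ccr_efficiency, and the use of scipy.optimize.linprog are assumptions.

```python
# A minimal sketch, not the authors' implementation: the input-oriented CCR
# (primal envelopment) model solved with scipy's linear-programming solver.
# The data below are toy values for three hypothetical hospitals.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.
    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) output matrix."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):   # sum_j lambda_j * x_ij <= theta * x_ik
        A_ub.append(np.concatenate(([-X[k, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):   # sum_j lambda_j * y_rj >= y_rk
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[k, r])
    bounds = [(0, None)] * (n + 1)          # theta >= 0, lambda_j >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[0]                         # theta = 1 means DMU k is efficient

# Toy data: 2 inputs (beds, staff) and 1 output (treated patients).
X = np.array([[20.0, 50.0], [30.0, 40.0], [25.0, 60.0]])
Y = np.array([[100.0], [120.0], [90.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(3)])
```

    A DMU whose theta equals one lies on the efficiency frontier, so the framework's adjusted inputs and outputs should move an inefficient DMU's theta toward one.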

    Measuring Perceived Trust in XAI-Assisted Decision-Making by Eliciting a Mental Model

    This empirical study proposes a novel methodology to measure users' perceived trust in an Explainable Artificial Intelligence (XAI) model. To do so, users' mental models are elicited using Fuzzy Cognitive Maps (FCMs). First, we exploit an interpretable Machine Learning (ML) model to classify suspected COVID-19 patients into positive or negative cases. Then, Medical Experts (MEs) conduct a diagnostic decision-making task based on their knowledge and on the predictions and interpretations provided by the XAI model. To evaluate the impact of the interpretations on perceived trust, MEs rate explanation satisfaction attributes through a survey. These attributes are then treated as FCM concepts to determine their influences on each other and, ultimately, on the perceived trust. Moreover, to account for MEs' mental subjectivity, fuzzy linguistic variables are used to determine the strength of the influences. After the FCM reaches its steady state, a quantified value is obtained that measures the perceived trust of each ME. The results show that the quantified values can determine whether MEs trust or distrust the XAI model. We analyze this behavior by comparing the quantified values with MEs' performance in completing the diagnostic tasks. (Accepted at the IJCAI 2023 Workshop on Explainable Artificial Intelligence (XAI).)
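    The sketch below illustrates the standard FCM inference this methodology relies on: concept activations are updated through a sigmoid transfer function until the map reaches a steady state, and the final activation of the trust concept gives the quantified value. The concepts, edge weights, and initial ratings are illustrative placeholders, not the study's elicited values.

```python
# A minimal sketch, assuming standard sigmoid-FCM inference; the concepts,
# weights, and survey ratings below are illustrative, not the study's values.
import numpy as np

def run_fcm(W, a0, lam=1.0, tol=1e-5, max_iter=200):
    """Iterate A(t+1) = sigmoid(A(t) + A(t) @ W) until a steady state.
    W[j, i] is the influence of concept j on concept i."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        a_next = 1.0 / (1.0 + np.exp(-lam * (a + a @ W)))
        if np.max(np.abs(a_next - a)) < tol:   # steady state reached
            return a_next
        a = a_next
    return a

# Concepts: [understandability, sufficiency of detail, usefulness, perceived trust]
W = np.array([
    [0.0, 0.0, 0.4, 0.6],   # understandability -> usefulness, trust
    [0.0, 0.0, 0.3, 0.5],   # sufficiency of detail -> usefulness, trust
    [0.0, 0.0, 0.0, 0.7],   # usefulness -> trust
    [0.0, 0.0, 0.0, 0.0],   # trust influences nothing in this toy map
])
a0 = [0.8, 0.6, 0.7, 0.0]   # an expert's survey ratings as initial activations
print(run_fcm(W, a0).round(3))   # last entry: quantified perceived trust
```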

    Implementing bargaining game-based fuzzy cognitive map and mixed-motive games for group decisions in the healthcare supplier selection

    Evaluating and selecting proper suppliers for healthcare centers is vital because of their high impact on the financial situation and on citizens' satisfaction. The abundance of criteria affecting the Supplier Selection (SS) problem makes it a challenging decision-making problem. To this end, an approach based on a Bargaining Game-based Fuzzy Cognitive Map (BG-FCM) and mixed-motive games is proposed to simultaneously model the complexity of SS and the suppliers' competition in the market. First, the causal relationships between the SS criteria are determined with the BG-FCM. Then, by applying Particle Swarm Optimization with an S-shaped transfer function (PSO-STF) and scenario-making, the BG-FCM is executed to extract robust payoffs for the competing suppliers. The competition between suppliers is modeled with mixed-motive games over these robust payoffs to determine each supplier's best strategy. Finally, suppliers compete with each other pairwise, and suppliers with the most wins receive higher priority. The proposed approach has been applied in a general hospital to evaluate its major suppliers for purchasing necessities. It is then compared with two well-known Multi-Criteria Decision Making (MCDM) approaches and shows better performance in modeling the complexity and competition in the problem. The proposed approach can help the hospital select the most appropriate suppliers according to its preferences and avoid cooperating with inappropriate suppliers, which could result in a low-quality Supply Chain (SC) system or financial calamities.
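    As a rough illustration of the final pairwise-competition step, the sketch below plays a 2x2 game for each pair of suppliers, finds the pure-strategy Nash equilibria by best responses, and ranks suppliers by their number of pairwise wins. The supplier names, strategies, and payoff matrices are hypothetical stand-ins for the robust payoffs that the BG-FCM with PSO-STF would produce.

```python
# A minimal sketch with hypothetical payoffs: pairwise 2x2 games are solved by
# best responses, and suppliers are ranked by the number of pairwise wins.
from itertools import product

def pure_nash(pay_a, pay_b):
    """Pure-strategy Nash equilibria of a 2x2 game; pay_x[r][c] is player x's payoff."""
    eqs = []
    for r, c in product(range(2), range(2)):
        row_best = pay_a[r][c] >= max(pay_a[k][c] for k in range(2))
        col_best = pay_b[r][c] >= max(pay_b[r][k] for k in range(2))
        if row_best and col_best:
            eqs.append((r, c))
    return eqs

# Hypothetical payoff matrices per supplier pair (rows/cols are strategies such
# as "cooperate" / "compete"); the values stand in for robust BG-FCM payoffs.
games = {
    ("S1", "S2"): ([[0.6, 0.4], [0.7, 0.5]], [[0.5, 0.6], [0.3, 0.4]]),
    ("S1", "S3"): ([[0.6, 0.5], [0.8, 0.4]], [[0.4, 0.7], [0.5, 0.6]]),
    ("S2", "S3"): ([[0.5, 0.6], [0.4, 0.7]], [[0.6, 0.5], [0.7, 0.4]]),
}

wins = {s: 0 for s in ("S1", "S2", "S3")}
for (a, b), (pay_a, pay_b) in games.items():
    for r, c in pure_nash(pay_a, pay_b):
        if pay_a[r][c] > pay_b[r][c]:
            wins[a] += 1
        elif pay_b[r][c] > pay_a[r][c]:
            wins[b] += 1

print(sorted(wins, key=wins.get, reverse=True), wins)   # most wins = top priority
```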

    Trustworthy Artificial Intelligence in Medical Applications: A Mini Survey

    Nowadays, a large amount of structured and unstructured data is being produced in various fields, creating tremendous opportunities to implement Machine Learning (ML) algorithms for decision-making. Although ML algorithms can outperform humans in some fields, the inherent black-box characteristics of advanced models can hinder experts from exploiting them in sensitive domains such as medicine. The black-box nature of advanced ML models obscures the transparency of these algorithms, which, given their complexity, can hamper their fair and robust performance. Consequently, individuals, organizations, and societies will not achieve the full potential of ML without establishing trust in its development, deployment, and use. The field of eXplainable Artificial Intelligence (XAI) endeavors to solve this problem by providing human-understandable explanations for black-box models as a potential route to trustworthy AI. However, explainability is only one of the requirements for trustworthy AI, and other prerequisites must also be met. Hence, this survey analyzes the fulfillment of five algorithmic requirements (accuracy, transparency, trust, robustness, and fairness) through the lens of the literature in the medical domain. Given that medical experts are reluctant to put their judgment aside in favor of a machine, fulfilling these algorithmic requirements could be a way to convince them to use ML. The results show that there is still a long way to go to implement the algorithmic requirements in practice, and scholars need to consider them in future studies.

    Enhancing risk assessment of manufacturing production process integrating failure modes and sequential fuzzy cognitive map

    When a risk occurs at one stage of the production process, it can be caused by risks in the previous stages or can contribute to risks in the later stages. This paper proposes an intelligent approach based on such cause-and-effect relationships to assess and prioritize a manufacturing unit's risks. Sequential multi-stage fuzzy cognitive maps (MSFCMs) are used to draw the map of risks. Then, a learning algorithm is implemented to train the MSFCM and finalize the risk scores. A case study on an auto-parts manufacturing unit demonstrates the capabilities of the proposed approach.
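    The sketch below conveys the sequential idea under stated assumptions: each production stage is modeled as its own small FCM of risks, and a stage's steady-state risk levels seed the next stage through a coupling matrix. The stage maps, coupling weights, and initial risk levels are illustrative, not the paper's learned MSFCM.

```python
# A minimal sketch of the sequential idea under stated assumptions: each stage
# has its own small risk FCM, and a coupling matrix carries a stage's final
# risk levels into the next stage's initial activations.
import numpy as np

def fcm_steady_state(W, a, lam=1.0, tol=1e-5, max_iter=200):
    """Iterate a sigmoid-FCM until the concept activations stop changing."""
    a = np.asarray(a, dtype=float)
    for _ in range(max_iter):
        a_next = 1.0 / (1.0 + np.exp(-lam * (a + a @ W)))
        if np.max(np.abs(a_next - a)) < tol:
            return a_next
        a = a_next
    return a

# Two illustrative stages with two risk concepts each.
W_stage = [np.array([[0.0, 0.5], [0.2, 0.0]]),
           np.array([[0.0, 0.6], [0.3, 0.0]])]
coupling = np.array([[0.4, 0.1],    # coupling[j, i]: stage-t risk j feeds
                     [0.2, 0.5]])   # stage-(t+1) risk i

a = np.array([0.6, 0.3])            # initial risk levels of stage 1
for t, W in enumerate(W_stage):
    a = fcm_steady_state(W, a)
    print(f"stage {t + 1} risk scores:", a.round(3))
    a = np.clip(a @ coupling, 0.0, 1.0)   # seed the next stage's risks
```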