    Trustworthy Explainability Acceptance: A New Metric to Measure the Trustworthiness of Interpretable AI Medical Diagnostic Systems

    We propose the Trustworthy Explainability Acceptance metric to evaluate explainable AI systems with experts in the loop. The metric calculates acceptance by quantifying the distance between the explanations generated by the AI system and the reasoning provided by the experts, based on their expertise and experience. It also evaluates the trust placed in the experts, so that different groups of experts can be included through our trust mechanism. The metric can easily be adapted to any interpretable AI system and used in the standardization process of trustworthy AI systems. We illustrate the proposed metric using a high-stakes medical AI application: predicting Ductal Carcinoma in Situ (DCIS) recurrence. Our metric successfully captures experts' acceptance of the explainability of AI systems for DCIS recurrence.
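
    A minimal sketch of how such an acceptance score might be computed, assuming both the AI explanation and each expert's reasoning are encoded as feature-importance vectors in [0, 1] and each expert carries a trust weight from the trust mechanism; the function name explainability_acceptance, the normalization, and the example values are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def explainability_acceptance(ai_explanation, expert_reasonings, expert_trust):
        """Illustrative sketch: acceptance as the trust-weighted closeness between
        the AI explanation and each expert's reasoning, both given as
        feature-importance vectors in [0, 1]. Closeness of 1.0 means perfect
        agreement between the AI explanation and that expert."""
        ai = np.asarray(ai_explanation, dtype=float)
        distances = [np.linalg.norm(ai - np.asarray(e, dtype=float))
                     for e in expert_reasonings]
        max_dist = np.sqrt(len(ai))  # worst case for vectors bounded in [0, 1]
        closeness = 1.0 - np.array(distances) / max_dist
        weights = np.asarray(expert_trust, dtype=float)
        return float(np.average(closeness, weights=weights))

    # Hypothetical DCIS-recurrence example: importance of three clinical features
    ai_expl = [0.8, 0.1, 0.6]
    experts = [[0.7, 0.2, 0.5], [0.9, 0.0, 0.7]]
    trust = [0.9, 0.6]  # trust scores assumed to come from the trust mechanism
    print(explainability_acceptance(ai_expl, experts, trust))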

    Trustworthy Acceptance: A New Metric for Trustworthy Artificial Intelligence Used in Decision Making in Food–Energy–Water Sectors

    We propose, for the first time, a trustworthy acceptance metric and its measurement methodology to evaluate the trustworthiness of AI-based systems used in decision making for Food-Energy-Water (FEW) management. The proposed metric is a significant step forward in the standardization process of AI systems: standardizing the trustworthiness of AI systems is essential, but until now standardization efforts have remained at the level of high-level principles. The measurement methodology of the proposed metric includes human experts in the loop and is based on our trust management system. Our metric captures and quantifies the system's transparent evaluation by field experts at as many control points as the users desire. We illustrate the trustworthy acceptance metric and its measurement methodology using AI-based decision-making scenarios in the Food-Energy-Water sectors; however, the proposed metric and its methodology can easily be adapted to other fields of AI application. We show that our metric successfully captures the aggregated acceptance of any number of experts, can be used for multiple measurements at various points of the system, and provides confidence values for the measured acceptance.
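
    The aggregation step might look roughly like the sketch below, assuming each field expert rates a control point in [0, 1] and carries a trust score from the trust management system; treating the average trust mass as the confidence value is an assumption made for illustration, not the published measurement methodology.

    def trustworthy_acceptance(ratings, trust_scores):
        """Illustrative aggregation at one control point.

        ratings      : expert votes in [0, 1] (1 = full acceptance)
        trust_scores : trust values in [0, 1] from the trust management system
        Returns (acceptance, confidence); confidence grows with the trust mass
        behind the measurement. A sketch, not the paper's exact formula."""
        total_trust = sum(trust_scores)
        if total_trust == 0:
            return 0.0, 0.0
        acceptance = sum(r * t for r, t in zip(ratings, trust_scores)) / total_trust
        confidence = total_trust / len(trust_scores)  # assumed normalization
        return acceptance, confidence

    # Hypothetical FEW decision control point rated by three experts
    print(trustworthy_acceptance([1.0, 0.8, 0.4], [0.9, 0.7, 0.3]))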

    Outcomes from elective colorectal cancer surgery during the SARS-CoV-2 pandemic

    This study aimed to describe the change in surgical practice and the impact of SARS-CoV-2 on mortality after surgical resection of colorectal cancer during the initial phases of the SARS-CoV-2 pandemic.

    Trustworthy and Causal Artificial Intelligence in Environmental Decision Making

    Indiana University-Purdue University Indianapolis (IUPUI)
    We present a framework for Trustworthy Artificial Intelligence (TAI) that dynamically assesses trust and scrutinizes past decision-making, aiming to model both individual and community behavior. The behavior model incorporates two proposed concepts, trust pressure and trust sensitivity, which lay the foundation for predicting future decision-making in terms of community behavior, consensus level, and decision-making duration. Our framework involves the development and mathematical modeling of trust pressure and trust sensitivity, drawing on social validation theory in the context of environmental decision-making. To substantiate our approach, we conduct experiments encompassing (i) dynamic trust sensitivity, to reveal the impact of actors learning between decisions; (ii) multi-level trust measurements, to capture disruptive ratings; and (iii) different distributions of trust sensitivity, to emphasize the significance of individual as well as overall progress. Additionally, we introduce the TAI metrics trustworthy acceptance and trustworthy fairness, designed to evaluate the acceptance of decisions proposed by AI or humans and the fairness of such proposed decisions. The dynamic trust management within the framework allows these TAI metrics to discern support for decisions among individuals with varying levels of trust. We propose both the metrics and their measurement methodology as contributions to the standardization of trustworthy AI. Furthermore, our trustability metric incorporates reliability, resilience, and trust to evaluate systems with multiple components. We illustrate experiments showcasing the effects of different trust declines on the overall trustability of the system. Notably, we depict the trade-off between trustability and cost, resulting in a net utility that facilitates decision-making in systems and cloud security. This represents a pivotal step toward an artificial control model involving multiple agents engaged in negotiation. Lastly, the dynamic management of trust and trustworthy acceptance, particularly under varying criteria, serves as a foundation for causal AI by providing inference methods. We outline a mechanism and present an experiment on human-driven causal inference, in which participant discussions act as interventions, enabling counterfactual evaluations once actor and community behavior are modeled.
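
    As one way to picture the proposed concepts, the sketch below models trust pressure as the trust-weighted pull of the community consensus on an actor's position and trust sensitivity as how strongly the actor yields to that pressure; the linear update rule and the function name updated_position are assumptions for illustration, not the dissertation's exact model.

    def updated_position(own_position, community_position, trust_in_community, sensitivity):
        """Illustrative sketch of a trust-pressure-driven adjustment.

        Trust pressure is modeled here as the trust-weighted pull of the
        community consensus on the actor's own position; trust sensitivity
        scales how strongly the actor yields to that pressure."""
        pressure = trust_in_community * (community_position - own_position)
        return own_position + sensitivity * pressure

    # Hypothetical actor at position 0.2 facing a community consensus at 0.8
    print(updated_position(0.2, 0.8, trust_in_community=0.75, sensitivity=0.5))
    # A highly sensitive actor moves further toward the consensus in a single step.
    print(updated_position(0.2, 0.8, trust_in_community=0.75, sensitivity=0.9))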

    Requirements for Trustworthy Artificial Intelligence – A Review

    The field of algorithmic decision-making, particularly Artificial Intelligence (AI), has been changing drastically. With the availability of massive amounts of data and an increase in processing power, AI systems have been used in a vast number of high-stake applications, so it becomes vital to make these systems reliable and trustworthy. Different approaches have been proposed to make these systems trustworthy. In this paper, we review these approaches and summarize them based on the principles proposed by the European Union for trustworthy AI. This review provides an overview of the different principles that are important for making AI trustworthy.

    Trustability for Resilient Internet of Things Services on 5G Multiple Access Edge Cloud Computing

    Billions of Internet of Things (IoT) devices and sensors are expected to be supported by fifth-generation (5G) wireless cellular networks. This highly connected structure is predicted to attract different and previously unseen types of attacks on devices, sensors, and networks, which require advanced mitigation strategies and active monitoring of the system components. Therefore, a paradigm shift is needed, from traditional prevention and detection approaches toward resilience. This study proposes a trust-based defense framework to ensure resilient IoT services on 5G multi-access edge computing (MEC) systems. The framework is based on the trustability metric, which extends the concept of reliability and measures how much a system can be trusted to maintain a given level of performance under a specific successful attack vector. Furthermore, trustability is traded off against system cost to measure the net utility of the system. Systems using multiple sensors with different levels of redundancy were tested, and the framework was shown to measure the trustability of the entire system. Different types of attacks were then simulated on an edge cloud with multiple nodes, and the resulting trustability was compared with and without the capability to dynamically add nodes for redundancy and remove untrusted nodes. Finally, the defense framework measured the net utility of the service, comparing the two types of edge clouds with and without the node deactivation capability. Overall, the proposed defense framework based on trustability ensures a satisfactory level of resilience for IoT on 5G MEC systems as a trade-off with an accepted cost of redundant resources under various attacks.
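
    A rough sketch of how trustability might extend reliability for redundant nodes and trade off against cost, assuming each node's trust value stands in for its probability of withstanding an attack; the parallel-redundancy composition, the cost weighting, and the names trustability_parallel and net_utility are illustrative assumptions rather than the paper's formulas.

    def trustability_parallel(node_trust):
        """Illustrative reliability-style combination for redundant nodes: the
        service fails only if every redundant node fails, where each node's
        trust value in [0, 1] stands in for its probability of withstanding
        the attack."""
        failure = 1.0
        for t in node_trust:
            failure *= (1.0 - t)
        return 1.0 - failure

    def net_utility(trustability, cost, cost_weight=0.5):
        """Sketch of the trustability-versus-cost trade-off: utility gained from
        trustability minus a weighted, normalized cost of redundant resources."""
        return trustability - cost_weight * cost

    # Hypothetical edge cloud: three redundant nodes, one degraded by an attack
    nodes = [0.9, 0.8, 0.3]
    t = trustability_parallel(nodes)
    print(t, net_utility(t, cost=0.6))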

    A Trustworthy Human–Machine framework for collective decision making in Food–Energy–Water management: The role of trust sensitivity

    We propose a hybrid Trustworthy Human–Machine collective decision-making framework to manage Food–Energy–Water (FEW) resources. Decisions for managing such resources affect not only the environment but also the economic productivity of the FEW sectors and the well-being of society. Therefore, while algorithms can be used to develop optimal solutions under various criteria, it is essential to explain such solutions to the community; more importantly, the community should accept such solutions so that they can realistically be applied. In our collaborative computational framework for decision support, machines and humans interact to converge on the best solutions accepted by the community. In this framework, trust among human actors during decision making is measured and managed using a novel trust management framework. Furthermore, such trust is used to encourage human actors, depending on their trust sensitivity, to choose among the solutions generated by algorithms that satisfy the community's preferred trade-offs among various objectives. In this paper, we show different scenarios of decision making with continuous and discrete solutions. We then propose a game-theoretic approach in which actors maximize a payoff that combines their share and their trust, weighted by their trust sensitivity. We run simulations for decision-making scenarios with actors having different distributions of trust sensitivities. Results showed that when actors have high trust sensitivity, a consensus is reached 52% faster than in scenarios with low trust sensitivity. The use of ratings of ratings increased the solution trustworthiness by 50%, and the same level of solution trustworthiness was reached 2.7 times faster when ratings of ratings were included.
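
    A minimal sketch of the kind of payoff such a game-theoretic approach could use, assuming each decision option gives an actor a share and a trust gain and that trust sensitivity interpolates between the two; the convex combination, the names payoff and best_allocation, and the example options are assumptions for illustration, not the paper's payoff function.

    def payoff(share, trust, sensitivity):
        """Illustrative payoff: an actor with low trust sensitivity cares mostly
        about its own share, while a highly sensitive actor weights the trust it
        earns in the community more heavily."""
        return (1.0 - sensitivity) * share + sensitivity * trust

    def best_allocation(options, sensitivity):
        """Pick the (share, trust) option that maximizes the actor's payoff."""
        return max(options, key=lambda o: payoff(o[0], o[1], sensitivity))

    # Hypothetical FEW allocation options as (own share, trust earned) pairs
    options = [(0.9, 0.2), (0.6, 0.7), (0.4, 0.9)]
    print(best_allocation(options, sensitivity=0.2))   # self-interested actor
    print(best_allocation(options, sensitivity=0.8))   # trust-sensitive actor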