
    Anomaly Attribution with Likelihood Compensation

    This paper addresses the task of explaining anomalous predictions of a black-box regression model. When using a black-box model, such as one to predict building energy consumption from many sensor measurements, we often have a situation where some observed samples significantly deviate from their predictions. This may be due to a sub-optimal black-box model, or simply because those samples are outliers. In either case, one would ideally want to compute a "responsibility score" indicative of the extent to which an input variable is responsible for the anomalous output. In this work, we formalize this task as a statistical inverse problem: given the model's deviation from the expected value, infer the responsibility score of each of the input variables. We propose a new method called likelihood compensation (LC), which is founded on the likelihood principle and computes a correction to each input variable. To the best of our knowledge, this is the first principled framework that computes a responsibility score for real-valued anomalous model deviations. We apply our approach to a real-world building energy prediction task and confirm its utility based on expert feedback. Comment: 8 pages, 7 figures.
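
    The following Python sketch illustrates the general idea of a likelihood-based responsibility score under simplifying assumptions (a Gaussian likelihood and an L1 penalty on the corrections); it is not the paper's exact LC formulation, and the toy model, variable values, and penalty weight are illustrative.

```python
# A minimal, illustrative sketch of a likelihood-compensation-style
# responsibility score (not the paper's exact formulation): find the
# smallest per-variable correction delta such that the corrected input
# x + delta explains the observed outcome y under a Gaussian likelihood.
import numpy as np
from scipy.optimize import minimize

def responsibility_scores(model_predict, x, y, sigma=1.0, lam=0.1):
    """Return a correction per input variable; a larger |delta_i| suggests
    variable i contributes more to the anomalous deviation."""
    def objective(delta):
        resid = y - model_predict(x + delta)          # deviation after correction
        nll = 0.5 * (resid / sigma) ** 2              # Gaussian negative log-likelihood
        return nll + lam * np.sum(np.abs(delta))      # keep corrections small and sparse
    res = minimize(objective, np.zeros_like(x), method="Powell")
    return res.x

# Toy usage with a hypothetical linear "black box"
w = np.array([2.0, -1.0, 0.5])
f = lambda x: float(w @ x)
x_obs, y_obs = np.array([1.0, 0.0, 2.0]), 10.0        # y deviates strongly from f(x_obs) = 3.0
print(responsibility_scores(f, x_obs, y_obs))
```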

    AI Governance in Healthcare: Explainability Standards, Safety Protocols, and Human-AI Interactions Dynamics in Contemporary Medical AI Systems

    The fast-growing incorporation of artificial intelligence (AI) into the modern healthcare industry necessitates immediate consideration of its legal and ethical dimensions. In this research, we focus on three principal areas requiring specific, contextual direction from both governmental entities and industry participants to guide the responsible and ethical progression of AI in healthcare. First, the research discusses standards for explainability. Within healthcare, understanding AI-driven decisions is vital because of their profound implications for human health. Various participants, from patients to oversight bodies, require differing levels of transparency and explanation from AI systems. Next, we examine safety protocols. Given that employing AI in healthcare could result in decisions that carry severe ramifications, we argue for evaluating its objective criteria, search parameters, training applicability, risk of poor data, and other possible risks. Finally, we discuss the dynamics of human-AI interaction. Optimal interaction necessitates the creation of AI systems that augment human capabilities and acknowledge human cognitive processes. The involvement of AI system users in healthcare, defined through tiers of understanding, contribution, and oversight, spans from elementary to advanced engagements. Each tier relates to the depth of comprehension, the scope of data contribution, and the level of oversight exercised by the healthcare specialist regarding the AI instrument. This research emphasizes the necessity for specific guidelines for each of the three dimensions to guarantee the secure, ethical, and efficient utilization of AI in healthcare.

    Using Model Explanations to Guide Deep Learning Models Towards Consistent Explanations for EHR Data

    It has been shown that identical Deep Learning (DL) architectures will produce distinct explanations when trained with different hyperparameters that are orthogonal to the task (e.g. random seed, training set order). In domains such as healthcare and finance, where transparency and explainability are paramount, this can be a significant barrier to DL adoption. In this study we present a further analysis of explanation (in)consistency on 6 tabular datasets/tasks, with a focus on Electronic Health Records data. We propose a novel deep learning ensemble architecture that trains its sub-models to produce consistent explanations, improving explanation consistency by as much as 315% (e.g. from 0.02433 to 0.1011 on MIMIC-IV), and on average by 124% (e.g. from 0.12282 to 0.4450 on the BCW dataset). We evaluate the effectiveness of our proposed technique and discuss the implications of our results for industrial applications of DL and explainability, as well as for future methodological work.
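
    As a rough illustration of how explanation consistency between differently seeded models might be quantified, the Python sketch below averages the per-sample rank correlation of feature attributions produced by two models; the paper's actual consistency metric and ensemble training procedure are not reproduced here, and the attribution data are synthetic.

```python
# A hedged sketch of one way to quantify explanation (in)consistency across
# models trained with different seeds: the average rank correlation between
# the feature attributions two models assign to the same samples.
import numpy as np
from scipy.stats import spearmanr

def explanation_consistency(attributions_a, attributions_b):
    """attributions_*: (n_samples, n_features) arrays of per-feature attributions
    (e.g. from SHAP or integrated gradients) for two models on the same data."""
    corrs = []
    for a_row, b_row in zip(attributions_a, attributions_b):
        rho, _ = spearmanr(a_row, b_row)   # rank correlation per sample
        corrs.append(rho)
    return float(np.mean(corrs))

# Toy usage with random attributions for two hypothetical sub-models
rng = np.random.default_rng(0)
attr_a = rng.normal(size=(100, 8))
attr_b = attr_a + rng.normal(scale=0.5, size=(100, 8))   # partially consistent
print(explanation_consistency(attr_a, attr_b))
```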

    Vamsa: Automated Provenance Tracking in Data Science Scripts

    There has recently been a great deal of research in the areas of fairness, bias, and explainability of machine learning (ML) models, driven by the self-evident or regulatory requirements of various ML applications. We make the following observation: all of these approaches require a robust understanding of the relationship between ML models and the data used to train them. In this work, we introduce the ML provenance tracking problem: the fundamental idea is to automatically track which columns in a dataset have been used to derive the features/labels of an ML model. We discuss the challenges in capturing such information in the context of Python, the most common language used by data scientists. We then present Vamsa, a modular system that extracts provenance from Python scripts without requiring any changes to the users' code. Using 26K real data science scripts, we verify the effectiveness of Vamsa in terms of coverage and performance. We also evaluate Vamsa's accuracy on a smaller subset of manually labeled data. Our analysis shows that Vamsa's precision and recall range from 90.4% to 99.1% and its latency is on the order of milliseconds for average-size scripts. Drawing from our experience in deploying ML models in production, we also present an example in which Vamsa helps automatically identify models that are affected by data corruption issues.
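
    The Python sketch below conveys the flavor of the provenance-tracking idea (it is not Vamsa's implementation): statically walk a script's abstract syntax tree and collect the string-literal keys used in subscript expressions such as dataframe column selections. The example script, file name, and column names are made up.

```python
# A minimal illustration of static column-provenance extraction (not Vamsa):
# parse a data science script with Python's ast module and collect the
# string-literal keys used in subscript expressions, e.g. df["col"].
import ast

SCRIPT = """
import pandas as pd
df = pd.read_csv("energy.csv")
X = df[["temperature", "humidity"]]
y = df["consumption"]
"""

class ColumnCollector(ast.NodeVisitor):
    def __init__(self):
        self.columns = set()

    def visit_Subscript(self, node):
        # Handles both df["col"] and df[["a", "b"]]
        for const in ast.walk(node.slice):
            if isinstance(const, ast.Constant) and isinstance(const.value, str):
                self.columns.add(const.value)
        self.generic_visit(node)

collector = ColumnCollector()
collector.visit(ast.parse(SCRIPT))
print(sorted(collector.columns))   # ['consumption', 'humidity', 'temperature']
```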

    Frameworks for data-driven quality management in cyber-physical systems for manufacturing: A systematic review

    Recent advances in the manufacturing industry have enabled the deployment of Cyber-Physical Systems (CPS) at scale. By utilizing advanced analytics, data from production can be analyzed and used to monitor and improve process and product quality. Many frameworks for implementing CPS have been developed to structure the relationship between the digital and the physical worlds. However, there is no systematic review of the existing frameworks related to quality management in manufacturing CPS. Thus, our study aims to identify and compare the existing frameworks. The systematic review yielded 38 frameworks, which are analyzed with regard to their characteristics, their use of data science and Machine Learning (ML), and their shortcomings and open research issues. The identified issues mainly relate to limitations in cross-industry/cross-process applicability, the use of ML, big data handling, and data security.

    The Enlightening Role of Explainable Artificial Intelligence in Chronic Wound Classification

    Artificial Intelligence (AI) has become one of the most active research and industrial application fields, especially in the healthcare domain, yet over the past decades it has largely operated as a black box, with limited understanding of its inner workings. AI algorithms are, in large part, built on weights calculated as the result of large matrix multiplications, and these computationally intensive processes are typically hard to interpret and debug. Explainable Artificial Intelligence (XAI) aims to address such black-box, hard-to-debug approaches through the use of various techniques and tools. In this study, XAI techniques are applied to chronic wound classification. The proposed model classifies chronic wounds through the use of transfer learning and fully connected layers. Classified chronic wound images serve as input to the XAI model for explanation. Interpretable results can offer clinicians new perspectives during the diagnostic phase. The proposed method successfully provides chronic wound classification and its associated explanation, extracting additional knowledge that can also be interpreted by non-data-science experts, such as medical scientists and physicians. This hybrid approach is shown to aid the interpretation and understanding of AI decision-making processes.
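
    Below is a hedged Python/PyTorch sketch of the general recipe, not the authors' exact model: a pretrained CNN backbone with a new fully connected head for wound classification, plus a simple occlusion-based saliency map as one possible form of explanation. The number of classes, class names, image size, and patch size are illustrative assumptions.

```python
# Transfer learning plus a simple occlusion explanation (illustrative sketch).
import torch
import torch.nn as nn
from torchvision import models

NUM_WOUND_CLASSES = 4  # hypothetical, e.g. diabetic, pressure, venous, surgical

# Transfer learning: freeze the pretrained backbone, train only the new head.
backbone = models.resnet18(weights="IMAGENET1K_V1")  # downloads pretrained weights
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_WOUND_CLASSES)

def occlusion_saliency(model, image, target_class, patch=32, stride=32):
    """Explain a prediction by measuring how the target-class score drops
    when square patches of the image are masked out."""
    model.eval()
    _, _, h, w = image.shape
    with torch.no_grad():
        base = model(image)[0, target_class].item()
        saliency = torch.zeros(h // stride, w // stride)
        for i in range(0, h - patch + 1, stride):
            for j in range(0, w - patch + 1, stride):
                masked = image.clone()
                masked[:, :, i:i + patch, j:j + patch] = 0.0
                drop = base - model(masked)[0, target_class].item()
                saliency[i // stride, j // stride] = drop
    return saliency  # larger values mark regions the prediction depends on

# Toy usage on a random image tensor (1 x 3 x 224 x 224)
img = torch.rand(1, 3, 224, 224)
pred_class = backbone(img).argmax(dim=1).item()
print(occlusion_saliency(backbone, img, pred_class).shape)
```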

    A Review of Bias and Fairness in Artificial Intelligence

    Automating decision systems has led to hidden biases in the use of artificial intelligence (AI). Consequently, explaining these decisions and identifying responsibilities has become a challenge. As a result, a new field of research on algorithmic fairness has emerged. In this area, detecting biases and mitigating them is essential to ensure fair and discrimination-free decisions. This paper contributes: (1) a categorization of biases and how these are associated with different phases of an AI model's development (including the data-generation phase); (2) a review of fairness metrics for auditing the data and the AI models trained with them (considering model-agnostic approaches when focusing on fairness); and (3) a novel taxonomy of the procedures to mitigate biases in the different phases of an AI model's development (pre-processing, training, and post-processing), with the addition of transversal actions that help to produce fairer models.
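
    For concreteness, the Python sketch below computes two widely used fairness metrics of the kind such audits rely on, demographic parity difference and equal opportunity difference; these are illustrative examples rather than the specific metrics catalogued in the paper, and the data are synthetic.

```python
# Two common group-fairness metrics (illustrative; not the paper's own catalogue).
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between two groups (coded 0/1)."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates between two groups (coded 0/1)."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(1) - tpr(0))

# Toy usage with synthetic predictions and a binary protected attribute
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < 0.5 + 0.1 * group).astype(int)  # biased toward group 1
print(demographic_parity_diff(y_pred, group),
      equal_opportunity_diff(y_true, y_pred, group))
```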

    A Neutrosophic Clinical Decision-Making System for Cardiovascular Diseases Risk Analysis

    Cardiovascular diseases are the leading cause of death worldwide. Early diagnosis of heart disease can reduce this large number of deaths by allowing treatment to be carried out in time. Many decision-making systems have been developed, but they are too complex for medical professionals. To address these objectives, we develop an explainable neutrosophic clinical decision-making system for the timely diagnosis of cardiovascular disease risk. We make our system transparent and easy to understand with the help of explainable artificial intelligence techniques so that medical professionals can easily adopt it. Our system takes thirty-five symptoms as input parameters: gender, age, genetic disposition, smoking, blood pressure, cholesterol, diabetes, body mass index, depression, unhealthy diet, metabolic disorder, physical inactivity, pre-eclampsia, rheumatoid arthritis, coffee consumption, pregnancy, rubella, drugs, tobacco, alcohol, heart defect, previous surgery/injury, thyroid, sleep apnea, atrial fibrillation, heart history, infection, homocysteine level, pericardial cysts, Marfan syndrome, syphilis, inflammation, clots, cancer, and electrolyte imbalance. It then determines the risk of coronary artery disease, cardiomyopathy, congenital heart disease, heart attack, heart arrhythmia, peripheral artery disease, aortic disease, pericardial disease, deep vein thrombosis, heart valve disease, and heart failure. The system has five main modules: neutrosophication, knowledge base, inference engine, de-neutrosophication, and explainability. To demonstrate the complete working of our system, we design an algorithm and calculate its time complexity. We also present a new de-neutrosophication formula and compare our results with existing methods.
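
    The Python sketch below illustrates the neutrosophic representation and a generic de-neutrosophication step using one commonly used score function, (2 + T - I - F)/3; it is not the paper's new formula, and the symptom values and the simple averaging aggregation are illustrative assumptions.

```python
# A hedged sketch of neutrosophic values and a generic de-neutrosophication step.
from dataclasses import dataclass

@dataclass
class NeutrosophicValue:
    t: float  # degree of truth
    i: float  # degree of indeterminacy
    f: float  # degree of falsity

def deneutrosophy(v: NeutrosophicValue) -> float:
    """A commonly used score function: rewards truth, penalizes indeterminacy
    and falsity, rescaled to [0, 1]. Not the paper's proposed formula."""
    return (2 + v.t - v.i - v.f) / 3

# Toy usage: aggregate a few symptom assessments into a crisp risk score
symptoms = {
    "high_blood_pressure": NeutrosophicValue(0.8, 0.1, 0.2),
    "smoking": NeutrosophicValue(0.6, 0.3, 0.4),
    "diabetes": NeutrosophicValue(0.2, 0.2, 0.9),
}
risk = sum(deneutrosophy(v) for v in symptoms.values()) / len(symptoms)
print(f"crisp cardiovascular risk score: {risk:.2f}")
```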

    Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability

    Artificial Intelligence (AI) is rapidly integrating into various aspects of our daily lives, influencing decision-making processes in areas such as targeted advertising and matchmaking algorithms. As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial. Functional transparency is a fundamental aspect of algorithmic decision-making systems, allowing stakeholders to comprehend the inner workings of these systems and enabling them to evaluate their fairness and accuracy. However, achieving functional transparency poses significant challenges that need to be addressed. In this paper, we propose a design for user-centered, compliant-by-design transparency in transparent systems. We emphasize that the development of transparent and explainable AI systems is a complex and multidisciplinary endeavor, necessitating collaboration among researchers from diverse fields such as computer science, artificial intelligence, ethics, law, and social science. By providing a comprehensive understanding of the challenges associated with transparency in AI systems and proposing a user-centered design framework, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values. Comment: Hosain, M. T., Anik, M. H., Rafi, S., Tabassum, R., Insia, K. & Sıddıky, M. M. Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability. Journal of Metaverse, 3(2), 166-180. DOI: 10.57019/jmv.130668