
    Toward Human-Understandable, Explainable AI

    Recent increases in computing power, coupled with rapid growth in the availability and quantity of data, have rekindled our interest in the theory and applications of artificial intelligence (AI). However, for AI to be confidently rolled out by industries and governments, users want greater transparency through explainable AI (XAI) systems. The author introduces XAI concepts and gives an overview of areas in need of further exploration, such as type-2 fuzzy logic systems, to ensure such systems can be fully understood and analyzed by the lay user.
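    A minimal sketch may help make the type-2 notion concrete. In an interval type-2 fuzzy set, each input maps to an interval of membership grades (the footprint of uncertainty) rather than a single grade; the "warm" temperature set and its bounds below are invented for illustration.

        # Interval type-2 fuzzy set sketch: membership is an interval, not a point.
        def tri(x, a, b, c):
            """Type-1 triangular membership peaking at b on support [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def warm_it2(x):
            # Hypothetical "warm" set: two type-1 triangles bound the true grade.
            upper = tri(x, 15.0, 22.0, 30.0)
            lower = 0.8 * tri(x, 17.0, 22.0, 28.0)
            return lower, upper

        print(warm_it2(20.0))  # (0.48, 0.714...): an interval of membership grades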

    Privacy-Preserving Gesture Recognition with Explainable Type-2 Fuzzy Logic Based Systems

    Smart homes are a growing market in need of privacy-preserving sensors paired with explainable, interpretable, and reliable control systems. The recent boom in artificial intelligence (AI) has seen an ever-growing drive to incorporate it into all spheres of human life, including the household. This growth in AI has been met with reciprocal concern about its privacy impacts and reluctance to introduce sensors, such as cameras, into homes. This concern has led to research on sensors not traditionally found in households, mainly short-range radar. There has also been increasing awareness of AI transparency and explainability. Traditional black-box AI models are not trusted, despite boasting high accuracy scores, because it is impossible to understand what their decisions were based on. Interval type-2 fuzzy logic offers a powerful alternative, achieving close to black-box levels of performance while remaining completely interpretable. This paper presents a privacy-preserving short-range radar sensor coupled with an explainable AI system employing a Big Bang-Big Crunch (BB-BC) Interval Type-2 Fuzzy Logic System (FLS) to classify gestures performed in an indoor environment.
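    The inference step of such a classifier can be sketched briefly. In the hedged Python sketch below, each rule fires with an interval obtained by the min t-norm over its antecedents' membership intervals, and the class with the strongest firing wins; the radar features, rule base, and numbers are hypothetical, and the BB-BC optimization that tunes the membership functions is not shown.

        # Interval type-2 FLS inference sketch (toy rule base, invented numbers).
        def rule_firing(antecedents):
            """antecedents: list of (lower, upper) membership intervals."""
            f_lower = min(lo for lo, up in antecedents)
            f_upper = min(up for lo, up in antecedents)
            return f_lower, f_upper

        rules = [
            {"label": "swipe", "antecedents": [(0.6, 0.9), (0.5, 0.8)]},
            {"label": "push",  "antecedents": [(0.2, 0.4), (0.7, 0.9)]},
        ]
        scores = {}
        for rule in rules:
            lo, up = rule_firing(rule["antecedents"])
            centre = (lo + up) / 2.0  # crude type-reduction surrogate
            scores[rule["label"]] = max(scores.get(rule["label"], 0.0), centre)
        print(max(scores, key=scores.get))  # -> "swipe"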

    Towards a Fuzzy Context Logic

    A key step towards trustworthy, reliable, and explainable AI is bridging the gap between the quantitative domain of sensor-actuator systems and the qualitative domain of intelligent systems reasoning. Fuzzy logic is a well-known formalism suited to bridging this gap, featuring a quantitative mechanism that at the same time adheres to logical principles. Context logic is a two-layered logical language originally aimed at pervasive computing systems, for reasoning about and within context, i.e., changing logical environments. Both logical languages are linguistically motivated. This chapter uncovers the close connection between the two languages, presenting two new results. First, a proof is presented that context logic with a lattice semantics can be understood as an extension of fuzzy logic. Second, a fuzzification of context logic is proposed. The resulting language, which can be understood as a two-layered fuzzy logic or as a fuzzified context logic, expands both fields in a novel manner.
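    The first result is easiest to picture on the truth lattice [0, 1]. The sketch below (illustrative only, not the chapter's proof) shows the standard fuzzy connectives as lattice operations, which is the sense in which a lattice-valued context logic can subsume fuzzy logic: meet and join are min and max, and the Goedel residuum serves as implication.

        # Fuzzy connectives as operations on the truth lattice [0, 1].
        def meet(a, b):      # conjunction = lattice meet
            return min(a, b)

        def join(a, b):      # disjunction = lattice join
            return max(a, b)

        def implies(a, b):   # Goedel residuum: 1 if a <= b, else b
            return 1.0 if a <= b else b

        a, b = 0.7, 0.4
        print(meet(a, b), join(a, b), implies(a, b))  # 0.4 0.7 0.4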

    Explaining computer predictions with augmented appraisal degrees

    An augmented appraisal degree (AAD) has been conceived as a mathematical representation of the connotative meaning in an experience-based evaluation, which depends on a particular experience or knowledge. Aiming to improve the interpretability of computer predictions, we explore the use of AADs to represent evaluations that are performed by a machine to predict the class of a particular object. Hence, we propose a novel method whereby predictions made using a support vector machine classification process are augmented through AADs. An illustrative example, in which the classes of handwritten digits are predicted, shows how the augmentation of such predictions can favor their interpretability.
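    The flavour of the method can be suggested with a small sketch. The Python below (using scikit-learn's digits data) pairs each SVM prediction with a normalised decision margin; this crude degree is only a stand-in for the paper's AADs, which additionally encode the experience the evaluation is based on.

        # Attaching a confidence-like degree to SVM digit predictions.
        from sklearn import datasets, svm

        digits = datasets.load_digits()
        X, y = digits.data, digits.target
        clf = svm.SVC(decision_function_shape="ovr").fit(X[:-10], y[:-10])

        for x in X[-10:]:
            margins = clf.decision_function([x])[0]      # one score per class
            pred = margins.argmax()
            degree = margins[pred] / abs(margins).sum()  # crude appraisal-like degree
            print(f"predicted {pred} with degree {degree:.2f}")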

    Enabling Explainable Fusion in Deep Learning with Fuzzy Integral Neural Networks

    Information fusion is an essential part of numerous engineering systems and biological functions, e.g., human cognition. Fusion occurs at many levels, ranging from the low-level combination of signals to the high-level aggregation of heterogeneous decision-making processes. While the last decade has witnessed an explosion of research in deep learning, fusion in neural networks has not seen the same revolution. Specifically, most neural fusion approaches are ad hoc, poorly understood, distributed rather than localized, and/or offer little (if any) explainability. Herein, we prove that the fuzzy Choquet integral (ChI), a powerful nonlinear aggregation function, can be represented as a multi-layer network, referred to hereafter as ChIMP. We also put forth an improved ChIMP (iChIMP) that leads to stochastic gradient descent-based optimization in light of the exponential number of ChI inequality constraints. An additional benefit of ChIMP/iChIMP is that it enables eXplainable AI (XAI). Synthetic validation experiments are provided, and iChIMP is applied to the fusion of a set of heterogeneous-architecture deep models in remote sensing. We show an improvement in model accuracy, and our previously established XAI indices shed light on the quality of our data, model, and its decisions. (Comment: IEEE Transactions on Fuzzy Systems.)
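    The discrete ChI that ChIMP unrolls into network layers is compact enough to state in a few lines. In the sketch below, the fuzzy measure g is a toy example given by hand; iChIMP instead learns such a measure by stochastic gradient descent under the exponential number of ChI inequality (monotonicity) constraints the abstract mentions, which are not enforced here.

        # Discrete (fuzzy) Choquet integral with respect to a fuzzy measure g.
        def choquet(h, g):
            """h: dict source -> value; g: dict frozenset of sources -> measure."""
            order = sorted(h, key=h.get, reverse=True)  # sources by descending value
            total, prev = 0.0, frozenset()
            for src in order:
                cur = prev | {src}
                total += h[src] * (g[cur] - g[prev])    # value times measure increment
                prev = cur
            return total

        g = {frozenset(): 0.0, frozenset("a"): 0.4,
             frozenset("b"): 0.5, frozenset("ab"): 1.0}
        print(choquet({"a": 0.9, "b": 0.3}, g))  # 0.9*0.4 + 0.3*0.6 = 0.54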

    Design and development of a fuzzy explainable expert system for a diagnostic robot of COVID-19

    Expert systems have been widely used in medicine to diagnose different diseases. However, these rule-based systems only explain why and how their outcomes are reached. Moreover, the rules leading to those outcomes are expressed in a machine language and face the familiar problems of coverage and specificity, which prevents such expert systems from providing fully human-understandable explanations. Furthermore, early diagnosis involves a high degree of uncertainty and vagueness, which constitutes another challenge to overcome in this study. This paper aims to design and develop a fuzzy explainable expert system for coronavirus disease 2019 (COVID-19) diagnosis that could be incorporated into medical robots. The proposed medical robotic application deduces the likelihood level of contracting COVID-19 from the entered symptoms, personal information, and the patient's activities. The proposal integrates fuzzy logic to deal with uncertainty and vagueness in diagnosis. Besides, it adopts a hybrid explainable artificial intelligence (XAI) technique to provide different forms of explanation. In particular, textual explanations are generated as rules expressed in natural language while avoiding the coverage and specificity problems. The proposal could therefore help overwhelmed hospitals during epidemic propagation and help avoid contamination, using a solution with a high level of explainability.
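    The pipeline such a system rests on, fuzzify, infer, defuzzify, can be sketched in a few lines. The membership functions, rule, and variable names below are invented for illustration; the paper's actual rule base, symptom set, and hybrid XAI layer are not reproduced.

        # Fuzzify -> infer -> defuzzify sketch with one invented rule.
        def tri(x, a, b, c):
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def covid_risk(temp_c, contact_days_ago):
            fever = tri(temp_c, 37.0, 39.0, 41.0)      # "temperature is feverish"
            recent = tri(contact_days_ago, -1, 0, 14)  # "contact is recent"
            firing = min(fever, recent)                # IF fever AND recent THEN risk high
            risk = 0.9 * firing                        # toy defuzzification onto a 0.9 peak
            explanation = (f"risk {risk:.2f}: fever={fever:.2f}, "
                           f"recent contact={recent:.2f}")
            return risk, explanation

        print(covid_risk(38.5, 3))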

    Logic-based Technologies for Intelligent Systems: State of the Art and Perspectives

    Together with the disruptive development of modern sub-symbolic approaches to artificial intelligence (AI), symbolic approaches to classical AI are regaining momentum, as more and more researchers exploit their potential to make AI more comprehensible, explainable, and therefore trustworthy. Since logic-based approaches lie at the core of symbolic AI, summarizing their state of the art is of paramount importance, now more than ever, in order to identify the trends, benefits, key features, gaps, and limitations of the techniques proposed so far, as well as promising research perspectives. Along this line, this paper provides an overview of logic-based approaches and technologies, sketching their evolution and pointing out their main application areas. Future perspectives for the exploitation of logic-based technologies are discussed as well, in order to identify the research fields that deserve more attention, considering both the areas that already exploit logic-based approaches and those that are more likely to adopt them in the future.

    A Neutrosophic Clinical Decision-Making System for Cardiovascular Diseases Risk Analysis

    Cardiovascular diseases are the leading cause of death worldwide. Early diagnosis of heart disease can reduce this large number of deaths by allowing treatment to be carried out in time. Many decision-making systems have been developed, but they are too complex for medical professionals. To address these objectives, we develop an explainable neutrosophic clinical decision-making system for the timely diagnosis of cardiovascular disease risk. We make our system transparent and easy to understand with the help of explainable artificial intelligence techniques so that medical professionals can easily adopt it. Our system takes thirty-five symptoms as input parameters: gender, age, genetic disposition, smoking, blood pressure, cholesterol, diabetes, body mass index, depression, unhealthy diet, metabolic disorder, physical inactivity, pre-eclampsia, rheumatoid arthritis, coffee consumption, pregnancy, rubella, drugs, tobacco, alcohol, heart defect, previous surgery/injury, thyroid, sleep apnea, atrial fibrillation, heart history, infection, homocysteine level, pericardial cysts, Marfan syndrome, syphilis, inflammation, clots, cancer, and electrolyte imbalance. It estimates the risk of coronary artery disease, cardiomyopathy, congenital heart disease, heart attack, heart arrhythmia, peripheral artery disease, aortic disease, pericardial disease, deep vein thrombosis, heart valve disease, and heart failure. The system comprises five main modules: neutrosophication, knowledge base, inference engine, de-neutrosophication, and explainability. To demonstrate the complete working of our system, we design an algorithm and calculate its time complexity. We also present a new de-neutrosophication formula and compare our results with existing methods.
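    For readers unfamiliar with the neutrosophic setting, the sketch below shows the shape of the data such a system manipulates: a triple (T, I, F) of truth, indeterminacy, and falsity degrees, each in [0, 1]. The score function used for de-neutrosophication here is a standard one from the literature, chosen as a placeholder; the paper's new formula is not reproduced, and the risk values are invented.

        # Neutrosophic values and a standard score-based de-neutrosophication.
        from typing import NamedTuple

        class Neutrosophic(NamedTuple):
            t: float  # truth-membership
            i: float  # indeterminacy-membership
            f: float  # falsity-membership

        def score(v):
            """Classic score function: higher means stronger overall support."""
            return (v.t + (1.0 - v.i) + (1.0 - v.f)) / 3.0

        risks = {"heart attack": Neutrosophic(0.7, 0.2, 0.1),
                 "arrhythmia": Neutrosophic(0.5, 0.4, 0.3)}
        for disease, v in sorted(risks.items(), key=lambda kv: -score(kv[1])):
            print(f"{disease}: crisp risk {score(v):.2f}")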