32 research outputs found

    An Explainable Artificial Intelligence Approach Based on Deep Type-2 Fuzzy Logic System

    Artificial intelligence (AI) systems have benefitted from the easy availability of computing power and the rapid increase in the quantity and quality of data, which has led to the widespread adoption of AI techniques across a wide variety of fields. However, the use of complex (or black-box) AI systems such as deep neural networks, support vector machines, etc., can lead to a lack of transparency. This lack of transparency is not specific to deep learning or complex AI algorithms; even otherwise interpretable algorithms such as kernel machines, logistic regression, decision trees, or rule-based algorithms can become difficult to interpret for high-dimensional inputs. The lack of transparency or explainability reduces the effectiveness of AI models in regulated applications (medical, financial, etc.), where it is essential to explain how the model operates and how it arrived at a given prediction. The need for explainability in AI has led to a new line of research that focuses on developing Explainable AI (XAI) techniques. Three main avenues of research are being explored to achieve explainability. The first is deep explanation, which modifies existing deep learning models to add explainability; the proposed methods generally provide details about all the input features that affect the output, typically in a visual format because there may be a large number of features. This type of explanation is useful for tasks such as image recognition, but in other tasks it can be hard to distinguish the most important features. The second is model induction, which covers model-agnostic methods, but these may not be suitable for use in regulated applications. The third is to use existing interpretable models such as decision trees and fuzzy logic, but these too can become opaque for high-dimensional data. Hence, this thesis presents a novel AI system that combines the predictive power of deep learning with the interpretability of Interval Type-2 Fuzzy Logic Systems (IT2FLS). The advantages of such a system are, first, the ability to be trained on both labelled and unlabelled data (i.e., mixing supervised and unsupervised learning) and, second, embedded feature selection (i.e., it can be trained with hundreds or thousands of inputs with no need for prior feature selection) while delivering explainable models with small rule bases composed of short rules to maximise interpretability. The proposed model was developed with data from British Telecom (BT). It achieved performance comparable to deep models such as Stacked Autoencoders (SAE) and Convolutional Neural Networks (CNN). On categorical datasets, the model outperformed the SAE by 2%, performed within 2-3% of the CNN, and outperformed the Multi-Layer Perceptron (MLP) and the IT2FLS by 4%. On the regression datasets, the model performed slightly worse than the SAE, MLP and CNN models, but it outperformed the IT2FLS with a 15% lower error. The proposed model achieved excellent interpretability in a survey, where it was rated within 2% of the highly interpretable IT2FLS and 20% and 17% better than the deep learning XAI tools LIME and SHAP, respectively. The proposed model trades a small loss in performance for significantly higher interpretability, making it a suitable replacement for the other AI models in applications with many features where interpretability is paramount.
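
    As background for the architecture named above, the sketch below illustrates the interval type-2 inference idea in Python: each fuzzy set carries a lower and an upper membership function, each rule fires over an interval, and a type-reduction step collapses the interval output to a crisp value. It is a minimal sketch only: it uses Gaussian sets with an uncertain standard deviation and a simplified Nie-Tan style closed-form reduction (the thesis does not publish its implementation here, and IT2FLS are often type-reduced with the Karnik-Mendel procedure instead); all rule parameters are invented.

```python
import numpy as np

def it2_gaussian(x, mean, sigma_lower, sigma_upper):
    """Interval type-2 Gaussian set: an uncertain standard deviation
    yields a lower and an upper membership value for each input x."""
    lower = np.exp(-0.5 * ((x - mean) / sigma_lower) ** 2)
    upper = np.exp(-0.5 * ((x - mean) / sigma_upper) ** 2)
    return lower, upper  # lower <= upper whenever sigma_lower <= sigma_upper

def fire_rule(inputs, antecedents):
    """Firing interval of one rule: the min t-norm applied separately
    to the lower and upper memberships of every antecedent."""
    lowers, uppers = zip(*(it2_gaussian(x, *a) for x, a in zip(inputs, antecedents)))
    return min(lowers), min(uppers)

def nie_tan_output(firing_intervals, consequents):
    """Simplified (Nie-Tan style) type reduction: average each firing
    interval, then take the weighted mean of the rule consequents."""
    weights = np.array([(lo + hi) / 2.0 for lo, hi in firing_intervals])
    return float(np.dot(weights, consequents) / weights.sum())

# Two illustrative rules over two inputs; (mean, sigma_lower, sigma_upper)
# per antecedent, singleton consequents. All numbers are made up.
rules = [
    ([(0.2, 0.08, 0.15), (0.7, 0.10, 0.20)], 0.9),  # IF x1 LOW AND x2 HIGH THEN 0.9
    ([(0.8, 0.08, 0.15), (0.3, 0.10, 0.20)], 0.1),  # IF x1 HIGH AND x2 LOW THEN 0.1
]
x = [0.25, 0.65]
intervals = [fire_rule(x, ants) for ants, _ in rules]
print(nie_tan_output(intervals, [c for _, c in rules]))
```

    Short rules of this form are what keeps a rule base readable: each antecedent corresponds to a linguistic label, so a prediction can be traced back to a handful of human-readable conditions.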

    Privacy-Preserving Gesture Recognition with Explainable Type-2 Fuzzy Logic Based Systems

    Smart homes are a growing market in need of privacy-preserving sensors paired with explainable, interpretable and reliable control systems. The recent boom in Artificial Intelligence (AI) has seen an ever-growing push to incorporate it in all spheres of human life, including the household. This growth in AI has been met with reciprocal concern about its privacy impact and reluctance to introduce sensors, such as cameras, into homes. This concern has led to research on sensors not traditionally found in households, mainly short-range radar. There has also been increasing awareness of AI transparency and explainability. Traditional black-box AI models are not trusted, despite boasting high accuracy scores, because it is impossible to understand what their decisions were based on. Interval Type-2 Fuzzy Logic offers a powerful alternative, achieving close to black-box levels of performance while remaining completely interpretable. This paper presents a privacy-preserving short-range radar sensor coupled with an Explainable AI system employing a Big Bang Big Crunch (BB-BC) Interval Type-2 Fuzzy Logic System (FLS) to classify gestures performed in an indoor environment.
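
    For readers unfamiliar with the Big Bang Big Crunch optimiser named above, the sketch below shows the core loop: each "Big Bang" scatters candidate solutions around the current fitness-weighted centre of mass, and the spread contracts as iterations proceed (the "Big Crunch"). The cost function, bounds and decay schedule are illustrative stand-ins, not taken from the paper, where the candidates would encode fuzzy-set parameters and the cost would be classification error.

```python
import numpy as np

def bb_bc_minimise(cost, dim, bounds, pop_size=50, iters=100, seed=0):
    """Minimal Big Bang Big Crunch sketch for a box-bounded problem."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))  # initial Big Bang
    best, best_cost = None, np.inf
    for k in range(1, iters + 1):
        costs = np.array([cost(p) for p in pop])
        i = costs.argmin()
        if costs[i] < best_cost:
            best, best_cost = pop[i].copy(), costs[i]
        # Big Crunch: fitness-weighted centre of mass (lower cost => heavier).
        w = 1.0 / (costs - costs.min() + 1e-9)
        centre = (w[:, None] * pop).sum(axis=0) / w.sum()
        # Next Big Bang: scatter around the centre with a 1/k shrinking spread.
        pop = (centre + rng.normal(size=(pop_size, dim)) * (hi - lo) / k).clip(lo, hi)
    return best, best_cost

# Toy usage: recover a 2-parameter optimum with a placeholder cost.
best, c = bb_bc_minimise(lambda p: ((p - 0.3) ** 2).sum(), dim=2, bounds=(0.0, 1.0))
print(best, c)
```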

    A Type-2 Fuzzy Based Explainable AI System for Predictive Maintenance within the Water Pumping Industry

    Industrial maintenance has undergone a paradigm shift due to the emergence of artificial intelligence (AI), the Internet of Things (IoT), and cloud computing. Rather than accepting the drawbacks of reactive maintenance, leading firms worldwide are embracing "predict-and-prevent" maintenance. However, black-box AI models are too sophisticated and complex for the average user to comprehend and explain. This limits the use of AI in predictive maintenance, where it is vital to understand and evaluate a model before deployment, and equally important to understand the maintenance system's decisions. This paper presents a type-2 fuzzy-based Explainable AI (XAI) system for predictive maintenance within the water pumping industry. The proposed system is optimised via Big-Bang Big-Crunch (BB-BC), which maximises the model's accuracy in predicting faults while also maximising its interpretability. We evaluated the proposed system on water pumps using real-time data obtained by our hardware placed at real-world locations around the United Kingdom, and compared our model with a Type-1 Fuzzy Logic System (T1FLS), a Multi-Layer Perceptron (MLP) neural network, an effective deep learning method known as stacked autoencoders (SAE), and an interpretable model, decision trees (DT). The proposed system predicted water pumping equipment failures with good accuracy (outperforming the T1FLS accuracy by 8.9% and the DT by 529.2%, while providing results comparable to the SAE and MLP) and interpretability. The system's predictions convey why a specific problem may occur, which leads to better-informed customer visits to reduce equipment failure disturbances. It will be shown that 80.3% of water industry specialists strongly agree with the model's explanations, indicating its acceptance.
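
    The abstract states that BB-BC maximises fault-prediction accuracy while maximising interpretability but does not give the objective function. One common way to scalarise such a trade-off is to reward accuracy and penalise rule-base size and rule length; the sketch below is a hypothetical example of that idea, with invented weights and normalisers.

```python
def fitness(accuracy, n_rules, avg_rule_len, max_rules=30, max_len=10,
            w_acc=0.8, w_interp=0.2):
    """Hypothetical scalarised objective: reward predictive accuracy,
    penalise large rule bases and long rules (lower interpretability).
    Weights and normalisers are illustrative, not from the paper."""
    interpretability = 1.0 - 0.5 * (n_rules / max_rules) - 0.5 * (avg_rule_len / max_len)
    return w_acc * accuracy + w_interp * interpretability

# A compact 8-rule model can beat a slightly more accurate 25-rule one:
print(fitness(0.91, n_rules=8, avg_rule_len=3))   # ~0.87
print(fitness(0.93, n_rules=25, avg_rule_len=7))  # ~0.79
```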

    Proceedings of the 1st Doctoral Consortium at the European Conference on Artificial Intelligence (DC-ECAI 2020)

    1st Doctoral Consortium at the European Conference on Artificial Intelligence (DC-ECAI 2020), 29-30 August 2020, Santiago de Compostela, Spain. The DC-ECAI 2020 provides a unique opportunity for PhD students who are close to finishing their doctoral research to interact with experienced researchers in the field. Senior members of the community are assigned as mentors for each group of students based on the students' research topics or similarity of research interests. The DC-ECAI 2020, which is held virtually this year, allows students from all over the world to present and discuss their ongoing research and career plans with their mentors, to network with other participants, and to receive training and mentoring about career planning and career options.

    Explainable clinical decision support system: opening black-box meta-learner algorithm expert's based

    Mathematical optimisation methods are the basic mathematical tools of all artificial intelligence theory. In machine learning and deep learning, the examples from which algorithms learn (training data) feed sophisticated cost functions, which can be solved in closed form or through approximations. The interpretability of the models used, and their relative transparency as opposed to the opacity of black boxes, is related to how the algorithm learns, which occurs through the optimisation and minimisation of the errors the machine makes during the learning process. In particular, the present work introduces a new method for determining the weights in an ensemble model, supervised or unsupervised, based on the well-known Analytic Hierarchy Process (AHP). The method rests on the idea that behind the choice among the different possible algorithms for a machine learning problem there is an expert who controls the decision-making process. The expert assigns a complexity score to each algorithm (based on the complexity-interpretability trade-off), from which the weight with which each model contributes to the training and prediction phases is determined. In addition, different methods are presented to evaluate the performance of these algorithms and to explain how each feature in the model contributes to the prediction of the outputs. The interpretability techniques used in machine learning are also combined with the AHP-based method in the context of clinical decision support systems, in order to make the (black-box) algorithms and their results interpretable and explainable, so that clinical decision-makers can take controlled decisions in line with the "right to explanation" introduced by the legislator, since decision-makers bear civil and legal responsibility for choices made in the clinical field using systems based on artificial intelligence. No less central is the interaction between the expert who controls the algorithm construction process and the domain expert, in this case the clinician. Three applications on real data are implemented with methods known in the literature and with those proposed in this work: one concerns cervical cancer, another the problem of diabetes, and the last focuses on a specific pathology developed by HIV-infected individuals. All applications are supported by plots, tables and explanations of the results, implemented through Python libraries. The main case study of this thesis, on HIV-infected individuals, concerns an unsupervised ensemble problem in which a series of clustering algorithms is applied to a set of features; their outputs are in turn used as meta-features to provide a set of labels for each cluster. The meta-features and labels obtained by choosing the best algorithm are used to train a logistic regression meta-learner, which is then queried through explainability methods to quantify the contribution each algorithm made in the training phase. The use of logistic regression as the meta-learner classifier is motivated by the fact that it provides appreciable results and by the easy explainability of its estimated coefficients.
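
    The AHP weighting step described above can be made concrete with a short sketch: the expert fills a pairwise comparison matrix on Saaty's 1-9 scale, and the ensemble weights are read off the normalised principal eigenvector, followed by the usual consistency check. The matrix entries and the mapping to specific algorithms below are hypothetical.

```python
import numpy as np

# Hypothetical expert pairwise comparisons of three base algorithms on
# the complexity-interpretability trade-off (Saaty's 1-9 scale):
# A[i, j] encodes how strongly algorithm i is preferred over algorithm j.
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
])

# AHP priority vector: the normalised principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
v = np.real(eigvecs[:, eigvals.real.argmax()])
weights = v / v.sum()
print(weights)  # ensemble weights, roughly [0.63, 0.26, 0.11]

# Saaty consistency ratio: CI / RI, with random index RI = 0.58 for n = 3.
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
print("consistency ratio:", ci / 0.58)  # < 0.1 counts as consistent
```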

    Tree-Based Classifier Ensembles for PE Malware Analysis: A Performance Revisit

    Given the escalating number and variety of malware, combating it is becoming increasingly strenuous. Machine learning techniques are often used in the literature to automatically discover the models and patterns behind such challenges and to create solutions that can keep up with the rapid pace at which malware evolves. This article compares various tree-based ensemble learning methods that have been proposed for the analysis of PE malware. A tree-based ensemble is an unconventional learning paradigm that constructs and combines a collection of base learners (e.g., decision trees), as opposed to the conventional learning paradigm, which aims to construct individual learners from training data. Several tree-based ensemble techniques, such as random forest, XGBoost, CatBoost, GBM, and LightGBM, are considered and appraised using different performance measures, such as accuracy, MCC, precision, recall, AUC, and F1. In addition, the experiments include several public datasets, such as BODMAS, Kaggle, and CIC-MalMem-2022, to demonstrate the generalizability of the classifiers in a variety of contexts. Based on the test findings, all tree-based ensembles performed well, and the performance differences between algorithms were not statistically significant, particularly when their respective hyperparameters were appropriately configured. The tree-based ensemble techniques also outperformed other, similar PE malware detectors published in recent years.
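
    A minimal sketch of this kind of comparison, using scikit-learn's random forest and gradient boosting on synthetic stand-in data (the article itself evaluates on BODMAS, Kaggle and CIC-MalMem-2022; XGBoost, CatBoost and LightGBM plug into the same loop through their scikit-learn-compatible classifiers):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, matthews_corrcoef,
                             precision_score, recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a PE-malware feature table (benign vs. malicious).
X, y = make_classification(n_samples=4000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]  # scores for the AUC metric
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"mcc={matthews_corrcoef(y_te, pred):.3f}",
          f"prec={precision_score(y_te, pred):.3f}",
          f"rec={recall_score(y_te, pred):.3f}",
          f"auc={roc_auc_score(y_te, proba):.3f}",
          f"f1={f1_score(y_te, pred):.3f}")
```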

    Trusted Artificial Intelligence in Manufacturing

    The successful deployment of AI solutions in manufacturing environments hinges on their security, safety and reliability, which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems on the shop floor, many challenges must be addressed in complex, heterogeneous, dynamic and unpredictable environments. Specifically, data reliability, human-machine interaction, security, transparency and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural network security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems, provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the challenges listed above, fifteen European organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission under its H2020 program (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques and systems for trusted, ethical, and secure AI in manufacturing. The chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, and automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art, towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open access publication, making it broadly and freely available to the AI and smart manufacturing communities.

    Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries

    As new technologies for energy-intensive industries continue to evolve, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation force these traditional plants towards decommissioning and shutdown. Process improvement and retrofit projects are therefore essential to sustaining the operational performance of these plants. The current approaches to process improvement are mainly process integration, process optimisation and process intensification, and they generally rely on mathematical optimisation, practitioner experience and operational heuristics. These approaches form the foundation of process improvement, but their performance can be further enhanced by modern computational intelligence. The purpose of this thesis is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The thesis tackles the problem through the simulation of industrial systems and makes the following contributions: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modelling and optimisation of individual units; (ii) application of dimensionality reduction (e.g., Principal Component Analysis, autoencoders) for multi-objective optimisation of multi-unit processes (see the sketch below); (iii) design of a new tool for analysing the problematic parts of a system so that they can be removed (bottleneck tree analysis, BOTA), together with a proposed extension that handles multi-dimensional problems using a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks and decision trees for decision-making when integrating new process technology into existing processes; (v) comparison of Hierarchical Temporal Memory (HTM) and dual optimisation with several predictive tools for supporting real-time operations management; (vi) implementation of an artificial neural network within an interface for the conventional process graph (P-graph); and (vii) an outline of the future of artificial intelligence and process engineering in biosystems through a commercially driven multi-omics paradigm. Keywords: industrial process improvement, data-driven modelling, process optimisation, machine learning, industrial systems, energy-intensive industries, artificial intelligence.
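
    Contribution (ii), dimensionality reduction ahead of multi-unit, multi-objective optimisation, can be illustrated with a short PCA sketch; the data below are a synthetic stand-in for correlated plant sensor tags, and the thesis also considers autoencoders for the same role.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for multi-unit process data: 40 correlated sensor
# tags driven by 3 underlying process factors, plus measurement noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 3))            # 3 true drivers
mixing = rng.normal(size=(3, 40))              # 40 observed tags
tags = latent @ mixing + 0.1 * rng.normal(size=(1000, 40))

# Keep however many components explain 95% of the variance; downstream
# multi-objective optimisation then works in this low-dimensional space.
pca = PCA(n_components=0.95)
scores = pca.fit_transform(tags)
print(scores.shape, pca.explained_variance_ratio_.round(3))
```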

    Advances in Computational Intelligence Applications in the Mining Industry

    This book captures advancements in the applications of computational intelligence (artificial intelligence, machine learning, etc.) to problems in the mineral and mining industries. The papers present the state of the art in four broad categories: mine operations, mine planning, mine safety, and advances in the sciences, primarily in image processing applications. The authors include both researchers and industry practitioners.