
    Privacy-Preserving Gesture Recognition with Explainable Type-2 Fuzzy Logic Based Systems

    Smart homes are a growing market in need of privacy-preserving sensors paired with explainable, interpretable and reliable control systems. The recent boom in Artificial Intelligence (AI) has seen an ever-growing push to incorporate it into all spheres of human life, including the household. This growth in AI has been met with reciprocal concern over its privacy impacts and reluctance to introduce sensors, such as cameras, into homes. This concern has led to research into sensors not traditionally found in households, chiefly short-range radar. There has also been increasing awareness of AI transparency and explainability. Traditional black-box AI models are not trusted, despite boasting high accuracy scores, because it is impossible to understand what their decisions were based on. Interval Type-2 Fuzzy Logic offers a powerful alternative, achieving close to black-box levels of performance while remaining completely interpretable. This paper presents a privacy-preserving short-range radar sensor coupled with an Explainable AI system employing a Big Bang Big Crunch (BB-BC) Interval Type-2 Fuzzy Logic System (FLS) to classify gestures performed in an indoor environment.
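    The abstract does not include an implementation, but the two ingredients it names, an interval type-2 fuzzy rule base and Big Bang Big Crunch optimisation, can be sketched. The Python snippet below is a minimal illustration under assumed details, not the authors' system: the Gaussian membership functions, the min t-norm conjunction, the simplified type reduction and the shape of the BB-BC loop are all choices made for this example.

```python
import numpy as np

# Interval type-2 Gaussian membership function with an uncertain width:
# the footprint of uncertainty is bounded by a lower and an upper MF
# (sigma_lo < sigma_hi guarantees lower <= upper everywhere).
def it2_gaussian(x, centre, sigma_lo, sigma_hi):
    lower = np.exp(-0.5 * ((x - centre) / sigma_lo) ** 2)
    upper = np.exp(-0.5 * ((x - centre) / sigma_hi) ** 2)
    return lower, upper

# Fire one rule: AND the per-feature membership intervals with the min
# t-norm, keeping the result as a firing interval [f_lo, f_hi].
def rule_firing(features, antecedents):
    f_lo, f_hi = 1.0, 1.0
    for x, (centre, s_lo, s_hi) in zip(features, antecedents):
        lo, hi = it2_gaussian(x, centre, s_lo, s_hi)
        f_lo, f_hi = min(f_lo, lo), min(f_hi, hi)
    return f_lo, f_hi

# Classify by the strongest rule per gesture label, collapsing each firing
# interval to its midpoint (a deliberately simplified type reduction).
def classify(features, rules):
    scores = {}
    for antecedents, label in rules:
        f_lo, f_hi = rule_firing(features, antecedents)
        scores[label] = max(scores.get(label, 0.0), 0.5 * (f_lo + f_hi))
    return max(scores, key=scores.get)

# Big Bang Big Crunch: scatter candidate parameter vectors (Big Bang),
# pull them to a fitness-weighted centre of mass (Big Crunch), and shrink
# the scatter radius every iteration.
def bb_bc(fitness, dim, iters=50, pop=30, bounds=(0.0, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    centre = rng.uniform(bounds[0], bounds[1], dim)
    best, best_fit = centre, fitness(centre)
    for k in range(1, iters + 1):
        radius = (bounds[1] - bounds[0]) / k
        cands = np.clip(centre + rng.normal(0.0, radius, (pop, dim)),
                        bounds[0], bounds[1])
        fits = np.array([fitness(c) for c in cands])
        weights = fits - fits.min() + 1e-9
        centre = (weights[:, None] * cands).sum(axis=0) / weights.sum()
        if fits.max() > best_fit:
            best, best_fit = cands[fits.argmax()], fits.max()
    return best
```

    In the paper's setting, BB-BC would search over the membership-function parameters, with fitness taken as classification accuracy on the radar gesture data; here the `fitness` callable is left abstract. A rule base for this sketch would pair one `(centre, sigma_lo, sigma_hi)` triple per radar feature with a gesture label, e.g. `rules = [([(0.2, 0.05, 0.1), (0.7, 0.05, 0.1)], "swipe")]`, where the feature values and the label are invented for illustration.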

    An Explainable Artificial Intelligence Approach Based on Deep Type-2 Fuzzy Logic System

    Artificial intelligence (AI) systems have benefitted from the easy availability of computing power and the rapid increase in the quantity and quality of data, which has led to the widespread adoption of AI techniques across a wide variety of fields. However, the use of complex (or black-box) AI systems such as Deep Neural Networks, support vector machines, etc., can lead to a lack of transparency. This lack of transparency is not specific to deep learning or complex AI algorithms; other interpretable AI algorithms such as kernel machines, logistic regression, decision trees, or rule-based algorithms can also become difficult to interpret for high-dimensional inputs. The lack of transparency or explainability reduces the effectiveness of AI models in regulated applications (such as medical, financial, etc.), where it is essential to explain how the model operates and how it arrived at a given prediction. The need for explainability in AI has led to a new line of research focused on developing Explainable AI (XAI) techniques.

    Three main avenues of research are being explored to achieve explainability. The first is Deep Explanations, which modifies existing deep learning models to add explainability. The methods proposed for Deep Explanations generally provide details about all the input features that affect the output, usually in a visual format since there may be a large number of features. This type of explanation is useful for tasks such as image recognition, but in other tasks it can be hard to distinguish the most important features. The second is model induction, which covers model-agnostic methods, but these may not be suitable for use in regulated applications. The third is to use existing interpretable models such as decision trees, fuzzy logic, etc.; the problem is that these, too, can become opaque for high-dimensional data.

    Hence, this thesis presents a novel AI system that combines the predictive power of Deep Learning with the interpretability of Interval Type-2 Fuzzy Logic Systems (IT2FLS). The advantages of such a system are, first, that it can be trained with labelled and unlabelled data (i.e., mixing supervised and unsupervised learning) and, second, that it has embedded feature selection abilities (i.e., it can be trained on hundreds or thousands of inputs with no need for separate feature selection) while delivering explainable models with small rule bases composed of short rules to maximize the model's interpretability.

    The proposed model was developed with data from British Telecom (BT). It achieved performance comparable to deep models such as the Stacked Autoencoder (SAE) and Convolutional Neural Network (CNN). On the categorical datasets, the model outperformed the SAE by 2%, performed within 2-3% of the CNN, and outperformed the Multi-Layer Perceptron (MLP) and the IT2FLS by 4%. On the regression datasets, the model performed slightly worse than the SAE, MLP and CNN models, but outperformed the IT2FLS with a 15% lower error. The proposed model achieved excellent interpretability in a survey, where it was rated within 2% of the highly interpretable IT2FLS, and 20% and 17% better than the deep learning XAI tools LIME and SHAP, respectively. The proposed model trades a small loss in performance for significantly higher interpretability, making it a suitable replacement for the other AI models in applications with many features where interpretability is paramount.
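    The thesis abstract describes the hybrid only at a high level, so the sketch below shows just the general shape of such a system: a feature-compressing encoder (a stand-in for the deep, partly unsupervised front end) feeding a small base of short interval type-2 fuzzy rules that can be printed back as an explanation. All layer sizes, linguistic terms, rule definitions and parameter values are invented for the illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in encoder: in the thesis a deep model (trained on labelled and
# unlabelled data) would compress hundreds of raw inputs into a handful of
# latent features; random weights are used here purely for illustration.
W_enc = rng.normal(0.0, 0.1, (200, 4))
def encode(x):
    return np.tanh(x @ W_enc)           # 200 raw inputs -> 4 latent features

LABELS = ["low", "medium", "high"]      # linguistic terms per latent feature

def it2_membership(z, centre, sigma_lo, sigma_hi):
    # Interval type-2 Gaussian MF: uncertain width gives a lower/upper bound.
    lo = np.exp(-0.5 * ((z - centre) / sigma_lo) ** 2)
    hi = np.exp(-0.5 * ((z - centre) / sigma_hi) ** 2)
    return lo, hi

# A small rule base of short rules over the latent features: each rule uses
# at most two antecedents to stay human-readable.
# Format: list of (feature index, term index) pairs -> consequent value.
RULES = [
    ([(0, 2), (1, 0)], 0.9),            # IF z0 is high AND z1 is low THEN 0.9
    ([(2, 1)], 0.5),                    # IF z2 is medium             THEN 0.5
    ([(3, 0)], 0.1),                    # IF z3 is low                THEN 0.1
]
CENTRES = np.array([-1.0, 0.0, 1.0])    # term centres on the tanh output range
SIGMAS = (0.4, 0.7)                     # (lower, upper) widths

def infer(x):
    z = encode(x)
    num_lo = num_hi = den_lo = den_hi = 1e-9
    for antecedents, consequent in RULES:
        f_lo, f_hi = 1.0, 1.0
        for feat, term in antecedents:
            lo, hi = it2_membership(z[feat], CENTRES[term], *SIGMAS)
            f_lo, f_hi = min(f_lo, lo), min(f_hi, hi)
        num_lo += f_lo * consequent; den_lo += f_lo
        num_hi += f_hi * consequent; den_hi += f_hi
    # Simplified type reduction: average the lower and upper weighted means.
    return 0.5 * (num_lo / den_lo + num_hi / den_hi)

def explain():
    # The rule base doubles as the explanation of the model's behaviour.
    for antecedents, consequent in RULES:
        cond = " AND ".join(f"z{f} is {LABELS[t]}" for f, t in antecedents)
        print(f"IF {cond} THEN output = {consequent}")

explain()
print("prediction:", infer(rng.normal(0.0, 1.0, 200)))
```

    Printing the rule base is what makes the model's reasoning inspectable: each prediction can be traced to a handful of short IF-THEN statements over the latent features, which is the interpretability property the abstract contrasts with post-hoc tools such as LIME and SHAP.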