3 research outputs found

    Fine-tuning the fuzziness of strong fuzzy partitions through PSO

    We study the influence of the fuzziness of trapezoidal fuzzy sets in the strong fuzzy partitions (SFPs) that constitute the database of a fuzzy rule-based classifier. To this end, we develop a particular representation of trapezoidal fuzzy sets based on the concept of cuts, which are the cross-points of the fuzzy sets in an SFP and fix the positions of the fuzzy sets in the universe of discourse. In this way, it is possible to isolate the parameters that characterize the fuzziness of the fuzzy sets, which are then fine-tuned through particle swarm optimization (PSO). In this paper, we propose a formulation of the parameter space that enables the exploration of all possible levels of fuzziness in an SFP. The experimental results show that the impact of fuzziness depends strongly on the defuzzification procedure used in the fuzzy rule-based classifier: fuzziness has little influence in the case of winner-takes-all defuzzification, while it is more influential with weighted-sum defuzzification, which, however, may pose some interpretation problems.
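
    The abstract does not give the paper's exact parameterization, so the sketch below is only a minimal illustration of the cut-based idea: each cut fixes the cross-point of two neighbouring trapezoidal sets, and a per-cut half-width controls how fuzzy the transition around that cut is (zero gives a crisp boundary). The function name, the linear transition, and the half-width parameters are assumptions for illustration, not the paper's formulation; PSO would then search over the half-widths while the cuts stay fixed.

```python
import numpy as np

def sfp_memberships(x, cuts, fuzziness):
    """Trapezoidal strong fuzzy partition defined by its cuts.

    cuts      : sorted cross-points t_1 < ... < t_k (membership 0.5 there)
    fuzziness : half-widths s_i of the transition regions around each cut;
                s_i = 0 gives a crisp boundary, larger s_i gives fuzzier sets
                (transition regions are assumed not to overlap).
    Returns the memberships of x in the k+1 fuzzy sets; by construction
    they sum to 1, the defining property of a strong partition.
    """
    k = len(cuts)
    # membership of x in the fuzzy set lying to the RIGHT of each cut
    right = np.empty(k)
    for i, (t, s) in enumerate(zip(cuts, fuzziness)):
        if s == 0:
            right[i] = 1.0 if x >= t else 0.0
        else:
            right[i] = np.clip((x - (t - s)) / (2.0 * s), 0.0, 1.0)
    mu = np.empty(k + 1)
    mu[0] = 1.0 - right[0]                        # leftmost set
    for j in range(1, k):
        mu[j] = max(right[j - 1] - right[j], 0.0) # inner trapezoids
    mu[k] = right[k - 1]                          # rightmost set
    return mu

# Three fuzzy sets on [0, 10] with cuts at 3 and 7; a PSO particle would
# encode the fuzziness vector, e.g. (1.0, 0.5) below, and be scored by
# the resulting classifier's accuracy.
print(sfp_memberships(3.4, cuts=[3.0, 7.0], fuzziness=[1.0, 0.5]))
```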

    An Explainable Artificial Intelligence Approach Based on Deep Type-2 Fuzzy Logic System

    Artificial intelligence (AI) systems have benefitted from the easy availability of computing power and the rapid increase in the quantity and quality of data, which has led to the widespread adoption of AI techniques across a wide variety of fields. However, the use of complex (or black-box) AI systems such as Deep Neural Networks, support vector machines, etc., can lead to a lack of transparency. This lack of transparency is not specific to deep learning or other complex algorithms such as kernel machines; even interpretable AI algorithms such as logistic regression, decision trees, or rule-based methods can become difficult to interpret for high-dimensional inputs. The lack of transparency or explainability reduces the effectiveness of AI models in regulated applications (such as medical, financial, etc.), where it is essential to explain how the model operates and how it arrived at a given prediction. The need for explainability in AI has led to a new line of research focused on developing Explainable AI (XAI) techniques. Three main avenues of research are being explored to achieve explainability. The first is Deep Explanations, which modifies existing deep learning models to add explainability; the proposed methods generally report all the input features that affect the output, usually in a visual format because of the large number of features. This type of explanation is useful for tasks such as image recognition, but in other tasks it can be hard to distinguish the most important features. The second is model induction, which covers model-agnostic methods, but these might not be suitable for regulated applications. The third is to use existing interpretable models such as decision trees, fuzzy logic, etc., whose problem is that they too can become opaque for high-dimensional data.

    Hence, this thesis presents a novel AI system that combines the predictive power of deep learning with the interpretability of Interval Type-2 Fuzzy Logic Systems (IT2FLS). The advantages of such a system are, first, that it can be trained with labelled and unlabelled data (i.e., mixing supervised and unsupervised learning) and, second, that it has embedded feature selection (i.e., it can be trained on hundreds or thousands of inputs with no need for separate feature selection) while delivering explainable models with small rule bases composed of short rules to maximize interpretability.

    The proposed model was developed with data from British Telecom (BT). It achieved performance comparable to deep models such as the Stacked Autoencoder (SAE) and Convolutional Neural Network (CNN). On categorical datasets, the model outperformed the SAE by 2%, performed within 2-3% of the CNN, and outperformed the Multi-Layer Perceptron (MLP) and IT2FLS by 4%. On the regression datasets, the model performed slightly worse than the SAE, MLP and CNN models, but outperformed the IT2FLS with a 15% lower error. The proposed model achieved excellent interpretability in a survey, where it was rated within 2% of the highly interpretable IT2FLS and 20% and 17% better than the deep learning XAI tools LIME and SHAP, respectively. The proposed model trades a small loss in performance for significantly higher interpretability, making it a suitable replacement for other AI models in applications with many features where interpretability is paramount.
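
    The thesis itself is not reproduced here, so the sketch below only illustrates, under assumed Gaussian primary membership functions, the interval type-2 machinery the proposed model builds on: each fuzzy set yields a lower and an upper membership (its footprint of uncertainty), and a rule fires with an interval rather than a single strength. All names and the choice of Gaussian sets with uncertain width are illustrative assumptions, not the model described in the thesis.

```python
import numpy as np

def it2_membership(x, mean, sigma_lo, sigma_hi):
    """Lower/upper membership of x in an interval type-2 Gaussian fuzzy set.

    The primary Gaussian has a fixed mean and an uncertain width in
    [sigma_lo, sigma_hi]; the two bounds delimit the footprint of uncertainty.
    """
    lower = np.exp(-0.5 * ((x - mean) / sigma_lo) ** 2)   # narrow Gaussian
    upper = np.exp(-0.5 * ((x - mean) / sigma_hi) ** 2)   # wide Gaussian
    return lower, upper

def firing_interval(inputs, antecedents):
    """Interval firing strength of one rule using the min t-norm.

    antecedents: one (mean, sigma_lo, sigma_hi) triple per input variable.
    """
    lowers, uppers = zip(*(it2_membership(x, *a) for x, a in zip(inputs, antecedents)))
    return min(lowers), min(uppers)

# Rule: IF x1 is A1 AND x2 is A2 THEN ... (consequent handling omitted)
rule = [(0.0, 0.8, 1.2), (5.0, 1.5, 2.5)]
print(firing_interval([0.4, 6.0], rule))   # (lower, upper) firing strength
```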

    IMPROVING UNDERSTANDABILITY AND UNCERTAINTY MODELING OF DATA USING FUZZY LOGIC SYSTEMS

    The need for automation, optimality, and efficiency has made modern control and monitoring systems extremely complex and data-abundant. However, the complexity of these systems and the abundance of raw data have reduced the understandability and interpretability of the data, which results in reduced state awareness of the system. Furthermore, the different levels of uncertainty introduced by sensors and actuators make it difficult to interpret and accurately manipulate such systems. Classical mathematical methods lack the capability to capture human knowledge and increase understandability while modeling such uncertainty. Fuzzy Logic has been shown to alleviate both of these problems by introducing logic based on vague terms that rely on human-understandable concepts. The use of linguistic terms and simple consequential rules increases the understandability of system behavior as well as of the data, and the use of vague terms and the modeling of data from non-discrete prototypes enable uncertainty to be represented. However, due to recent trends, the primary research in fuzzy logic has diverged from the basic concept of understandability. Furthermore, the high computational cost of robust uncertainty modeling has restricted the use of such fuzzy systems in real-world applications. Thus, the goal of this dissertation is to present algorithms and techniques that improve understandability and uncertainty modeling using Fuzzy Logic Systems. To achieve this goal, the dissertation makes the following major contributions: 1) a novel methodology for generating Fuzzy Membership Functions based on understandability, 2) Linguistic Summarization of data using if-then type consequential rules, and 3) novel Shadowed Type-2 Fuzzy Logic Systems for uncertainty modeling. Finally, the presented techniques are applied to real-world systems and data to exemplify their relevance and usage.
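
    As a rough illustration of the second contribution (linguistic summarization of data with if-then style statements), the sketch below computes the degree of truth of a Yager-style summary "most of the records are high". The quantifier shape, the term "high", and the sample data are assumptions chosen for illustration, not the dissertation's method.

```python
import numpy as np

def quantifier_most(p):
    """Relative quantifier 'most': 0 below 0.3, 1 above 0.8, linear in between."""
    return float(np.clip((p - 0.3) / 0.5, 0.0, 1.0))

def summary_truth(data, summarizer, quantifier=quantifier_most):
    """Degree of truth of the linguistic summary 'Q of the records are S'.

    data       : 1-D array of raw values
    summarizer : membership function of the linguistic term S (e.g. 'high')
    quantifier : membership function of the relative quantifier Q (e.g. 'most')
    """
    memberships = np.array([summarizer(x) for x in data])
    return quantifier(memberships.mean())

# Example: "most readings are high", with 'high' as a simple ramp on [60, 80]
high = lambda x: float(np.clip((x - 60.0) / 20.0, 0.0, 1.0))
readings = np.array([55.0, 72.0, 81.0, 90.0, 68.0, 77.0])
print(summary_truth(readings, high))   # degree of truth in [0, 1]
```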