
    Comparing the Performance Potentials of Singleton and Non-singleton Type-1 and Interval Type-2 Fuzzy Systems in Terms of Sculpting the State Space

    This paper provides a new and deeper understanding of the performance potential of a nonsingleton (NS) fuzzy system over a singleton (S) fuzzy system. It does this by extending the sculpting-the-state-space works from S to NS fuzzification, and by demonstrating that uncertainties about measurements, modeled by NS fuzzification: first, fire more rules more often, manifested by a reduction (increase) in the sizes of first-order rule partitions for those partitions associated with the firing of a smaller (larger) number of rules (the coarse sculpting of the state space); second, may lead to an increase or decrease in the number of type-1 (T1) and interval type-2 (IT2) first-order rule partitions, which now contain rule pairs that can never occur for S fuzzification (a new rule-crossover phenomenon, discovered using partition theory); and third, may lead to a decrease, the same number, or an increase in the number of second-order rule partitions, all of which are system dependent (the fine sculpting of the state space). The authors' conjecture is that it is the additional control of the coarse sculpting of the state space, accomplished by prefiltering and the max–min (or max-product) composition, that provides an NS T1 or IT2 fuzzy system with the potential to outperform an S T1 or IT2 system when measurements are uncertain.
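    The "fire more rules more often" mechanism is easy to see numerically. The following minimal Python sketch (not from the paper; all centers and spreads are illustrative) compares S and NS firing levels for the same measurement, computing the NS firing degrees with the sup-min composition of a Gaussian input set against Gaussian rule antecedents.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with center c and spread s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Antecedents of three hypothetical rules (centers and spreads are illustrative).
antecedents = [(-1.0, 0.4), (0.0, 0.4), (1.0, 0.4)]

def singleton_firing(x_meas):
    """S fuzzification: each rule fires at the membership grade of the crisp input."""
    return [gauss(x_meas, c, s) for c, s in antecedents]

def nonsingleton_firing(x_meas, noise_spread=0.3, grid=np.linspace(-3, 3, 2001)):
    """NS fuzzification via the sup-min composition: the measurement is itself a
    fuzzy set (here Gaussian, modelling measurement uncertainty); each rule's
    firing degree is the supremum of the minimum of input set and antecedent."""
    input_set = gauss(grid, x_meas, noise_spread)
    return [float(np.max(np.minimum(input_set, gauss(grid, c, s))))
            for c, s in antecedents]

x = -0.9
print("S :", np.round(singleton_firing(x), 4))    # distant rules barely fire
print("NS:", np.round(nonsingleton_firing(x), 4)) # prefiltering raises every level
```

    With these illustrative numbers, the most distant rule's firing level rises from roughly 10^-5 under S fuzzification to roughly 0.025 under NS fuzzification: more rules fire more often, which is the coarse-sculpting effect the abstract describes.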

    Comparing Performance Potentials of Classical and Intuitionistic Fuzzy Systems in Terms of Sculpting the State Space

    This paper provides new application-independent perspectives on the performance potential of an intuitionistic (I-) fuzzy system over a (classical) TSK fuzzy system. It does this by extending the sculpting-the-state-space works from a TSK fuzzy system to an I-fuzzy system. It demonstrates that, for piecewise-linear membership functions (trapezoids and triangles), an I-fuzzy system always has significantly more first-order rule partitions of the state space (the coarse sculpting of the state space) than does a TSK fuzzy system, and that some I-fuzzy systems also have more second-order rule partitions of the state space (the fine sculpting of the state space) than does a TSK fuzzy system. It is the author's conjecture that, for piecewise-linear membership functions (trapezoids and triangles), it is the always-significantly-greater coarse (and possibly fine) sculpting of the state space that provides an I-fuzzy system with the potential to outperform a TSK fuzzy system, and that a type-1 I-fuzzy system has the potential to outperform an interval type-2 fuzzy system.
    Index Terms: intuitionistic fuzzy sets, intuitionistic fuzzy systems, TSK fuzzy systems, rule partitions, sculpting the state space.
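    A toy illustration of why extra non-membership functions create additional first-order partition boundaries is sketched below. It is an assumption-laden simplification (the paper's partition definitions are more involved): the trapezoidal membership and non-membership functions are invented, and a "partition" is counted as a maximal interval over which the set of active functions is constant.

```python
import numpy as np

def trap(x, a, b, c, d):
    """Trapezoidal membership function with feet a, d and shoulders b, c."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

grid = np.linspace(0.0, 10.0, 10001)

# Hypothetical T1 antecedents (membership functions only).
mu = [trap(grid, 0, 2, 4, 6), trap(grid, 4, 6, 8, 10)]
# The intuitionistic counterpart adds non-membership functions (invented shapes).
nu = [trap(grid, 3, 5, 10, 12), trap(grid, -2, 0, 5, 7)]

def count_partitions(signatures):
    """Count maximal intervals over which the set of active functions is constant."""
    sig = np.asarray(signatures, dtype=int)
    changes = np.sum(np.any(sig[:, 1:] != sig[:, :-1], axis=0))
    return int(changes) + 1

t1_sig = [m > 0 for m in mu]          # which rules fire (T1 system)
i_sig = t1_sig + [n > 0 for n in nu]  # extra boundaries from non-membership

print("T1 partitions:     ", count_partitions(t1_sig))  # 5 with these shapes
print("I-fuzzy partitions:", count_partitions(i_sig))   # 7: strictly more
```

    Every breakpoint of a non-membership trapezoid adds a boundary that a purely TSK system does not have, so the I-fuzzy partition count can only match or exceed the T1 count, consistent with the abstract's claim.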

    A Bibliometric Overview of the Field of Type-2 Fuzzy Sets and Systems [Discussion Forum]

    Fuzzy Sets and Systems is an area of computational intelligence pioneered by Lotfi Zadeh over 50 years ago in a seminal paper in Information and Control. Fuzzy Sets (FSs) deal with uncertainty in our knowledge of a particular situation, and research and applications in FSs have grown steadily over those 50 years. More recently, there has been a growth in Type-2 Fuzzy Set (T2 FS) related papers, in which T2 FSs are used to handle uncertainty in real-world problems. In this paper, we use bibliometric methods to obtain a broad overview of the area of T2 FSs. The method analyzes the bibliographic details of published journal papers (title, authors, author address, journal, and citations) extracted from the Science and Social Science Citation Indices in the Web of Science (WoS) database for the last 20 years (1997-2017). We compare the growth of publications in the field of FSs and its subset T2 FSs, and identify highly cited T2 FS papers, highly cited authors, key institutions, and the main countries with researchers involved in T2 FS related research.
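    As a sketch of the kind of analysis involved, the snippet below groups a toy stand-in for a WoS export by publication year and ranks records by citation count. The field tags (TI, PY, TC) follow common WoS export conventions, but the records and the pipeline are illustrative assumptions, not the paper's actual method.

```python
import pandas as pd

# Toy stand-in for a WoS export; TI/PY/TC follow WoS field-tag conventions
# (title / publication year / times cited), but the records are invented.
records = pd.DataFrame({
    "TI": ["Paper A", "Paper B", "Paper C", "Paper D"],
    "PY": [1998, 2007, 2007, 2015],
    "TC": [850, 120, 40, 310],
})

# Growth of publications: papers per year.
print(records.groupby("PY").size())

# Highly cited papers: rank by times-cited count.
print(records.sort_values("TC", ascending=False).to_string(index=False))
```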

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, on a concrete descriptive level, hierarchical prediction offers a way to test and constrain the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
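    For readers unfamiliar with the mechanics being referred to, the following generic Rao-and-Ballard-style sketch (an assumption for illustration, not a model from the target article) shows the core loop of predictive coding: feedback weights generate a top-down prediction, the residual prediction error is carried forward, and the internal state is updated to reduce that error.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_latent, lr = 8, 3, 0.1
W = rng.normal(scale=0.3, size=(n_input, n_latent))  # generative (feedback) weights
x = rng.normal(size=n_input)                         # sensory input
r = np.zeros(n_latent)                               # internal model state

for _ in range(100):
    prediction = W @ r        # feedback: the expected input
    error = x - prediction    # prediction error, carried forward
    r += lr * W.T @ error     # adjust the internal state to explain the input

print("residual error norm:", np.linalg.norm(x - W @ r))
```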

    An Explainable Artificial Intelligence Approach Based on Deep Type-2 Fuzzy Logic System

    Artificial intelligence (AI) systems have benefitted from the easy availability of computing power and the rapid increase in the quantity and quality of data, which has led to the widespread adoption of AI techniques across a wide variety of fields. However, the use of complex (or black-box) AI systems such as Deep Neural Networks, support vector machines, etc., can lead to a lack of transparency. This lack of transparency is not specific to deep learning or complex AI algorithms; other, nominally interpretable AI algorithms such as kernel machines, logistic regressions, decision trees, or rule-based algorithms can also become difficult to interpret for high-dimensional inputs. The lack of transparency or explainability reduces the effectiveness of AI models in regulated applications (medical, financial, etc.), where it is essential to explain how the model operates and how it arrived at a given prediction.
    The need for explainability in AI has led to a new line of research focused on developing Explainable AI techniques. Three main avenues are being explored to achieve explainability. The first is Deep Explanations, which modifies existing deep learning models to add explainability; the proposed methods generally report all the input features that affect the output, usually in a visual format because there may be a large number of features. This type of explanation is useful for tasks such as image recognition, but in other tasks it can be hard to distinguish the most important features. The second is model induction, which comprises model-agnostic methods, but these might not be suitable for use in regulated applications. The third is to use existing interpretable models such as decision trees, fuzzy logic, etc.; the problem is that they too can become opaque for high-dimensional data.
    Hence, this thesis presents a novel AI system that combines the predictive power of deep learning with the interpretability of Interval Type-2 Fuzzy Logic Systems (IT2FLSs). The advantages of such a system are, first, the ability to be trained with labelled and unlabelled data (i.e., mixing supervised and unsupervised learning) and, second, embedded feature-selection abilities (i.e., it can be trained on hundreds or thousands of inputs with no need for separate feature selection), while delivering explainable models with small rule bases composed of short rules to maximize the model’s interpretability.
    The proposed model was developed with data from British Telecom (BT). It achieved performance comparable to deep models such as the Stacked Autoencoder (SAE) and Convolutional Neural Networks (CNNs). On categorical datasets, the model outperformed the SAE by 2%, performed within 2-3% of the CNN, and outperformed the Multi-Layer Perceptron (MLP) and the IT2FLS by 4%. On the regression datasets, the model performed slightly worse than the SAE, MLP, and CNN models, but it outperformed the IT2FLS with a 15% lower error. The proposed model achieved excellent interpretability in a survey, where it was rated within 2% of the highly interpretable IT2FLS, and 20% and 17% better than the deep learning XAI tools LIME and SHAP, respectively. The proposed model shows a small loss in performance for significantly higher interpretability, making it a suitable replacement for the other AI models in applications with many features where interpretability is paramount.
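    The interpretable half of such a hybrid can be illustrated with a minimal interval type-2 inference step. The sketch below (illustrative values only, not the BT model from the thesis) uses Gaussian antecedents with uncertain spread, so each rule fires over an interval, and applies Nie-Tan-style type reduction (averaging the lower and upper firing strengths) before a weighted-mean defuzzification.

```python
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Two-rule IT2 rule base: each antecedent has an upper and a lower Gaussian
# membership function (uncertain spread); consequents are crisp. All invented.
rules = [
    {"c": 0.0, "s_lo": 0.3, "s_hi": 0.5, "y": 1.0},
    {"c": 1.0, "s_lo": 0.3, "s_hi": 0.5, "y": 2.0},
]

def it2_infer(x):
    """One IT2 inference step with Nie-Tan-style type reduction."""
    f_lo = np.array([gauss(x, r["c"], r["s_lo"]) for r in rules])  # lower firing
    f_hi = np.array([gauss(x, r["c"], r["s_hi"]) for r in rules])  # upper firing
    f = (f_lo + f_hi) / 2.0                  # collapse the firing interval
    ys = np.array([r["y"] for r in rules])
    return float(np.dot(f, ys) / np.sum(f))  # weighted mean of consequents

print(it2_infer(0.3))  # every output traces back to two short, readable rules
```

    A rule base this small is the point of the design: each prediction can be explained by showing which of a handful of short rules fired and how strongly, which is what the interpretability survey in the thesis evaluates.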