
    Type-Reduction of the Discretised Interval Type-2 Fuzzy Set: Approaching the Continuous Case through Progressively Finer Discretisation.

    The defuzzification of a type-2 fuzzy set is a two-stage process consisting of, firstly, type-reduction and, secondly, defuzzification of the resultant type-1 set. This paper considers three approaches to discrete interval type-reduction: (1) the exhaustive method, which produces the Type-Reduced Set; (2) the Greenfield-Chiclana Collapsing Defuzzifier, which gives rise to the Representative Embedded Set Approximation; and (3) the Nie-Tan Method, from which the Nie-Tan Set is derived. In the discrete case these three type-1 sets are distinct. The behaviour of the three sets under progressively finer discretisation is investigated experimentally, in order to shed light on the relationships between the continuous versions of these type-1 sets.
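    The exhaustive method enumerates every embedded set of the discretised interval type-2 fuzzy set and records each one's centroid; the collected centroids make up the domain of the Type-Reduced Set. A minimal sketch, assuming a simple list-based discretisation (the toy domain points and membership levels below are illustrative, not taken from the paper):

```python
from itertools import product

def exhaustive_type_reduced_set(xs, grades, places=10):
    """Enumerate all embedded sets of a discretised interval type-2 fuzzy set.

    xs     -- discretised domain points
    grades -- grades[i] is the list of allowed membership values at xs[i]
              (the discretised interval between the lower and upper bounds)
    Returns the sorted distinct embedded-set centroids.
    """
    centroids = set()
    for embedded in product(*grades):        # one membership value per x
        total = sum(embedded)
        if total > 0:
            c = sum(x * u for x, u in zip(xs, embedded)) / total
            centroids.add(round(c, places))  # collapse floating-point duplicates
    return sorted(centroids)

# Toy example: two domain points, two membership levels each -> 4 embedded sets.
trs = exhaustive_type_reduced_set([1.0, 2.0], [[0.5, 1.0], [0.5, 1.0]])
```

    The cost is the product of the number of membership levels per vertical slice, which is why finer discretisation makes the exhaustive method so expensive.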

    Slicing Strategies for the Generalised Type-2 Mamdani Fuzzy Inferencing System

    The final publication is available at Springer. As a three-dimensional object, a generalised type-2 fuzzy set can be sliced in a number of ways. In the context of the Mamdani Fuzzy Inferencing System, this paper concerns three accepted slicing strategies: the vertical slice, the wavy slice, and the horizontal slice or α-plane. Two ways of defining the generalised type-2 fuzzy set, vertical slices and wavy slices, are presented. Fuzzification and inferencing are presented in terms of vertical slices. After that, the application of all three slicing strategies to defuzzification is described, and their strengths and weaknesses are assessed.
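    To make the slicing vocabulary concrete: a discretised generalised type-2 set can be stored as vertical slices (each domain point mapped to its secondary membership function), and an α-plane extracted by thresholding the secondary grades. A sketch under that assumed representation (the dictionary layout and toy set are illustrative, not the paper's code):

```python
def alpha_plane(gt2, alpha):
    """Extract the alpha-plane of a discretised generalised type-2 fuzzy set.

    gt2   -- vertical-slice representation: maps each domain point x to a
             list of (u, grade) pairs, i.e. its secondary membership function
    alpha -- threshold on the secondary grade
    Returns an interval-valued structure x -> (lowest u, highest u) over the
    primary memberships whose secondary grade is at least alpha.
    """
    plane = {}
    for x, vertical_slice in gt2.items():
        us = [u for u, g in vertical_slice if g >= alpha]
        if us:
            plane[x] = (min(us), max(us))
    return plane

# Toy generalised type-2 set over two domain points.
gt2 = {0.0: [(0.2, 1.0), (0.4, 0.5)],
       1.0: [(0.5, 0.8), (0.7, 0.3)]}
plane = alpha_plane(gt2, 0.5)
```

    Each α-plane is itself an interval type-2 object, which is what makes the horizontal-slice strategy attractive for reusing interval-only defuzzifiers.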

    Type-2 Fuzzy Logic: Circumventing the Defuzzification Bottleneck

    Type-2 fuzzy inferencing for generalised, discretised type-2 fuzzy sets has been impeded by the computational complexity of the defuzzification stage of the fuzzy inferencing system. Indeed this stage is so complex computationally that it has come to be known as the defuzzification bottleneck. The computational complexity derives from the enormous number of embedded sets that have to be individually processed in order to effect defuzzification. Two new approaches to type-2 defuzzification are presented, the sampling method and the Greenfield-Chiclana Collapsing Defuzzifier. The sampling method and its variant, elite sampling, are techniques for the defuzzification of generalised type-2 fuzzy sets. In these methods a relatively small sample of the totality of embedded sets is randomly selected and processed. The small sample size drastically reduces the computational complexity of the defuzzification process, so that it may be speedily accomplished. The Greenfield-Chiclana Collapsing Defuzzifier relies upon the concept of the representative embedded set, which is an embedded set having the same defuzzified value as the type-2 fuzzy set that is to be defuzzified. By a process termed collapsing the type-2 fuzzy set is converted into a type-1 fuzzy set which, as an approximation to the representative embedded set, is known as the representative embedded set approximation. This type-1 fuzzy set is easily defuzzified to give the defuzzified value of the original type-2 fuzzy set. By this method the computational complexity of type-2 defuzzification is reduced enormously, since the representative embedded set approximation replaces the entire collection of embedded sets. The strategy was conceived as a generalised method, but so far only the interval version has been derived mathematically. The grid method of discretisation for type-2 fuzzy sets is also introduced in this thesis. Work on the defuzzification of type-2 fuzzy sets began around the turn of the millennium. 
Since that time a number of investigators have contributed methods in this area. These different approaches are surveyed, and the major methods implemented in code prior to their experimental evaluation. In these comparative experiments the grid method of discretisation is employed. The experimental results show beyond doubt that the collapsing method performs best of the interval alternatives. However, though the sampling method performs well experimentally, the results do not demonstrate it to be the best-performing generalised technique.
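    The sampling idea can be illustrated in the interval case, where every embedded set carries equal weight: draw a random membership value between the lower and upper bound at each domain point, take that embedded set's centroid, and average over the sample. A hedged sketch (function names and the symmetric toy set are illustrative; the generalised method additionally weights each sampled embedded set by its secondary grades):

```python
import random

def sampling_defuzzify(xs, lower, upper, n_samples=2000, seed=0):
    """Approximate interval type-2 defuzzification by random sampling.

    Rather than processing every embedded set, draw n_samples random
    embedded sets (one membership value in [lower[i], upper[i]] per domain
    point), compute each one's centroid, and return the mean centroid.
    """
    rng = random.Random(seed)
    centroids = []
    for _ in range(n_samples):
        u = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        centroids.append(sum(x * ui for x, ui in zip(xs, u)) / sum(u))
    return sum(centroids) / len(centroids)

# Symmetric toy footprint of uncertainty centred on x = 3.
xs    = [1.0, 2.0, 3.0, 4.0, 5.0]
lower = [0.1, 0.3, 0.5, 0.3, 0.1]
upper = [0.2, 0.6, 1.0, 0.6, 0.2]
y = sampling_defuzzify(xs, lower, upper)   # close to 3.0 by symmetry
```

    The sample size, not the number of embedded sets, now governs the running time, which is the source of the drastic complexity reduction described above.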

    Defuzzification of the Discretised Generalised Type-2 Fuzzy Set: Experimental Evaluation

    CCI - Centre for Computational Intelligence. NOTICE: this is the author's version of a work that was accepted for publication in Information Sciences; changes resulting from the publishing process may not be reflected in this document, and a definitive version can be found by following the DOI. The work reported in this paper addresses the challenge of the efficient and accurate defuzzification of discretised generalised type-2 fuzzy sets as created by the inference stage of a Mamdani Fuzzy Inferencing System. The exhaustive method of defuzzification for type-2 fuzzy sets is extremely slow, owing to its enormous computational complexity. Several approximate methods have been devised in response to this defuzzification bottleneck. In this paper we begin by surveying the main alternative strategies for defuzzifying a generalised type-2 fuzzy set: (1) Vertical Slice Centroid Type-Reduction; (2) the sampling method; (3) the elite sampling method; and (4) the α-planes method. We then evaluate the different methods experimentally for accuracy and efficiency. For accuracy the exhaustive method is used as the standard. The test results are analysed statistically by means of the Wilcoxon Nonparametric Test, and the elite sampling method is shown to be the most accurate. With regard to efficiency, Vertical Slice Centroid Type-Reduction is demonstrated to be the fastest technique.
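    The statistical analysis rests on the Wilcoxon signed-rank statistic for paired accuracy results. A minimal sketch of that statistic, assuming no zero differences and no tied absolute differences (production code, e.g. `scipy.stats.wilcoxon`, averages tied ranks and also returns a p-value):

```python
def wilcoxon_signed_rank(a, b):
    """Wilcoxon signed-rank statistic T for paired samples a and b.

    Simplifying assumptions for this sketch: no zero differences and no
    ties among the absolute differences.
    """
    diffs = [x - y for x, y in zip(a, b)]
    # Rank the differences by absolute value, smallest first (ranks from 1).
    ordered = sorted(diffs, key=abs)
    w_plus = sum(rank for rank, d in enumerate(ordered, start=1) if d > 0)
    w_minus = sum(rank for rank, d in enumerate(ordered, start=1) if d < 0)
    return min(w_plus, w_minus)   # small T -> evidence of a real difference

# Illustrative paired errors of two defuzzifiers on five test sets.
t = wilcoxon_signed_rank([1.0, 2.0, 6.0, 1.0, 8.0],
                         [0.0, 4.0, 3.0, 5.0, 3.0])
```

    Being rank-based, the test makes no normality assumption about the error distributions, which is why it suits small samples of per-test-set accuracy figures.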

    On Nie-Tan operator and type-reduction of interval type-2 fuzzy sets

    Type-reduction of type-2 fuzzy sets is considered to be a defuzzification bottleneck because of the computational complexity involved in the type-reduction process. In this research, we prove that the closed-form Nie-Tan operator, which outputs the average of the upper and lower bounds of the footprint of uncertainty, is in fact an accurate method for defuzzifying interval type-2 fuzzy sets.
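    The closed-form operator described here reduces to a few lines: average the bounds of the footprint of uncertainty at each domain point, then take the ordinary type-1 centroid. A sketch (the symmetric toy set is illustrative, not from the paper):

```python
def nie_tan_defuzzify(xs, lower, upper):
    """Closed-form Nie-Tan defuzzification of an interval type-2 fuzzy set.

    Averages the lower and upper membership bounds of the footprint of
    uncertainty at each domain point, then takes the type-1 centroid of the
    result -- no iteration over embedded sets is required.
    """
    mid = [(lo + hi) / 2.0 for lo, hi in zip(lower, upper)]
    return sum(x * m for x, m in zip(xs, mid)) / sum(mid)

# Symmetric toy footprint of uncertainty centred on x = 3.
y = nie_tan_defuzzify([1.0, 2.0, 3.0, 4.0, 5.0],
                      [0.1, 0.3, 0.5, 0.3, 0.1],
                      [0.2, 0.6, 1.0, 0.6, 0.2])   # ~ 3.0 for this set
```

    Because no iteration is involved, the cost is linear in the number of discretised domain points, in contrast to the iterative and exhaustive alternatives.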

    Probing the fuzzy sphere regularisation in simulations of the 3d λφ⁴ model

    We regularise the 3d λφ⁴ model by discretising the Euclidean time and representing the spatial part on a fuzzy sphere. The latter involves a truncated expansion of the field in spherical harmonics. This yields a numerically tractable formulation, which constitutes an unconventional alternative to the lattice. In contrast to the 2d version, the radius R plays an independent rôle. We explore the phase diagram in terms of R and the cutoff, as well as the parameters m² and λ. Thus we identify the phases of disorder, uniform order and non-uniform order. We compare the result to the phase diagrams of the 3d model on a non-commutative torus, and of the 2d model on a fuzzy sphere. Our data at strong coupling accurately reproduce the behaviour of a matrix chain, which corresponds to the c=1 model in string theory. This observation enables a conjecture about the thermodynamic limit.

    A Comprehensive Study of the Efficiency of Type-Reduction Algorithms

    Improving the efficiency of type-reduction algorithms continues to attract research interest. Recently, some new type-reduction approaches have claimed to be more efficient than well-known algorithms such as the enhanced Karnik-Mendel (EKM) algorithm and the enhanced iterative algorithm with stopping condition (EIASC). In a previous paper, we found that the computational efficiency of an algorithm is closely related to the platform and to how the algorithm is implemented. In computer science, this dependence on languages is usually avoided by focusing on the complexity of algorithms (using big O notation). In this paper, the main contribution is the proposal of two novel type-reduction algorithms. Also, for the first time, a comprehensive study of both existing and new type-reduction approaches is made, based on both algorithm complexity and practical computation time under a variety of programming languages. Based on the results, suggestions are given for the preferred algorithms in different scenarios, depending on implementation platform and application context.
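    As a point of reference for such comparisons, the core Karnik-Mendel iteration (of which EKM is an enhanced version) can be sketched as follows. This is a plain, unoptimised rendering, not the enhanced algorithms proposed in the paper, and the toy set is illustrative:

```python
def km_endpoint(xs, lower, upper, left=True, tol=1e-12):
    """Karnik-Mendel iteration for one endpoint of the type-reduced interval.

    Repeatedly re-assigns each domain point either its lower or upper
    membership bound, depending on which side of the current centroid it
    lies, until the centroid stops moving. EKM refines this scheme with a
    better initialisation and incremental sum updates; xs must be sorted.
    """
    w = [(lo + hi) / 2.0 for lo, hi in zip(lower, upper)]
    y = sum(x * wi for x, wi in zip(xs, w)) / sum(w)
    while True:
        if left:   # left endpoint: upper bounds below the centroid
            w = [hi if x <= y else lo for x, lo, hi in zip(xs, lower, upper)]
        else:      # right endpoint: lower bounds below the centroid
            w = [lo if x <= y else hi for x, lo, hi in zip(xs, lower, upper)]
        y_new = sum(x * wi for x, wi in zip(xs, w)) / sum(w)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new

# Symmetric toy footprint of uncertainty centred on x = 3.
xs    = [1.0, 2.0, 3.0, 4.0, 5.0]
lower = [0.1, 0.3, 0.5, 0.3, 0.1]
upper = [0.2, 0.6, 1.0, 0.6, 0.2]
y_l = km_endpoint(xs, lower, upper, left=True)
y_r = km_endpoint(xs, lower, upper, left=False)
```

    The per-iteration cost is linear in the number of domain points, so the practical differences between EKM, EIASC and the newer proposals come down to iteration counts and constant factors, which is exactly why implementation platform matters in the timings.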