155 research outputs found

    A comparative study of general fuzzy min-max neural networks for pattern classification problems

    © 2019 Elsevier B.V. The general fuzzy min-max (GFMM) neural network is a generalization of fuzzy neural networks formed by hyperbox fuzzy sets for classification and clustering problems. Two principal algorithms are deployed to train this type of neural network: incremental learning and agglomerative learning. This paper presents a comprehensive empirical study of the performance-influencing factors, advantages, and drawbacks of the general fuzzy min-max neural network on pattern classification problems. The subjects of this study include (1) the impact of the maximum hyperbox size, (2) the influence of the similarity threshold and similarity measures on the agglomerative learning algorithm, (3) the effect of the data presentation order, and (4) a comparative performance evaluation of the GFMM against other types of fuzzy min-max neural networks and prevalent machine learning algorithms. The experimental results on benchmark datasets widely used in machine learning show the overall strong and weak points of the GFMM classifier. These outcomes also inform potential research directions for this class of machine learning algorithms in the future.
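    The hyperbox fuzzy sets at the core of the GFMM can be made concrete with a small sketch. The snippet below illustrates the standard Gabrys-Bargiela membership degree for a point relative to a hyperbox with min corner `v` and max corner `w`, assuming min-max normalized inputs; the names `ramp`, `membership`, and the sensitivity parameter `gamma` are illustrative, not code from the papers.

```python
import numpy as np

def ramp(z, gamma):
    # Ramp threshold function: 0 for non-positive violations, then linear
    # with slope gamma, saturating at 1.
    return np.clip(z * gamma, 0.0, 1.0)

def membership(x, v, w, gamma=1.0):
    """Membership of point x in the hyperbox [v, w] (illustrative sketch).

    The degree is the minimum, over all dimensions, of how little x
    violates the hyperbox on either side; points inside get 1.0.
    """
    lower = 1.0 - ramp(v - x, gamma)   # violation below the min corner
    upper = 1.0 - ramp(x - w, gamma)   # violation above the max corner
    return float(np.min(np.minimum(lower, upper)))

# A point inside the box has full membership; one outside has less.
print(membership(np.array([0.4, 0.5]), np.array([0.2, 0.3]), np.array([0.6, 0.7])))  # 1.0
```

    The sensitivity parameter `gamma` controls how quickly membership decays outside the box, which is one reason the data normalization and hyperbox size settings studied above interact.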

    Application Of The Fuzzy Min-Max Neural Networks To Medical Diagnosis.

    Abstract. In this paper, the Fuzzy Min-Max (FMM) neural network, along with two modified FMM models, is used for tackling medical diagnostic problems. The original FMM network establishes hyperboxes with fuzzy sets in its structure for classifying input patterns into different output categories.
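    A sketch of how such hyperboxes are established during training: in Simpson's original FMM, the winning hyperbox is stretched to cover a new pattern only if the stretched box stays within a maximum size θ. The function below is a hedged illustration of that expansion criterion, assuming the usual min/max corner representation; it is not code from the paper.

```python
import numpy as np

def can_expand(v, w, x, theta):
    """Simpson's FMM expansion test (illustrative sketch).

    A hyperbox with min corner v and max corner w may absorb pattern x
    only if the summed edge lengths of the stretched box do not exceed
    n * theta, where n is the number of input dimensions.
    """
    n = x.shape[0]
    stretched = np.maximum(w, x) - np.minimum(v, x)
    return bool(np.sum(stretched) <= n * theta)

# A small box can absorb a nearby point under a generous theta
print(can_expand(np.array([0.2, 0.2]), np.array([0.3, 0.3]),
                 np.array([0.35, 0.25]), 0.4))  # True
```

    When the test fails, a new hyperbox is created for the pattern instead, which is how the network grows to fit multi-modal classes.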

    Individual And Ensemble Pattern Classification Models Using Enhanced Fuzzy Min-Max Neural Networks

    Pattern classification is one of the major components in the design and development of a computerized pattern recognition system. Focusing on computational intelligence models, this thesis describes in-depth investigations of two possible directions for designing robust, flexible, high-performance pattern classification models: first, by enhancing the learning algorithm of a neural-fuzzy network; and second, by devising an ensemble model that combines the predictions from multiple neural-fuzzy networks using an agent-based framework. Owing to a number of salient features, which include the ability to learn incrementally and to establish nonlinear decision boundaries with hyperboxes, the Fuzzy Min-Max (FMM) network is selected as the backbone for designing useful and usable pattern classification models in this research. Two enhanced FMM variants, i.e. EFMM and EFMM2, are proposed to address a number of limitations in the original FMM learning algorithm. In EFMM, three heuristic rules are introduced to improve the hyperbox expansion, overlap test, and contraction processes. Network complexity and noise tolerance are addressed in EFMM2. In addition, an agent-based framework is employed as a robust ensemble model to house multiple EFMM-based networks. A trust measurement method known as Certified Belief in Strength (CBS) is developed and incorporated into the ensemble model to exploit the predictive performance of the different EFMM-based networks.
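    The overlap test that EFMM's heuristic rules refine can be sketched in its simplest form: two hyperboxes overlap only if their projections intersect on every dimension, and FMM-family contraction only fires when this holds for boxes of different classes. The check below is illustrative only, assuming the standard min/max corner representation; it does not reproduce the thesis's enhanced rules.

```python
import numpy as np

def boxes_overlap(v1, w1, v2, w2):
    """True if hyperboxes [v1, w1] and [v2, w2] overlap (illustrative).

    Overlap requires a strictly positive intersection on every dimension;
    a single non-intersecting dimension is enough to separate the boxes.
    """
    return bool(np.all(np.minimum(w1, w2) > np.maximum(v1, v2)))

print(boxes_overlap(np.array([0.1, 0.1]), np.array([0.5, 0.5]),
                    np.array([0.4, 0.4]), np.array([0.8, 0.8])))  # True
```

    The enhanced rules in EFMM go beyond this boolean check, e.g. deciding which dimension to contract, but the intersection test above is the common starting point.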

    Categorical Missing Data Imputation Using Fuzzy Neural Networks with Numerical and Categorical Inputs

    There are many situations in which input feature vectors are incomplete, and methods to tackle the problem have been studied for a long time. A commonly used procedure is to replace each missing value with an imputation. This paper presents a method to perform categorical missing data imputation from numerical and categorical variables. The imputations are based on Simpson's fuzzy min-max neural networks, in which the input variables for learning and classification are numerical only. The proposed method extends the input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture. The procedure is tested and compared with other methods using opinion poll data.

    An improved online learning algorithm for general fuzzy min-max neural network

    This paper proposes an improved version of the current online learning algorithm for the general fuzzy min-max neural network (GFMM) to tackle existing issues concerning the expansion and contraction steps, as well as the handling of unseen data located on decision boundaries. These drawbacks lower the network's classification performance, so an improved algorithm is proposed in this study to address them. The proposed approach does not use the contraction process for overlapping hyperboxes, which has been shown in the literature to be likely to increase the error rate. The empirical results indicate an improvement in the classification accuracy and stability of the proposed method compared to the original version and other fuzzy min-max classifiers. To reduce the sensitivity of this new online learning algorithm to the presentation order of the training samples, a simple ensemble method is also proposed. Comment: 9 pages, 8 tables, 6 figures
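    The order-sensitivity point lends itself to a simple sketch: train several copies of an online learner on differently shuffled presentations of the same data, then combine their predictions by majority vote. The snippet below illustrates only the generic voting step, not the paper's specific ensemble; `predictions` stands in for the outputs of GFMM models trained on shuffled orders.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model label predictions by majority vote (illustrative).

    predictions: list of label sequences, one per ensemble member, where
    member i's j-th entry is its predicted label for sample j.
    """
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*predictions)]

# Three models trained on different presentation orders disagree on
# samples 0 and 1; the vote smooths out the disagreement.
print(majority_vote([[0, 1, 2], [0, 0, 2], [1, 0, 2]]))  # → [0, 0, 2]
```

    Because each member sees a different presentation order, the vote averages out order-dependent hyperbox placements at the cost of training several models.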

    Fuzzy min-max neural networks for categorical data: application to missing data imputation

    The fuzzy min–max neural network classifier is a supervised learning method that takes a hybrid neural networks and fuzzy systems approach. All input variables in the network are required to correspond to continuously valued variables, which can be a significant constraint in many real-world situations where the data are not only quantitative but also categorical. The usual way of dealing with this type of variable is to replace the categories with numerical values and treat them as if they were continuously valued. This method, however, implicitly defines a possibly unsuitable metric for the categories. A number of different procedures have been proposed to tackle the problem. In this article, we present a new method. The procedure extends the fuzzy min–max neural network input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture. This provides greater flexibility and wider application. The proposed method is then applied to missing data imputation in voting intention polls. The microdata (the set of the respondents' individual answers to the questions) of this type of poll are especially well suited for evaluating the method, since they include a large number of numerical and categorical attributes.
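    The article's new fuzzy sets and operations for categorical inputs are not reproduced here. As a contrast, the sketch below shows the naive replacement the abstract warns about next to a metric-free one-hot encoding; the `parties` example and all names are hypothetical, chosen to echo the polling application.

```python
import numpy as np

# Naive approach criticized above: mapping categories to arbitrary numbers
# implicitly imposes an ordering and distances between them (here,
# "red" is artificially closer to "green" than to "blue").
parties = ["red", "green", "blue"]
naive = {p: i for i, p in enumerate(parties)}  # red=0, green=1, blue=2

def one_hot(value, categories):
    # One-hot encoding avoids the unsuitable metric: every pair of distinct
    # categories ends up equally far apart, at the cost of extra dimensions.
    vec = np.zeros(len(categories))
    vec[categories.index(value)] = 1.0
    return vec

print(one_hot("green", parties))  # [0. 1. 0.]
```

    One-hot encoding is only a workaround, and the dimensionality grows with the number of categories, which is part of the motivation for handling categorical fuzzy sets natively as the article does.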

    Tuning of a fuzzy classifier derived from data

    Abstract. In our previous work, we developed a method for extracting fuzzy rules directly from numerical data for pattern classification. The performance of the fuzzy classifier developed using this methodology was comparable to the average performance of neural networks. In this paper, we further develop two methods, a least squares method and an iterative method, for tuning the sensitivity parameters of the fuzzy membership functions, by which the generalization ability of the classifier is improved. We evaluate our methods using the Fisher iris data and data for numeral recognition of vehicle license plates. The results show that when the tuned sensitivity parameters are applied, the recognition rates improve to the point that performance is comparable to or better than the maximum performance obtained by neural networks, but with shorter computation time.
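    Sensitivity parameters of this kind are often tuned in practice by a plain validation search, as sketched below. This is a generic grid search over a toy hyperbox membership function, assuming a held-out validation set; it is not the least squares or iterative method of the paper, and all names are illustrative.

```python
import numpy as np

def membership(x, v, w, gamma):
    # Toy hyperbox membership whose decay slope outside [v, w] is set by gamma
    lower = 1.0 - np.clip((v - x) * gamma, 0.0, 1.0)
    upper = 1.0 - np.clip((x - w) * gamma, 0.0, 1.0)
    return np.min(np.minimum(lower, upper), axis=-1)

def tune_gamma(boxes, X_val, y_val, grid):
    """Pick the gamma with the highest validation accuracy (illustrative).

    boxes: list of (v, w, label) hyperboxes; X_val, y_val: validation data.
    """
    def accuracy(gamma):
        preds = []
        for x in X_val:
            scores = [membership(x, v, w, gamma) for v, w, _ in boxes]
            preds.append(boxes[int(np.argmax(scores))][2])
        return float(np.mean(np.array(preds) == y_val))
    return max(grid, key=accuracy)

# Toy usage: two one-dimensional hyperboxes, one per class
boxes = [(np.array([0.0]), np.array([0.4]), 0), (np.array([0.6]), np.array([1.0]), 1)]
X_val = [np.array([0.2]), np.array([0.8])]
y_val = np.array([0, 1])
print(tune_gamma(boxes, X_val, y_val, [0.5, 1.0, 2.0]))
```

    The least squares and iterative methods in the paper replace this brute-force search with direct fitting of the sensitivity parameters, which is what yields the reported shorter computation time.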