General fuzzy min-max neural network for clustering and classification
This paper describes a general fuzzy min-max (GFMM) neural network, a generalization and extension of the fuzzy min-max clustering and classification algorithms of Simpson (1992, 1993). The GFMM method combines supervised and unsupervised learning in a single training algorithm. The fusion of clustering and classification yields an algorithm that can be used for pure clustering, pure classification, or hybrid clustering-classification. It can find decision boundaries between classes while clustering patterns that cannot be said to belong to any of the existing classes. As in the original algorithms, hyperbox fuzzy sets are used to represent clusters and classes. Learning is usually completed in a few passes and consists of placing and adjusting hyperboxes in the pattern space; this is an expansion-contraction process. The classification results can be crisp or fuzzy. New data can be included without retraining. While retaining all the interesting features of the original algorithms, a number of modifications to their definition have been made in order to accommodate fuzzy input patterns in the form of lower and upper bounds, combine supervised and unsupervised learning, and improve the effectiveness of operations. A detailed account of the GFMM neural network, its comparison with Simpson's fuzzy min-max neural networks, a set of examples, and an application to leakage detection and identification in water distribution systems are given.
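The hyperbox representation and the expansion step mentioned above can be sketched in a few lines. This is a simplified illustration, not the paper's exact formulation: the function names (`membership`, `expand`), the sensitivity parameter `gamma`, and the per-dimension size bound `theta` are assumptions for demonstration, and the contraction step that resolves overlaps between boxes of different classes is omitted.

```python
import numpy as np

def membership(x, V, W, gamma=1.0):
    """Fuzzy membership of point x in the hyperbox [V, W]:
    1.0 inside the box, decreasing linearly with distance outside it."""
    # how far x violates the lower (V) and upper (W) bounds, per dimension
    below = np.maximum(0.0, np.minimum(1.0, gamma * (V - x)))
    above = np.maximum(0.0, np.minimum(1.0, gamma * (x - W)))
    return 1.0 - np.max(np.maximum(below, above))

def expand(x, V, W, theta):
    """Try to expand hyperbox [V, W] to cover x; return the new bounds,
    or None if any side of the expanded box would exceed theta."""
    V_new, W_new = np.minimum(V, x), np.maximum(W, x)
    if np.all(W_new - V_new <= theta):
        return V_new, W_new
    return None
```

A point inside the box gets membership 1.0; a point outside gets a value below 1.0, so the best-matching hyperbox can be selected and, if the size constraint allows, expanded to cover the new pattern.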
A comparative study of general fuzzy min-max neural networks for pattern classification problems
© 2019 Elsevier B.V. The general fuzzy min-max (GFMM) neural network is a generalization of fuzzy neural networks formed by hyperbox fuzzy sets for classification and clustering problems. Two principal algorithms are used to train this type of network: incremental learning and agglomerative learning. This paper presents a comprehensive empirical study of the performance-influencing factors, advantages, and drawbacks of the general fuzzy min-max neural network on pattern classification problems. The subjects of this study include (1) the impact of the maximum hyperbox size, (2) the influence of the similarity threshold and similarity measures on the agglomerative learning algorithm, (3) the effect of data presentation order, and (4) a comparative performance evaluation of the GFMM against other types of fuzzy min-max neural networks and prevalent machine learning algorithms. Experimental results on benchmark datasets widely used in machine learning reveal the overall strengths and weaknesses of the GFMM classifier. These outcomes also suggest potential future research directions for this class of machine learning algorithms.
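The influence of the maximum hyperbox size and of the data presentation order, two of the factors studied above, can be seen in a toy incremental placement loop. This is a deliberately simplified, unsupervised sketch (class labels, membership-based box selection, and contraction are all omitted; `theta` is the per-dimension size bound):

```python
import numpy as np

def incremental_fit(X, theta):
    """Toy incremental hyperbox placement: grow the first existing box
    that can still cover the sample within size theta, else open a new box."""
    boxes = []  # list of (V, W) lower/upper bound pairs
    for x in X:
        for i, (V, W) in enumerate(boxes):
            V_new, W_new = np.minimum(V, x), np.maximum(W, x)
            if np.all(W_new - V_new <= theta):
                boxes[i] = (V_new, W_new)  # expand in place
                break
        else:
            boxes.append((x.copy(), x.copy()))  # point-sized new box
    return boxes

# Smaller theta forces more, finer-grained hyperboxes on the same data;
# shuffling X changes which boxes are opened first (order sensitivity).
rng = np.random.default_rng(0)
X = rng.random((200, 2))
for theta in (0.1, 0.3, 0.6):
    print(theta, len(incremental_fit(X, theta)))
```

Running the loop shows the model-complexity trade-off the study quantifies: a small maximum hyperbox size yields many small boxes, while a large one collapses the data into a few coarse boxes.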
Agglomerative Learning for General Fuzzy Min-Max Neural Network
In this paper, an agglomerative learning algorithm based on similarity measures defined for hyperbox fuzzy sets is proposed. It is presented in the context of clustering and classification problems tackled using a general fuzzy min-max (GFMM) neural network. The agglomerative scheme's robust behaviour in the presence of noise and outliers, and its insensitivity to the order of training pattern presentation, are used as complementary features to an incremental learning scheme, making the combination more suitable for online adaptation and for dealing with large training data sets.
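The agglomerative idea can be sketched as repeated greedy merging of the most similar pair of hyperboxes. The similarity measure below (one minus the largest per-dimension gap between two boxes) is an illustrative assumption standing in for the measures defined in the paper, as are the names `box_similarity` and `agglomerate` and the size bound `theta`:

```python
import numpy as np

def box_similarity(b1, b2):
    """Toy similarity between hyperboxes: 1 minus the largest
    per-dimension gap between them (touching/overlapping boxes -> 1.0)."""
    (V1, W1), (V2, W2) = b1, b2
    gap = np.maximum(0.0, np.maximum(V2 - W1, V1 - W2))
    return 1.0 - np.max(gap)

def agglomerate(boxes, sim_threshold, theta):
    """Greedily merge the most similar pair of hyperboxes while the best
    similarity reaches sim_threshold and the merged box stays within theta."""
    boxes = [(V.copy(), W.copy()) for V, W in boxes]
    merged = True
    while merged and len(boxes) > 1:
        merged = False
        best, pair = sim_threshold, None
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                s = box_similarity(boxes[i], boxes[j])
                V = np.minimum(boxes[i][0], boxes[j][0])
                W = np.maximum(boxes[i][1], boxes[j][1])
                if s >= best and np.all(W - V <= theta):
                    best, pair = s, (i, j, V, W)
        if pair:
            i, j, V, W = pair
            boxes[j] = (V, W)
            del boxes[i]
            merged = True
    return boxes
```

Because each merge considers all current boxes rather than one sample at a time, the result does not depend on the order in which training patterns arrived, which is the complementary property exploited alongside incremental learning.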
A Survey of Adaptive Resonance Theory Neural Network Models for Engineering Applications
This survey samples from the ever-growing family of adaptive resonance theory (ART) neural network models used to perform the three primary machine learning modalities, namely unsupervised, supervised, and reinforcement learning. It comprises a representative list from classic to modern ART models, thereby painting a general picture of the architectures developed by researchers over the past 30 years. The learning dynamics of these ART models are briefly described, and their distinctive characteristics, such as code representation, long-term memory, and the corresponding geometric interpretation, are discussed. Useful engineering properties of ART (speed, configurability, explainability, parallelization, and hardware implementation) are examined along with current challenges. Finally, a compilation of online software libraries is provided. It is expected that this overview will be helpful to new and seasoned ART researchers.
Learning Hybrid Neuro-Fuzzy Classifier Models From Data: To Combine or Not to Combine?
To combine or not to combine? Though not a question of the same gravity as Shakespeare's "to be or not to be", it is examined in this paper in the context of a hybrid neuro-fuzzy pattern classifier design process. A general fuzzy min-max neural network with its basic learning procedure is used within six different algorithm-independent learning schemes. Various versions of cross-validation, resampling techniques, and data editing approaches, leading to the generation of either a single classifier or a multiple classifier system, are scrutinised and compared. The classification performance on unseen data, commonly used as a criterion for comparing competing designs, is augmented by four further criteria attempting to capture additional characteristics of classifier generation schemes. These include the ability to estimate the true classification error rate, classifier transparency, the computational complexity of the learning scheme, and the potential for adaptation to changing environments and new classes of data. One of the main questions examined is whether and when to use a single classifier or a combination of a number of component classifiers within a multiple classifier system.
Combining Neuro-Fuzzy Classifiers for Improved Generalisation and Reliability
In this paper, a combination of neuro-fuzzy classifiers for improved classification performance and reliability is considered. A general fuzzy min-max (GFMM) classifier with an agglomerative learning algorithm is used as the main building block. An alternative approach to combining individual classifier decisions, involving combination at the classifier model level, is proposed. The resulting classifier complexity and transparency are comparable with classifiers generated during a single cross-validation procedure, while the improved classification performance and reduced variance are comparable to an ensemble of classifiers with combined (averaged/voted) decisions. We also illustrate how combining at the model level can be used to speed up the training of GFMM classifiers for large data sets.
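The contrast between decision-level and model-level combination can be made concrete with a minimal sketch. This is an assumption-laden illustration, not the paper's procedure: `vote_combine` shows the conventional decision-level route (majority vote over per-classifier predictions), while `model_combine` shows the model-level alternative of pooling the component classifiers' hyperboxes into a single model, after which merging/pruning (e.g., agglomerative) would restore compactness.

```python
import numpy as np

def vote_combine(predictions):
    """Decision-level combining: majority vote over the class labels
    predicted by each classifier for each sample."""
    predictions = np.asarray(predictions)  # shape (n_classifiers, n_samples)
    n_classes = predictions.max() + 1
    # per-sample vote counts, shape (n_classes, n_samples)
    counts = np.apply_along_axis(np.bincount, 0, predictions,
                                 minlength=n_classes)
    return counts.argmax(axis=0)

def model_combine(hyperbox_sets):
    """Model-level combining: pool the labelled hyperboxes of all
    component classifiers into one model (merging/pruning would follow)."""
    return [box for model in hyperbox_sets for box in model]
```

Decision-level combining must keep and query every component classifier at prediction time, whereas the pooled-and-merged model is a single GFMM classifier of comparable transparency, which is the trade-off the abstract highlights.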