14 research outputs found
Learning Hybrid Neuro-Fuzzy Classifier Models From Data: To Combine or Not to Combine?
To combine or not to combine? Though not a question of the same gravity as Shakespeare's "to be or not
to be", it is examined in this paper in the context of a hybrid neuro-fuzzy pattern classifier design process. A general fuzzy
min-max neural network with its basic learning procedure is used within six different algorithm-independent learning
schemes. Various versions of cross-validation, resampling techniques, and data editing approaches, leading to the generation
of either a single classifier or a multiple classifier system, are scrutinised and compared. The classification performance on
unseen data, commonly used as a criterion for comparing competing designs, is augmented by four further
criteria attempting to capture additional characteristics of classifier generation schemes. These include: the ability
to estimate the true classification error rate, the classifier transparency, the computational complexity of the learning
scheme and the potential for adaptation to changing environments and new classes of data. One of the main questions
examined is whether and when to use a single classifier or a combination of a number of component classifiers within a
multiple classifier system.
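The combine-or-not choice can be pictured with a minimal sketch in plain Python. A toy threshold classifier stands in for the fuzzy min-max network, and a k-fold resampling scheme either keeps one trained model or majority-votes all of them; every name below is illustrative, not from the paper:

```python
from statistics import mean, mode

class ThresholdClassifier:
    """Toy 1-D stand-in for a component classifier (assumption: any learner
    trained on a data subset would fit this fit/predict interface)."""
    def fit(self, xs, ys):
        m0 = mean(x for x, y in zip(xs, ys) if y == 0)
        m1 = mean(x for x, y in zip(xs, ys) if y == 1)
        self.t = (m0 + m1) / 2          # threshold between the class means
        return self

    def predict(self, x):
        return int(x > self.t)

def kfold_models(xs, ys, k=3):
    """Train one model per fold, leaving that fold out (a resampling scheme)."""
    folds = [list(range(i, len(xs), k)) for i in range(k)]
    models = []
    for held_out in folds:
        tr = [i for i in range(len(xs)) if i not in held_out]
        models.append(ThresholdClassifier().fit([xs[i] for i in tr],
                                                [ys[i] for i in tr]))
    return models

def predict_single(models, x):      # "not to combine": keep one model
    return models[0].predict(x)

def predict_ensemble(models, x):    # "to combine": majority vote
    return mode(m.predict(x) for m in models)

xs = [0.1, 0.2, 0.3, 0.4, 1.6, 1.7, 1.8, 1.9, 0.15, 1.85]
ys = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]
models = kfold_models(xs, ys)
print(predict_single(models, 1.5), predict_ensemble(models, 1.5))
```

On easy data both routes agree; the paper's point is that they diverge on the other criteria (error estimation, transparency, cost, adaptability).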
Combining Neuro-Fuzzy Classifiers for Improved Generalisation and Reliability
In this paper a combination of neuro-fuzzy
classifiers for improved classification performance and reliability
is considered. A general fuzzy min-max (GFMM) classifier with an
agglomerative learning algorithm is used as the main building
block. An alternative approach to combining individual classifier
decisions involving the combination at the classifier model level is
proposed. The resulting classifier complexity and transparency are
comparable with classifiers generated during a single cross-validation
procedure, while the improved classification
performance and reduced variance are comparable to an ensemble
of classifiers with combined (averaged/voted) decisions. We also
illustrate how combining at the model level can be used for
speeding up the training of GFMM classifiers for large data sets.
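Combination at the model level, as opposed to averaging decisions, can be sketched as pooling the hyperboxes of the component classifiers into one model. This is an illustrative simplification of the idea, not the paper's exact procedure; it merges overlapping same-class boxes in a single pass:

```python
# Each classifier is a list of hyperboxes: (min_corner, max_corner, class_label).

def overlap(a, b):
    (v1, w1, _), (v2, w2, _) = a, b
    return all(max(lo1, lo2) <= min(hi1, hi2)
               for lo1, hi1, lo2, hi2 in zip(v1, w1, v2, w2))

def merge(a, b):
    """Smallest hyperbox enclosing both boxes; keeps the (shared) class label."""
    (v1, w1, c), (v2, w2, _) = a, b
    return ([min(x, y) for x, y in zip(v1, v2)],
            [max(x, y) for x, y in zip(w1, w2)], c)

def combine_models(*models):
    """Pool hyperboxes from several classifiers into one model (single-pass
    sketch; a full procedure would re-check overlaps after each merge)."""
    pooled = [box for m in models for box in m]
    out = []
    for box in pooled:
        for i, kept in enumerate(out):
            if kept[2] == box[2] and overlap(kept, box):
                out[i] = merge(kept, box)   # aggregate same-class overlaps
                break
        else:
            out.append(box)
    return out

m1 = [([0.0, 0.0], [0.4, 0.4], "A"), ([0.6, 0.6], [1.0, 1.0], "B")]
m2 = [([0.3, 0.3], [0.5, 0.5], "A")]
print(combine_models(m1, m2))
```

The combined model keeps hyperbox semantics, which is why its transparency stays comparable to a single classifier's.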
Nature-Inspired Adaptive Architecture for Soft Sensor Modelling
This paper gives a general overview of the challenges present in the research field of Soft Sensor
building and proposes a novel architecture for building Soft Sensors that copes with the identified challenges. The
architecture is inspired by, and makes use of, nature-related techniques for computational intelligence. Another aspect
addressed by the proposed architecture is the identified characteristics of process industry data. The data
recorded in the process industry usually contain a certain amount of missing values or samples exceeding the meaningful
range of the measurements, called data outliers. Other process industry data properties causing problems for the
modelling are the collinearity of the data, drifting data, and the different sampling rates of the particular hardware
sensors. It is these characteristics which are the source of the need for an adaptive behaviour of Soft Sensors. The
architecture reflects this need and provides mechanisms for the adaptation and evolution of the Soft Sensor at different
levels. The adaptation capabilities are provided by maintaining a variety of rather simple models. These particular
models, called paths in terms of the architecture, can for example focus on different partitions of the input data space, or
provide different adaptation speeds to changes in the data. The actual modelling techniques involved in the
architecture are data-driven computational learning approaches such as artificial neural networks, principal component
regression, etc.
An improved online learning algorithm for general fuzzy min-max neural network
This paper proposes an improved version of the current online learning
algorithm for a general fuzzy min-max neural network (GFMM) to tackle existing
issues concerning the expansion and contraction steps, as well as the way of dealing
with unseen data located on decision boundaries. These drawbacks lower the
classification performance, so the improved algorithm proposed in this study
addresses them. The proposed approach does not use the
contraction process for overlapping hyperboxes, which is more likely to
increase the error rate as shown in the literature. The empirical results
indicated improvements in the classification accuracy and stability of the
proposed method compared to the original version and other fuzzy min-max
classifiers. To reduce the sensitivity of this new online learning algorithm
to the presentation order of the training samples, a simple ensemble
method is also proposed.
Fuzzy min-max neural networks for categorical data: application to missing data imputation
The fuzzy min–max neural network classifier is a supervised learning method. This classifier takes the hybrid neural networks and fuzzy systems approach. All input variables in the network are required to correspond to continuously valued variables, and this can be a significant constraint in many real-world situations where there are not only quantitative but also categorical data. The usual way of dealing with this type of variable is to replace the categorical values by numerical ones and treat them as if they were continuously valued. But this method implicitly defines a possibly unsuitable metric for the categories. A number of different procedures have been proposed to tackle the problem. In this article, we present a new method. The procedure extends the fuzzy min–max neural network input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture. This provides for greater flexibility and wider application. The proposed method is then applied to missing data imputation in voting intention polls. The micro data—the set of the respondents’ individual answers to the questions—of this type of poll are especially suited for evaluating the method since they include a large number of numerical and categorical attributes.
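One way to picture such an extension, as an illustrative sketch only (not the article's actual fuzzy sets, operation, or architecture): numeric attributes keep min-max bounds, while each categorical attribute accumulates the set of category values the hyperbox has absorbed, avoiding any artificial numeric ordering of categories:

```python
class MixedHyperbox:
    """Hyperbox over mixed data: numeric dimensions use [v, w] bounds,
    categorical dimensions hold a set of seen categories (hypothetical
    simplification for illustration)."""
    def __init__(self, numeric, categorical, label):
        self.v = list(numeric)               # min corner
        self.w = list(numeric)               # max corner
        self.cats = [{c} for c in categorical]
        self.label = label

    def absorb(self, numeric, categorical):
        """Grow the box to cover a new training example."""
        self.v = [min(a, b) for a, b in zip(self.v, numeric)]
        self.w = [max(a, b) for a, b in zip(self.w, numeric)]
        for seen, c in zip(self.cats, categorical):
            seen.add(c)

    def contains(self, numeric, categorical):
        in_box = all(lo <= x <= hi
                     for lo, x, hi in zip(self.v, numeric, self.w))
        cats_ok = all(c in seen
                      for seen, c in zip(self.cats, categorical))
        return in_box and cats_ok

h = MixedHyperbox([0.2, 0.5], ["red"], label="yes")
h.absorb([0.4, 0.6], ["blue"])
print(h.contains([0.3, 0.55], ["red"]))
print(h.contains([0.3, 0.55], ["green"]))
```

A crisp set membership is used here for brevity; the article's fuzzy sets grade categorical membership rather than making it all-or-nothing.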
An Effective Multi-Resolution Hierarchical Granular Representation based Classifier using General Fuzzy Min-Max Neural Network
Motivated by the practical demands for simplification of data towards being consistent with human thinking and problem solving, as well as tolerance of uncertainty, information granules are becoming important entities in data processing at different levels of data abstraction. This paper proposes a method to construct classifiers from multi-resolution hierarchical granular representations (MRHGRC) using hyperbox fuzzy sets. The proposed approach forms a series of granular inferences hierarchically through many levels of abstraction. An attractive characteristic of our classifier is that it can maintain high accuracy in comparison to other fuzzy min-max models at a low degree of granularity by reusing the knowledge learned from lower levels of abstraction. In addition, our approach can reduce the data size significantly as well as handle the uncertainty and incompleteness associated with data in real-world applications. The construction process of the classifier consists of two phases. The first phase formulates the model at the greatest level of granularity, while the latter stage aims to reduce the complexity of the constructed model and deduce it from data at higher abstraction levels. Experimental analyses conducted comprehensively on both synthetic and real datasets indicated the efficiency of our method in terms of training time and predictive performance in comparison to other types of fuzzy min-max neural networks and common machine learning algorithms.
Random Hyperboxes
This paper proposes a simple yet powerful ensemble classifier, called Random
Hyperboxes, constructed from individual hyperbox-based classifiers trained on
random subsets of the sample and feature spaces of the training set. We also
show a generalization error bound of the proposed classifier based on the
strength of the individual hyperbox-based classifiers as well as the
correlation among them. The effectiveness of the proposed classifier is
analyzed using a carefully selected illustrative example and compared
empirically with other popular single and ensemble classifiers via 20 datasets
using statistical testing methods. The experimental results confirmed that our
proposed method outperformed other fuzzy min-max neural networks and popular
learning algorithms, and is competitive with other ensemble methods. Finally,
we identify the existing issues related to the generalization error bounds on
real datasets and point out potential research directions.
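The construction can be sketched as a random-subspace/random-subsample ensemble in plain Python. A nearest-centroid classifier stands in for the hyperbox-based base learner purely for brevity; the sampling and voting structure is the part being illustrated, and all names are ours:

```python
import random
from statistics import mean, mode

class CentroidClassifier:
    """Stand-in base learner (the paper uses hyperbox-based classifiers)."""
    def fit(self, rows, ys):
        self.centroids = {}
        for label in set(ys):
            pts = [r for r, y in zip(rows, ys) if y == label]
            self.centroids[label] = [mean(col) for col in zip(*pts)]
        return self

    def predict(self, row):
        # Nearest centroid by squared Euclidean distance.
        return min(self.centroids,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(row, self.centroids[c])))

def random_hyperboxes(rows, ys, n_learners=7, seed=0):
    """Train each base learner on a random subset of examples AND features."""
    rng = random.Random(seed)
    learners = []
    n, d = len(rows), len(rows[0])
    for _ in range(n_learners):
        feats = rng.sample(range(d), k=max(1, d // 2))  # random feature subset
        idx = rng.sample(range(n), k=n - 2)             # random sample subset
        sub = [[rows[i][f] for f in feats] for i in idx]
        learners.append((feats,
                         CentroidClassifier().fit(sub, [ys[i] for i in idx])))
    return learners

def predict(learners, row):
    # Majority vote over the individual learners' decisions.
    return mode(clf.predict([row[f] for f in feats]) for feats, clf in learners)

rows = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
ys = [0, 0, 0, 1, 1, 1]
ens = random_hyperboxes(rows, ys)
print(predict(ens, [0.5, 0.5]), predict(ens, [5.5, 5.5]))
```

The generalization error bound the paper proves is in terms of the strength of, and correlation among, such individual learners; the sketch only shows the construction, not the bound.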