Dynamic Bayesian Combination of Multiple Imperfect Classifiers
Classifier combination methods need to make best use of the outputs of
multiple, imperfect classifiers to enable higher accuracy classifications. In
many situations, such as when human decisions need to be combined, the base
decisions can vary enormously in reliability. A Bayesian approach to such
uncertain combination allows us to infer the differences in performance between
individuals and to incorporate any available prior knowledge about their
abilities when training data is sparse. In this paper we explore Bayesian
classifier combination, using the computationally efficient framework of
variational Bayesian inference. We apply the approach to real data from a large
citizen science project, Galaxy Zoo Supernovae, and show that our method far
outperforms other established approaches to imperfect decision combination. We
go on to analyse the putative community structure of the decision makers, based
on their inferred decision making strategies, and show that natural groupings
are formed. Finally we present a dynamic Bayesian classifier combination
approach and investigate the changes in base classifier performance over time.
Comment: 35 pages, 12 figures
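The combination step at the heart of this approach (though not the variational inference that learns the reliabilities) can be sketched as follows. Everything here is an illustrative assumption: the confusion matrices are fixed by hand rather than inferred, and all names are invented.

```python
import numpy as np

# Sketch of Bayesian decision combination for a binary task, assuming the
# per-classifier confusion matrices are already known. In the paper these
# are inferred by variational Bayes; here they are fixed by hand.
# conf[k][j, c] = P(classifier k outputs label c | true class is j)
conf = np.array([
    [[0.9, 0.1], [0.2, 0.8]],  # a fairly reliable decision maker
    [[0.6, 0.4], [0.4, 0.6]],  # a near-random decision maker
    [[0.8, 0.2], [0.1, 0.9]],  # another reliable decision maker
])
prior = np.array([0.5, 0.5])   # P(true class = j)

def combine(labels):
    """Posterior over the true class given each classifier's output label."""
    post = prior.copy()
    for k, c in enumerate(labels):
        post = post * conf[k][:, c]  # multiply in each classifier's likelihood
    return post / post.sum()

print(combine([1, 0, 1]))  # the two reliable voters outweigh the noisy one
```

The unreliable second classifier's dissenting vote barely moves the posterior, which is exactly the behaviour that plain majority voting cannot reproduce.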
Combining Neuro-Fuzzy Classifiers for Improved Generalisation and Reliability
In this paper a combination of neuro-fuzzy
classifiers for improved classification performance and reliability
is considered. A general fuzzy min-max (GFMM) classifier with
agglomerative learning algorithm is used as a main building
block. An alternative approach to combining individual classifier
decisions involving the combination at the classifier model level is
proposed. The resulting classifier's complexity and transparency are
comparable to those of classifiers generated during a single
cross-validation procedure, while the improved classification
performance and reduced variance are comparable to those of an ensemble
of classifiers with combined (averaged/voted) decisions. We also
illustrate how combining at the model level can be used to speed up
the training of GFMM classifiers for large data sets.
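For readers unfamiliar with the GFMM building block, the classic fuzzy min-max hyperbox membership function at its core can be sketched as below. The sensitivity parameter `gamma` and the array shapes are illustrative assumptions, and the agglomerative learning of the hyperboxes themselves is not shown.

```python
import numpy as np

def membership(x, v, w, gamma=4.0):
    """Fuzzy min-max membership of point x in the hyperbox [v, w].

    Returns 1.0 inside the box and decays linearly (at rate gamma)
    with the distance by which x falls outside it, per dimension.
    """
    def ramp(r):
        return np.clip(gamma * r, 0.0, 1.0)
    per_dim = 1.0 - ramp(x - w) - ramp(v - x)
    return float(per_dim.min())

box_min = np.array([0.2, 0.3])
box_max = np.array([0.6, 0.7])
print(membership(np.array([0.4, 0.5]), box_min, box_max))  # inside: 1.0
print(membership(np.array([0.7, 0.5]), box_min, box_max))  # just outside: 0.6
```

A pattern is assigned to the class of the hyperbox with the highest membership; combining at the model level then amounts to merging the hyperbox sets of several such classifiers rather than averaging their output decisions.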
Learning Hybrid Neuro-Fuzzy Classifier Models From Data: To Combine or Not to Combine?
To combine or not to combine? Though not a question of the same gravity as Shakespeare's "to be or not
to be", it is examined in this paper in the context of a hybrid neuro-fuzzy pattern classifier design process. A general fuzzy
min-max neural network with its basic learning procedure is used within six different algorithm-independent learning
schemes. Various versions of cross-validation, resampling techniques and data editing approaches, leading to a generation
of a single classifier or a multiple classifier system, are scrutinised and compared. The classification performance on
unseen data, commonly used as a criterion for comparing different competing designs, is augmented by four further
criteria attempting to capture various additional characteristics of classifier generation schemes. These include: the ability
to estimate the true classification error rate, the classifier transparency, the computational complexity of the learning
scheme and the potential for adaptation to changing environments and new classes of data. One of the main questions
examined is whether and when to use a single classifier or a combination of a number of component classifiers within a
multiple classifier system.
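As a toy illustration of the single-versus-combined question (not a reproduction of the paper's GFMM experiments), the sketch below compares one decision stump against a bootstrap-voted ensemble of stumps on noisy one-dimensional data; the dataset, the stump learner, and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Toy 1-D data: class 1 if x > 0, with 20% of labels flipped."""
    x = rng.uniform(-1, 1, n)
    y = (x > 0).astype(int)
    flip = rng.random(n) < 0.2
    return x, np.where(flip, 1 - y, y)

def train_stump(x, y):
    """Pick the threshold minimising training error over a fixed grid."""
    cuts = np.linspace(-1, 1, 41)
    errors = [np.mean((x > c).astype(int) != y) for c in cuts]
    return cuts[int(np.argmin(errors))]

x_test = np.linspace(-1, 1, 200)
y_test = (x_test > 0).astype(int)

single_accs, ensemble_accs = [], []
for _ in range(50):
    x, y = sample(60)
    # a single stump trained on the full sample
    t = train_stump(x, y)
    single_accs.append(np.mean((x_test > t).astype(int) == y_test))
    # majority vote over stumps trained on bootstrap resamples
    votes = np.zeros(len(x_test))
    for _ in range(15):
        idx = rng.integers(0, len(x), len(x))
        votes += (x_test > train_stump(x[idx], y[idx])).astype(int)
    ensemble_accs.append(np.mean((votes > 7.5).astype(int) == y_test))

print(f"single: {np.mean(single_accs):.3f} +/- {np.std(single_accs):.3f}")
print(f"voted:  {np.mean(ensemble_accs):.3f} +/- {np.std(ensemble_accs):.3f}")
```

Repeating the experiment many times, as above, is what lets one compare not only the mean accuracy of the two designs but also the variance across training samples, which is one of the additional criteria the paper argues for.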
Classifier Combination in Speech Recognition
In statistical pattern recognition, the principal task is to
classify abstract data sets. Instead of using robust but
computationally expensive algorithms, it is possible to combine
'weak' classifiers that can be employed in solving complex
classification tasks.
In this comparative study, we examine the effectiveness of the
commonly used hybrid schemes, especially those used for speech
recognition problems, concentrating on cases that employ different
combinations of classifiers.
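The idea that combining weak classifiers can solve complex tasks can be made concrete with the textbook Condorcet-style calculation below (a general illustration, not a result from this study): if n independent classifiers each have accuracy p > 0.5, a simple majority vote is correct more often than any one of them.

```python
from math import comb

def majority_vote_accuracy(p, n):
    """P(a strict majority of n independent classifiers, each with
    accuracy p, is correct); n is assumed odd so ties cannot occur."""
    need = n // 2 + 1  # votes required for a strict majority
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(need, n + 1))

print(majority_vote_accuracy(0.6, 1))   # one weak classifier: 0.6
print(majority_vote_accuracy(0.6, 11))  # eleven voters do noticeably better
```

Real base classifiers are correlated, so the independence assumption overstates the gain; that is one reason the choice of which classifiers to combine matters as much as the combination rule.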