Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules
This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge about the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for the classification of 14,966 ECG beats that were not seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization.
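The core modification described above can be sketched with scikit-learn, whose `StackingClassifier` offers a `passthrough` option that appends the raw input features to the base classifiers' outputs before the combiner is trained. The base learners and combiner below are illustrative stand-ins, not the paper's actual models.

```python
# Sketch of the Modified Stacked Generalization idea: the combiner
# (final_estimator) sees the base classifiers' outputs PLUS the raw
# input pattern, via passthrough=True. Models here are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("tree", DecisionTreeClassifier(random_state=0)),
        ("knn", KNeighborsClassifier())]

# passthrough=True concatenates the input features to the base outputs,
# which is exactly the modification the abstract describes.
modified = StackingClassifier(estimators=base,
                              final_estimator=LogisticRegression(max_iter=1000),
                              passthrough=True)
modified.fit(X_tr, y_tr)
print(round(modified.score(X_te, y_te), 3))
```

With `passthrough=False` the same object reduces to conventional Stacked Generalization, so the two variants can be compared directly.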
Analysis of the Correlation Between Majority Voting Error and the Diversity Measures in Multiple Classifier Systems
Combining classifiers by majority voting (MV) has recently emerged as an effective way of improving the performance of individual classifiers. However, the usefulness of applying MV is not always observed and is subject to the distribution of classification outputs in a multiple classifier system (MCS). Evaluating the MV error (MVE) for all combinations of classifiers in an MCS is a process of exponential complexity. This complexity can be reduced provided an explicit relationship is found between the MVE and some less complex function operating on the classifier outputs. Diversity measures operating on binary classification outputs (correct/incorrect) are studied in this paper as potential candidates for such functions. Their correlation with the MVE, interpreted as the quality of a measure, is thoroughly investigated using artificial and real-world datasets. Moreover, we propose a new diversity measure that efficiently exploits information coming from the whole MCS, rather than only a part of it.
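The two quantities the abstract relates can be computed directly from a binary correct/incorrect output matrix. The sketch below uses toy data and pairwise disagreement as one representative diversity measure; the paper studies several such measures.

```python
# Majority-voting error (MVE) and a pairwise diversity measure computed
# from binary correct(1)/incorrect(0) classifier outputs. Toy data;
# disagreement is one of many measures of this kind.
import numpy as np

# rows = classifiers, columns = test samples; 1 = correct, 0 = incorrect
outputs = np.array([[1, 1, 0, 1, 0, 1],
                    [1, 0, 1, 1, 0, 1],
                    [0, 1, 1, 1, 1, 0]])

# MV is correct on a sample when more than half the classifiers are correct
mv_correct = outputs.sum(axis=0) > outputs.shape[0] / 2
mve = 1.0 - mv_correct.mean()

# Pairwise disagreement: fraction of samples on which two classifiers differ
def disagreement(a, b):
    return np.mean(a != b)

pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]
avg_div = np.mean([disagreement(outputs[i], outputs[j]) for i, j in pairs])
print(round(mve, 3), round(avg_div, 3))  # 0.167 0.556
```

Evaluating such a measure over all classifier subsets is cheap compared with recomputing the MVE exhaustively, which is the complexity reduction the abstract targets.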
Deep heterogeneous ensemble.
In recent years, deep neural networks (DNNs) have emerged as a powerful technique in many areas of machine learning. Although DNNs have achieved great breakthroughs in processing images, video, audio, and text, they also have limitations, such as needing a large amount of labeled data for training and having a large number of parameters. Ensemble learning, meanwhile, builds a learning model by combining many different classifiers so that the ensemble performs better than any single classifier. In this study, we propose a deep ensemble framework called Deep Heterogeneous Ensemble (DHE) for supervised learning tasks. In each layer of our algorithm, the input data is passed through a feature selection method to remove irrelevant features and prevent overfitting. Cross-validation with K learning algorithms is then applied to the selected data to obtain the meta-data and the K base classifiers for the next layer. In this way, each layer outputs the meta-data that serves as the input data for the next layer, the base classifiers, and the indices of the selected meta-data. A combining algorithm is then applied to the meta-data of the last layer to obtain the final class prediction. Experiments on 30 datasets confirm that the proposed DHE outperforms a number of well-known benchmark algorithms.
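One layer of this scheme can be sketched as follows: feature selection, then K-fold cross-validated predictions from several heterogeneous learners form the meta-data for the next layer. This is a hypothetical simplification; the feature selector, the learners, and K are placeholders, not the paper's choices.

```python
# Sketch of one DHE-style layer: feature selection, then out-of-fold
# class probabilities from K heterogeneous learners become the meta-data
# passed to the next layer. Simplified illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=30, random_state=0)

# 1) remove irrelevant features before training the layer
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)

# 2) K heterogeneous learners produce out-of-fold class probabilities
learners = [DecisionTreeClassifier(random_state=0), GaussianNB(),
            KNeighborsClassifier()]
meta = np.hstack([cross_val_predict(clf, X_sel, y, cv=5,
                                    method="predict_proba")
                  for clf in learners])

# meta becomes the input data for the next layer
print(meta.shape)
```

Stacking such layers and applying a combiner to the final meta-data yields the overall deep heterogeneous ensemble described above.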
Deep Architectures and Ensembles for Semantic Video Classification
This work addresses the problem of accurate semantic labelling of short videos. To this end, a multitude of different deep nets is evaluated, ranging from traditional recurrent neural networks (LSTM, GRU) and temporally agnostic networks (FV, VLAD, BoW) to fully connected neural networks with mid-stage AV fusion, among others. Additionally, we propose a residual-architecture-based DNN for video classification with state-of-the-art classification performance at significantly reduced complexity. Furthermore, we propose four new approaches to diversity-driven multi-net ensembling: one based on a fast correlation measure and three incorporating a DNN-based combiner. We show that significant performance gains can be achieved by ensembling diverse nets, and we investigate the factors contributing to high diversity. Based on the extensive YouTube-8M dataset, we provide an in-depth evaluation and analysis of the ensembles' behaviour. We show that the performance of the ensemble is state-of-the-art, achieving the highest accuracy on the YouTube-8M Kaggle test data. The ensemble of classifiers was also evaluated on the HMDB51 and UCF101 datasets, and the resulting method achieves accuracy comparable with state-of-the-art methods using similar input features.
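The correlation-based route to diversity-driven ensembling can be illustrated with a greedy selection loop: repeatedly add the model whose outputs correlate least with the current ensemble. The data and selection rule below are a hypothetical toy, not the paper's actual procedure.

```python
# Toy sketch of correlation-based diversity selection: prefer adding the
# model whose scores correlate least with the current ensemble average.
# Random stand-in outputs; not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_samples = 5, 200
scores = rng.normal(size=(n_models, n_samples))  # stand-in model outputs
scores[1] = scores[0] + 0.01 * rng.normal(size=n_samples)  # near-duplicate

selected = [0]                      # start from an arbitrary first model
while len(selected) < 3:
    ens = scores[selected].mean(axis=0)
    # Pearson correlation of each remaining model with the ensemble
    corrs = {m: np.corrcoef(scores[m], ens)[0, 1]
             for m in range(n_models) if m not in selected}
    selected.append(min(corrs, key=corrs.get))  # most diverse next

print(selected)
```

The near-duplicate model 1 correlates almost perfectly with the ensemble and is therefore avoided, which is the behaviour a diversity-driven ensembler wants.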
Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction
Multi-modal regression is important when forecasting nonstationary processes or processes with a complex mixture of distributions. It can be tackled with multiple-hypotheses frameworks, though combining them efficiently in a learning model is difficult. A Structured Radial Basis Function Network is presented as an ensemble of multiple-hypotheses predictors for regression problems. The predictors are regression models of any type that can form centroidal Voronoi tessellations, which are a function of their losses during training. It is proved that this structured model can efficiently interpolate this tessellation and approximate the multiple-hypotheses target distribution, and that this is equivalent to interpolating the meta-loss of the predictors, the loss being a zero set of the interpolation error. The model has a fixed-point iteration algorithm between the predictors and the centers of the basis functions. Diversity in learning can be controlled parametrically by truncating the tessellation formation with the losses of individual predictors. A closed-form least-squares solution is presented, which, to the authors' knowledge, is the fastest solution in the literature for multiple hypotheses and structured predictions. Superior generalization performance and computational efficiency are achieved using only two-layer neural networks as predictors, with diversity control as a key component of success. A gradient-descent approach that is loss-agnostic with regard to the predictors is also introduced. The expected value of the loss of the structured model with Gaussian basis functions is computed, finding that correlation between predictors is not an appropriate tool for diversification. The experiments show outperformance with respect to the top competitors in the literature. (Comment: 63 pages, 40 figures)
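The closed-form least-squares fitting the abstract refers to can be illustrated on a plain Gaussian radial basis function layer. The centres, width, and target function below are ad hoc choices for illustration and are not the structured model itself.

```python
# Minimal sketch of a closed-form least-squares fit for a Gaussian radial
# basis function layer. Centres and width chosen ad hoc; this illustrates
# the style of solution, not the structured network of the abstract.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * x[:, 0]) + 0.05 * rng.normal(size=200)  # noisy target

centres = np.linspace(-3, 3, 15)[None, :]
width = 0.5
Phi = np.exp(-((x - centres) ** 2) / (2 * width ** 2))  # design matrix

# Closed-form least-squares weights: w = argmin ||Phi w - y||^2
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w

print(round(float(np.mean((y_hat - y) ** 2)), 4))  # small training MSE
```

Because the design matrix is fixed once the centres are known, the weights come from a single linear solve, which is why such solutions are fast compared with iterative training.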
Incremental construction of classifier and discriminant ensembles
We discuss approaches to incrementally constructing an ensemble. The first constructs an ensemble of classifiers by choosing a subset from a larger set, and the second constructs an ensemble of discriminants, where a classifier is used for some classes only. We investigate criteria including accuracy, significant improvement, diversity, correlation, and the role of search direction. For discriminant ensembles, we test subset selection and trees. Fusion is by voting or by a linear model. Using 14 classifiers on 38 datasets, incremental search finds small, accurate ensembles in polynomial time. The discriminant ensemble uses a subset of discriminants and is simpler, interpretable, and accurate. We see that an incremental ensemble has higher accuracy than bagging and the random subspace method, and comparable accuracy to AdaBoost, but with fewer classifiers. We would like to thank the three anonymous referees and the editor for their constructive comments, pointers to related literature, and pertinent questions, which allowed us to better situate our work as well as organize the manuscript and improve the presentation. This work has been supported by the Turkish Academy of Sciences in the framework of the Young Scientist Award Program (EA-TUBA-GEBIP/2001-1-1), Bogazici University Scientific Research Project 05HA101, and Turkish Scientific Technical Research Council TUBITAK EEEAG 104EO79.
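The incremental subset-selection approach can be sketched as a greedy forward search: add the classifier that most improves majority-vote accuracy on a validation set, and stop when no candidate helps. The pool of models and the stopping rule below are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of incremental (forward) ensemble construction: greedily add the
# classifier that most improves majority-vote validation accuracy.
# Illustrative of the approach, not the paper's code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

pool = [DecisionTreeClassifier(random_state=0), GaussianNB(),
        KNeighborsClassifier(), LogisticRegression(max_iter=1000)]
preds = [clf.fit(X_tr, y_tr).predict(X_va) for clf in pool]

def vote_acc(idxs):
    # binary labels, so majority vote is the mean prediction thresholded
    votes = np.mean([preds[i] for i in idxs], axis=0) > 0.5
    return np.mean(votes == y_va)

chosen, best = [], 0.0
while True:
    gains = {i: vote_acc(chosen + [i])
             for i in range(len(pool)) if i not in chosen}
    if not gains or max(gains.values()) <= best:
        break
    i = max(gains, key=gains.get)
    chosen, best = chosen + [i], gains[i]

print(chosen, round(best, 3))
```

Because each step evaluates only the remaining candidates, the search stays polynomial in the pool size, in line with the polynomial-time claim in the abstract.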