Ensemble of Single‐Layered Complex‐Valued Neural Networks for Classification Tasks
This paper presents ensemble approaches based on single-layered complex-valued
neural networks (CVNNs) for solving real-valued classification problems. Each
component CVNN of an ensemble uses a recently proposed activation function
for its complex-valued neurons (CVNs). A gradient-descent-based learning
algorithm was used to train the component CVNNs. We applied two ensemble
methods, negative correlation learning and bagging, to create the ensembles.
Experimental results on a number of real-world benchmark problems showed a
substantial performance improvement of the ensembles over individual
single-layered CVNN classifiers. Furthermore, the generalization performance
was nearly equivalent to that obtained by ensembles of real-valued
multilayer neural networks.
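As a rough illustration of the components involved, the sketch below bags single-neuron complex-valued classifiers trained by gradient descent. The phase encoding, the split-tanh activation (a stand-in for the paper's proposed activation, which is not reproduced here), and all hyperparameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_encode(X):
    # Map real features scaled to [0, 1] onto the unit circle -- a common
    # CVNN input encoding, assumed here rather than taken from the paper.
    return np.exp(1j * np.pi * X)

class SingleLayerCVN:
    """One complex-valued neuron trained by gradient descent on squared error.
    Uses a split-tanh activation as a stand-in for the paper's activation."""

    def __init__(self, n_features, lr=0.5):
        self.wr = rng.normal(0.0, 0.1, n_features)  # real part of weights
        self.wi = rng.normal(0.0, 0.1, n_features)  # imaginary part of weights
        self.lr = lr

    def _forward(self, Z):
        net_r = Z.real @ self.wr - Z.imag @ self.wi   # Re(Z w)
        net_i = Z.real @ self.wi + Z.imag @ self.wr   # Im(Z w)
        out = (np.tanh(net_r) + np.tanh(net_i)) / 2   # real-valued score
        return out, net_r, net_i

    def fit(self, Z, y, epochs=300):
        # y holds 0/1 labels; gradients follow from the chain rule applied
        # to the real and imaginary weight parts separately.
        for _ in range(epochs):
            out, net_r, net_i = self._forward(Z)
            err = out - y
            dr = err * (1 - np.tanh(net_r) ** 2) / 2  # d loss / d net_r
            di = err * (1 - np.tanh(net_i) ** 2) / 2  # d loss / d net_i
            self.wr -= self.lr * (dr @ Z.real + di @ Z.imag) / len(y)
            self.wi -= self.lr * (di @ Z.real - dr @ Z.imag) / len(y)
        return self

    def predict(self, Z):
        return (self._forward(Z)[0] > 0.5).astype(int)

def bagging_ensemble(Z, y, n_members=9):
    # Train each member on a bootstrap resample of the training set.
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(y), len(y))
        members.append(SingleLayerCVN(Z.shape[1]).fit(Z[idx], y[idx]))
    return members

def vote(members, Z):
    # Majority vote over the members' hard predictions.
    return (np.mean([m.predict(Z) for m in members], axis=0) > 0.5).astype(int)

# Usage: Z = phase_encode(X); members = bagging_ensemble(Z, y); vote(members, Z)
```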
Ensemble deep learning: A review
Ensemble learning combines several individual models to obtain better
generalization performance. Currently, deep learning models with multilayer
processing architectures show better performance than shallow or
traditional classification models. Deep ensemble learning models
combine the advantages of both deep learning and ensemble
learning, so that the final model has better generalization performance. This
paper reviews the state-of-the-art deep ensemble models and thus serves as an
extensive summary for researchers. The ensemble models are broadly
categorised into bagging, boosting and stacking ensembles; negative-correlation-based
deep ensemble models; explicit/implicit ensembles;
homogeneous/heterogeneous ensembles; decision fusion strategies; and unsupervised,
semi-supervised, reinforcement-learning-based, online/incremental and multilabel-based
deep ensemble models. Applications of deep ensemble models in different
domains are also briefly discussed. Finally, we conclude this paper with some
future recommendations and research directions.
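To make one category of this taxonomy concrete, the sketch below fuses decisions from a heterogeneous ensemble by averaging class probabilities (soft voting). The base models and dataset are illustrative choices, not taken from the review; each base model could equally be a deep network.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Heterogeneous base models trained on the same data.
models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=100, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
]
for m in models:
    m.fit(X, y)

# Decision fusion: average the predicted class probabilities, then take
# the argmax as the fused prediction.
proba = np.mean([m.predict_proba(X) for m in models], axis=0)
fused = proba.argmax(axis=1)
```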
Vote-boosting ensembles
Vote-boosting is a sequential ensemble learning method in which the
individual classifiers are built on different weighted versions of the training
data. To build a new classifier, the weight of each training instance is
determined in terms of the degree of disagreement among the current ensemble
predictions for that instance. For low class-label noise levels, especially
when simple base learners are used, emphasis should be placed on instances for
which the disagreement rate is high. When more flexible classifiers are used
and as the noise level increases, the emphasis on these uncertain instances
should be reduced. In fact, at sufficiently high levels of class-label noise,
the focus should be on instances on which the ensemble classifiers agree. The
optimal type of emphasis can be automatically determined using
cross-validation. An extensive empirical analysis using the beta distribution
as the emphasis function illustrates that vote-boosting is an effective method for
generating ensembles that are both accurate and robust.
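A minimal sketch of this weighting scheme as we read it follows: each round, the per-instance disagreement rate of the ensemble built so far is passed through a beta density to produce training weights. The disagreement measure, base learner, and beta parameters are illustrative; the paper selects the emphasis type by cross-validation.

```python
import numpy as np
from scipy.stats import beta
from sklearn.tree import DecisionTreeClassifier

def vote_boosting(X, y, T=50, a=3.0, b=1.5):
    # a > b emphasizes high-disagreement instances (suited to low label
    # noise); a < b shifts emphasis toward instances the ensemble agrees on.
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble, pos_votes = [], np.zeros(n)
    for t in range(1, T + 1):
        clf = DecisionTreeClassifier(max_depth=2).fit(X, y, sample_weight=w)
        ensemble.append(clf)
        pos_votes += (clf.predict(X) == 1)
        # Disagreement rate: 0 when the ensemble is unanimous, 1 when split.
        p = pos_votes / t
        disagreement = 1.0 - np.abs(2.0 * p - 1.0)
        w = beta.pdf(np.clip(disagreement, 1e-6, 1 - 1e-6), a, b)
        w /= w.sum()
    return ensemble
```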
Deep Architectures and Ensembles for Semantic Video Classification
This work addresses the problem of accurate semantic labelling of short
videos. To this end, we evaluate a multitude of deep networks, ranging from
traditional recurrent neural networks (LSTM, GRU) and temporally agnostic
networks (FV, VLAD, BoW) to fully connected neural networks with mid-stage AV
fusion, among others. Additionally, we propose a residual-architecture-based
DNN for video classification, with state-of-the-art classification performance
at significantly reduced complexity. Furthermore, we propose four new
approaches to diversity-driven multi-net ensembling, one based on a fast
correlation measure and three incorporating a DNN-based combiner. We show that
significant performance gains can be achieved by ensembling diverse networks and
we investigate factors contributing to high diversity. Based on the extensive
YouTube-8M dataset, we provide an in-depth evaluation and analysis of their
behaviour. We show that the performance of the ensemble is state-of-the-art,
achieving the highest accuracy on the YouTube-8M Kaggle test data. The
ensemble of classifiers was also evaluated on the HMDB51 and UCF101 datasets,
and the resulting method achieves accuracy comparable with state-of-the-art
methods using similar input features.
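One plausible reading of correlation-based, diversity-driven selection is a greedy forward pass that repeatedly adds the net least correlated with those already chosen; the sketch below implements that reading, which may differ from the paper's exact procedure.

```python
import numpy as np

def greedy_diverse_subset(P, k, seed=0):
    """P: (n_models, n_samples) validation scores of each net.
    Returns indices of k models chosen for low mutual prediction correlation."""
    C = np.corrcoef(P)          # pairwise correlation of model predictions
    chosen = [seed]             # seed, e.g. with the single best model
    while len(chosen) < k:
        rest = [i for i in range(len(P)) if i not in chosen]
        # Add the candidate least correlated, on average, with the subset.
        nxt = min(rest, key=lambda i: C[i, chosen].mean())
        chosen.append(nxt)
    return chosen
```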
Ensemble Learning for Free with Evolutionary Algorithms?
Evolutionary learning proceeds by evolving a population of classifiers, from
which it generally returns (with some notable exceptions) the single
best-of-run classifier as the final result. Meanwhile, ensemble learning,
one of the most effective approaches in supervised machine learning of the
last decade, proceeds by building a population of diverse classifiers. Ensemble
learning with evolutionary computation is thus receiving increasing attention. The
Evolutionary Ensemble Learning (EEL) approach presented in this paper features
two contributions. First, a new fitness function, inspired by co-evolution and
enforcing classifier diversity, is presented. Second, a new selection
criterion based on the classification margin is proposed. This criterion is
used to extract the classifier ensemble either from the final population only
(Off-line) or incrementally along evolution (On-line). Experiments on a set of
benchmark problems show that Off-line outperforms single-hypothesis
evolutionary learning and state-of-the-art boosting, and generates smaller
classifier ensembles.
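A sketch of margin-based Off-line extraction as we understand it: the ensemble margin of a candidate subset is estimated from the population's votes, and classifiers are added greedily. The greedy loop and the mean-margin objective are our assumptions, not details from the paper.

```python
import numpy as np

def ensemble_margin(votes, y):
    # votes: (n_clf, n_samples) predicted labels. For binary problems the
    # margin is the vote share for the true class minus the share against it.
    correct = (votes == y).mean(axis=0)
    return 2.0 * correct - 1.0

def offline_extract(votes, y, k):
    # Greedily grow a k-member ensemble from the final population's votes,
    # maximizing the mean classification margin at each step.
    chosen = []
    for _ in range(k):
        rest = [i for i in range(len(votes)) if i not in chosen]
        best = max(rest,
                   key=lambda i: ensemble_margin(votes[chosen + [i]], y).mean())
        chosen.append(best)
    return chosen
```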
Bagging ensemble selection for regression
Bagging ensemble selection (BES) is a relatively new ensemble learning strategy. The strategy can be seen as an ensemble of the ensemble selection from libraries of models (ES) strategy. Previous experimental results on binary classification problems have shown that, using random trees as base classifiers, BES-OOB (the most successful variant of BES) is competitive with, and in many cases superior to, other ensemble learning strategies such as the original ES algorithm, stacking with linear regression, random forests and boosting. Motivated by the promising results in classification, this paper examines the predictive performance of the BES-OOB strategy for regression problems. Our results show that BES-OOB outperforms Stochastic Gradient Boosting and Bagging when using regression trees as the base learners. They also suggest that the advantage of using a diverse model library becomes clear when the library is relatively large. Finally, we present encouraging results indicating that the non-negative least squares algorithm is a viable approach for pruning an ensemble of ensembles.
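The non-negative least squares pruning step can be sketched directly with SciPy: out-of-bag predictions of the base models form the design matrix, and models receiving zero weight are dropped. The setup is illustrative; BES-OOB's library construction is not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

def nnls_ensemble_weights(P_oob, y_oob):
    """P_oob: (n_samples, n_models) out-of-bag predictions of each base model.
    Solves min_w ||P_oob w - y_oob|| subject to w >= 0; models receiving
    zero weight are effectively pruned from the ensemble."""
    w, _ = nnls(P_oob, y_oob)
    return w / w.sum() if w.sum() > 0 else w

# Prediction of the pruned, weighted ensemble on new data P_new
# (same column order as P_oob): P_new @ w
```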