A Comparative Analysis of Ensemble Classifiers: Case Studies in Genomics
The combination of multiple classifiers using ensemble methods is
increasingly important for making progress in a variety of difficult prediction
problems. We present a comparative analysis of several ensemble methods through
two case studies in genomics, namely the prediction of genetic interactions and
protein functions, to demonstrate their efficacy on real-world datasets and
draw useful conclusions about their behavior. These methods include simple
aggregation, meta-learning, cluster-based meta-learning, and ensemble selection
using heterogeneous classifiers trained on resampled data to improve the
diversity of their predictions. We present a detailed analysis of these methods
across four genomics datasets and find that the best of them offer
statistically significant improvements over the state of the art in their
respective domains. In addition, we establish a novel connection between
ensemble selection and meta-learning, demonstrating how both of these disparate
methods strike a balance between ensemble diversity and performance.
Comment: 10 pages, 3 figures, 8 tables; to appear in Proceedings of the 2013
International Conference on Data Mining
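The ensemble selection strategy named in the abstract is usually realized as greedy forward selection from a library of base classifiers (in the style of Caruana et al.): models are repeatedly added, with replacement, whichever most improves the ensemble's validation accuracy. A minimal numpy sketch, assuming the base classifiers' 0/1 validation predictions are already available (the toy library and labels below are invented for illustration, not from the paper):

```python
import numpy as np

def ensemble_selection(preds, y_val, max_size=10):
    """Greedy forward ensemble selection over a library of classifiers.
    preds: (n_models, n_samples) array of 0/1 validation predictions.
    Models may be re-selected (selection with replacement); the ensemble
    predicts by majority vote of the selected members."""
    chosen = []
    vote_sum = np.zeros(preds.shape[1])
    for _ in range(max_size):
        best_i, best_acc = None, -1.0
        for i in range(preds.shape[0]):
            cand = vote_sum + preds[i]
            # accuracy of the candidate ensemble's majority vote
            acc = np.mean((cand / (len(chosen) + 1) > 0.5) == y_val)
            if acc > best_acc:
                best_i, best_acc = i, acc
        chosen.append(best_i)
        vote_sum += preds[best_i]
    return chosen

# toy library: three base classifiers on six validation points
y = np.array([1, 0, 1, 1, 0, 0])
preds = np.array([
    [1, 0, 1, 0, 0, 0],   # mostly correct
    [1, 1, 1, 1, 0, 0],   # over-predicts class 1
    [0, 0, 1, 1, 0, 1],   # different error pattern
])
print(ensemble_selection(preds, y, max_size=3))
```

In practice the selection is run on a held-out set, and the library is built from heterogeneous learners trained on resampled data, as the abstract describes.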
Analysis of the Correlation Between Majority Voting Error and the Diversity Measures in Multiple Classifier Systems
Combining classifiers by majority voting (MV) has recently emerged as an
effective way of improving the performance of individual classifiers.
However, the benefit of applying MV is not always observed and depends on
the distribution of classification outputs in a multiple classifier
system (MCS). Evaluating the majority voting error (MVE) for all
combinations of classifiers in an MCS is a process of exponential
complexity. This complexity can be reduced provided an explicit
relationship is found between the MVE and some less complex function
operating on classifier outputs. Diversity measures operating on binary
classification outputs (correct/incorrect) are studied in this paper as
potential candidates for such functions. Their correlation with the MVE,
interpreted as the quality of a measure, is thoroughly investigated using
artificial and real-world datasets. Moreover, we propose a new diversity
measure that efficiently exploits information coming from the whole MCS,
rather than only the part to which it is applied.
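The quantities involved are easy to compute from the binary correct/incorrect outputs. Below is a minimal numpy sketch of the majority voting error and two standard pairwise diversity measures, the disagreement measure and Yule's Q-statistic (these are the classical measures studied in this line of work, not the new whole-MCS measure the paper proposes; the 3-classifier oracle matrix is an invented example):

```python
import numpy as np

def mv_error(correct):
    """Majority voting error from a (n_classifiers, n_samples) 0/1 matrix
    of oracle outputs (1 = correct, 0 = incorrect); ties count as errors."""
    votes = correct.sum(axis=0)
    return np.mean(votes <= correct.shape[0] / 2)

def disagreement(ci, cj):
    """Pairwise disagreement: fraction of samples on which exactly one
    of the two classifiers is correct."""
    return np.mean(ci != cj)

def q_statistic(ci, cj):
    """Yule's Q on oracle outputs: +1 for coincident errors, -1 for
    complementary errors. (Undefined when the denominator is zero.)"""
    n11 = np.sum((ci == 1) & (cj == 1))
    n00 = np.sum((ci == 0) & (cj == 0))
    n10 = np.sum((ci == 1) & (cj == 0))
    n01 = np.sum((ci == 0) & (cj == 1))
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

# three classifiers, each only 4/6 correct, but with complementary errors:
c = np.array([[1, 1, 0, 1, 0, 1],
              [1, 0, 1, 1, 1, 0],
              [0, 1, 1, 0, 1, 1]])
print(mv_error(c))                    # the majority vote makes no errors
print(q_statistic(c[0], c[1]))        # strongly negative: diverse pair
```

The toy matrix illustrates the phenomenon the paper studies: highly diverse (negatively correlated) errors can drive the MV error well below the individual error rates.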
Hierarchical Multi-resolution Mesh Networks for Brain Decoding
We propose a new framework, called Hierarchical Multi-resolution Mesh
Networks (HMMNs), which establishes a set of brain networks at multiple
time resolutions of the fMRI signal to represent the underlying cognitive
process. The suggested framework first decomposes the fMRI signal into
various frequency subbands using wavelet transforms. Then, a brain
network, called a mesh network, is formed at each subband by ensembling a
set of local meshes. The locality around each anatomic region is defined
with respect to a neighborhood system based on functional connectivity.
The arc weights of a mesh are estimated by ridge regression among the
average region time series. In the final step, the adjacency matrices of
the mesh networks obtained at different subbands are ensembled for brain
decoding under a hierarchical learning architecture, called fuzzy stacked
generalization (FSG). Our results on the Human Connectome Project
task-fMRI dataset show that the suggested HMMN model can successfully
discriminate tasks by extracting complementary information from the mesh
arc weights of multiple subbands. We study the topological properties of
the mesh networks at different resolutions using the network measures of
node degree, node strength, betweenness centrality, and global
efficiency, and investigate the connectivity of anatomic regions during a
cognitive task. We observe significant variations among the network
topologies obtained for different subbands. We also analyze the diversity
properties of the classifier ensemble trained on the mesh networks of
multiple subbands, and observe that the classifiers in the ensemble
collaborate to fuse the complementary information carried in each
subband. We conclude that fMRI data recorded during a cognitive task
embed diverse information across the anatomic regions at each resolution.
Comment: 18 pages
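The core mesh step, estimating a region's arc weights by ridge regression of its average time series on those of its neighbors, has a simple closed form. A numpy sketch under simplifying assumptions: the neighborhood is passed in as a plain index list (the paper derives it from functional connectivity), the signal is a single synthetic subband rather than a wavelet decomposition, and the toy data are invented:

```python
import numpy as np

def mesh_arc_weights(ts, neighbors, lam=1.0):
    """Estimate local-mesh arc weights by ridge regression: each region's
    average time series is regressed on its neighbors' time series.
    ts: (n_timepoints, n_regions); neighbors: dict region -> index list.
    Returns dict region -> weight vector over its neighbors."""
    weights = {}
    for i, nbrs in neighbors.items():
        X = ts[:, nbrs]                       # neighbor time series
        y = ts[:, i]                          # target region
        # closed-form ridge: w = (X^T X + lam * I)^{-1} X^T y
        A = X.T @ X + lam * np.eye(len(nbrs))
        weights[i] = np.linalg.solve(A, X.T @ y)
    return weights

# toy data: 120 volumes, 5 regions; region 0 is driven by regions 1 and 2
rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 5))
ts[:, 0] = 0.6 * ts[:, 1] - 0.3 * ts[:, 2] + 0.1 * rng.standard_normal(120)
w = mesh_arc_weights(ts, {0: [1, 2, 3]})
print(np.round(w[0], 2))   # recovers roughly (0.6, -0.3, 0.0)
```

In the full framework this estimation would be repeated per subband, and the resulting adjacency matrices stacked as features for the FSG decoder.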
Advancing ensemble learning performance through data transformation and classifiers fusion in granular computing context
Classification is a special type of machine learning task, which is essentially achieved by training a classifier that can then be used to classify new instances. In order to train a high-performance classifier, it is crucial to extract representative features from raw data such as text and images. In reality, instances can be highly diverse even if they belong to the same class, i.e. different instances of the same class can exhibit very different characteristics. For example, in a facial expression recognition task, some instances may be better described by Histogram of Oriented Gradients features, while others may be better represented by Local Binary Patterns features. From this point of view, it is necessary to adopt ensemble learning to train different classifiers on different feature sets and to fuse these classifiers towards more accurate classification of each instance. On the other hand, different algorithms are likely to show different suitability for training classifiers on different feature sets, which again indicates the need for ensemble learning to advance classification performance. Furthermore, a multi-class classification task becomes increasingly complex as the number of classes grows, i.e. it becomes more difficult to discriminate between the classes. In this paper, we propose an ensemble learning framework that transforms a multi-class classification task into a number of binary classification tasks and fuses classifiers trained on different feature sets using different learning algorithms. We report experimental studies on the UCI Sonar dataset and the CK+ facial expression recognition dataset. The results show that the proposed ensemble learning approach leads to considerable advances in classification performance in comparison with popular learning approaches, including decision tree ensembles and deep neural networks.
In practice, the proposed approach can be used to build an ensemble of ensembles that acts as a group of expert systems, achieving more stable pattern recognition performance than a single classifier acting as a single expert system.
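The two decomposition-and-fusion ideas in the abstract — turning a multi-class task into binary tasks, and fusing classifiers trained on different feature sets — can be sketched together in a few lines of numpy. This is only an illustrative stand-in, not the paper's framework: the base learner here is a trivial nearest-centroid scorer, the decomposition is plain one-vs-rest, fusion is score averaging, and the two-view toy data are invented:

```python
import numpy as np

def centroid_scores(X_tr, y_bin, X_te):
    # Tiny stand-in base learner for one binary task: score each test
    # point by its distance to the negative vs positive class centroid.
    pos = X_tr[y_bin == 1].mean(axis=0)
    neg = X_tr[y_bin == 0].mean(axis=0)
    return (np.linalg.norm(X_te - neg, axis=1)
            - np.linalg.norm(X_te - pos, axis=1))

def ovr_fused_predict(train_views, y, test_views):
    """One-vs-rest decomposition of a multi-class task: one binary base
    classifier per (class, feature set); the per-class scores from the
    different feature sets are fused by simple averaging."""
    classes = np.unique(y)
    scores = np.zeros((test_views[0].shape[0], classes.size))
    for k, c in enumerate(classes):
        y_bin = (y == c).astype(int)
        for X_tr, X_te in zip(train_views, test_views):
            scores[:, k] += centroid_scores(X_tr, y_bin, X_te)
    return classes[scores.argmax(axis=1)]

# toy data: 3 classes observed through two hypothetical feature sets
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
y = np.repeat([0, 1, 2], 30)
view1 = centers[y] + 0.3 * rng.standard_normal((90, 2))
view2 = 2.0 * centers[y] + 0.3 * rng.standard_normal((90, 2))
pred = ovr_fused_predict([view1, view2], y, [view1, view2])
print(np.mean(pred == y))
```

In the paper's setting the base learners would instead be trained by different algorithms on genuinely different feature extractions (e.g. HOG vs LBP), with the same decompose-then-fuse structure.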
TSE-IDS: A Two-Stage Classifier Ensemble for Intelligent Anomaly-based Intrusion Detection System
Intrusion detection systems (IDS) play a pivotal role in computer security by discovering and repelling malicious activities in computer networks. Anomaly-based IDS, in particular, rely on classification models trained on historical data to discover such malicious activities. In this paper, an improved IDS based on hybrid feature selection and a two-level classifier ensemble is proposed. A hybrid feature selection technique comprising three methods, i.e. particle swarm optimization, the ant colony algorithm, and a genetic algorithm, is used to reduce the feature size of the training datasets (NSL-KDD and UNSW-NB15 are considered in this paper). Features are selected based on the classification performance of a reduced error pruning tree (REPT) classifier. Then, a two-level classifier ensemble based on two meta-learners, i.e. rotation forest and bagging, is proposed. On the NSL-KDD dataset, the proposed classifier shows 85.8% accuracy, 86.8% sensitivity, and a 88.0% detection rate, remarkably outperforming other classification techniques recently proposed in the literature. The results on the UNSW-NB15 dataset also improve on those achieved by several state-of-the-art techniques. Finally, to verify the results, a two-step statistical significance test is conducted. This has not usually been considered in IDS research thus far and therefore adds value to the experimental results achieved by the proposed classifier.
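One natural way to combine feature subsets proposed by several search heuristics (PSO, the ant colony algorithm, and a GA in the paper) is to keep a feature when enough of the searchers agree on it. The paper itself selects features by REPT classification performance, so the vote-based combination below is a hypothetical illustration only, with invented subsets over ten features:

```python
import numpy as np

def vote_features(subsets, n_features, min_votes=2):
    """Combine feature subsets proposed by several search heuristics:
    keep a feature if at least `min_votes` searchers selected it.
    subsets: iterable of sets/lists of feature indices."""
    votes = np.zeros(n_features, dtype=int)
    for s in subsets:
        votes[list(s)] += 1        # each searcher casts one vote per feature
    return np.flatnonzero(votes >= min_votes)

# hypothetical subsets from three searchers over 10 features
pso = {0, 2, 3, 7}
aco = {0, 3, 5, 7, 9}
ga  = {1, 3, 7, 8}
print(vote_features([pso, aco, ga], 10))
```

The surviving subset would then feed the two-level ensemble (rotation forest and bagging as meta-learners) described in the abstract.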