An Empirical Study on the Joint Impact of Feature Selection and Data Re-sampling on Imbalance Classification
In predictive tasks, real-world datasets often present different degrees of imbalanced (i.e., long-tailed or skewed) distributions.
While the majority (the head, or most frequent) classes have sufficient samples, the minority (the tail, or
less frequent or rare) classes can be under-represented by a rather limited number of samples. Data pre-processing
has been shown to be very effective in dealing with such problems. On one hand, data re-sampling is a common
approach to tackling class imbalance. On the other hand, dimension reduction, which reduces the feature space, is a
conventional technique for reducing noise and inconsistencies in a dataset. However, the possible synergy between
feature selection and data re-sampling for high-performance imbalance classification has rarely been investigated before.
To address this issue, we carry out a comprehensive empirical study on the joint influence of feature selection and
re-sampling on two-class imbalance classification. Specifically, we study the performance of two opposite pipelines
for imbalance classification by applying feature selection before or after data re-sampling. We conduct a large number
of experiments, with a total of 9225 tests, on 52 publicly available datasets, using 9 feature selection methods, 6 resampling
approaches for class imbalance learning, and 3 well-known classification algorithms. Experimental results
show that there is no constant winner between the two pipelines; thus both of them should be considered to derive
the best performing model for imbalance classification. We find that the performance of an imbalance classification
model not only depends on the classifier adopted and the ratio between the number of majority and minority samples,
but also depends on the ratio between the number of samples and features. Overall, this study should provide a new
point of reference for researchers and practitioners in imbalance learning.
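The two pipelines compared in the study can be sketched with a toy random oversampler and a variance-based feature selector. Both are simplified stand-ins chosen for illustration only; the study itself evaluates 9 feature selection methods and 6 re-sampling approaches, none of which are assumed here:

```python
import random
import statistics

def random_oversample(X, y, seed=0):
    """Duplicate randomly chosen minority-class samples until classes balance."""
    rng = random.Random(seed)
    counts = {c: y.count(c) for c in set(y)}
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for c, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == c]
        for _ in range(target - n):
            i = rng.choice(idx)
            X_out.append(X[i])
            y_out.append(c)
    return X_out, y_out

def top_variance_features(X, k):
    """Return indices of the k features with the highest variance."""
    n_feat = len(X[0])
    var = [statistics.pvariance([row[j] for row in X]) for j in range(n_feat)]
    return sorted(range(n_feat), key=lambda j: var[j], reverse=True)[:k]

def project(X, feats):
    """Keep only the selected feature columns."""
    return [[row[j] for j in feats] for row in X]

X = [[1.0, 0.1], [2.0, 0.1], [3.0, 0.1], [10.0, 0.2]]
y = [0, 0, 0, 1]  # 3:1 imbalance

# Pipeline A: feature selection first, then re-sampling.
Xa, ya = random_oversample(project(X, top_variance_features(X, 1)), y)

# Pipeline B: re-sampling first, then feature selection
# (selection now sees the re-balanced class distribution).
Xb0, yb = random_oversample(X, y)
Xb = project(Xb0, top_variance_features(Xb0, 1))
```

The order matters because re-sampling changes the data the selector scores: in Pipeline B the duplicated minority samples can shift which features look informative, which is exactly the interaction the study measures.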
Hellinger Distance Trees for Imbalanced Streams
Classifiers trained on data sets possessing an imbalanced class distribution
are known to exhibit poor generalisation performance. This is known as the
imbalanced learning problem. The problem becomes particularly acute when we
consider incremental classifiers operating on imbalanced data streams,
especially when the learning objective is rare class identification. As
accuracy may provide a misleading impression of performance on imbalanced data,
existing stream classifiers based on accuracy can suffer poor minority class
performance on imbalanced streams, with the result being low minority class
recall rates. In this paper we address this deficiency by proposing the use of
the Hellinger distance measure as a very fast decision-tree split criterion.
We demonstrate that using the Hellinger distance yields a statistically
significant improvement in recall rates on imbalanced data streams, with an
acceptable increase in the false positive rate.
Comment: 6 pages, 2 figures, to be published in Proceedings of the 22nd International
Conference on Pattern Recognition (ICPR) 201
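For a two-class problem, the Hellinger distance scores a candidate split by how far apart the positive- and negative-class distributions over the branches are, which makes it insensitive to class skew (each class is normalized by its own total, so the class priors cancel). A minimal sketch of this idea, with illustrative function names that are not taken from the paper's implementation:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions p and q."""
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2)

def hellinger_split_score(pos_left, pos_right, neg_left, neg_right):
    """Score a binary split by the distance between the class-conditional
    distributions of positives and negatives over the two branches."""
    pos_total = pos_left + pos_right
    neg_total = neg_left + neg_right
    p = (pos_left / pos_total, pos_right / pos_total)
    q = (neg_left / neg_total, neg_right / neg_total)
    return hellinger(p, q)

# A split that sends all positives left and all negatives right scores 1.0,
# regardless of how rare the positive class is (here 5 vs. 95 samples).
best = hellinger_split_score(5, 0, 0, 95)
```

Note the score depends only on the per-class branch proportions, not on the 5:95 class ratio, which is why the criterion suits rare-class identification in streams.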
Imbalanced Ensemble Classifier for learning from imbalanced business school data set
Private business schools in India face a common problem of selecting quality
students for their MBA programs to achieve the desired placement percentage.
Generally, such data sets are biased towards one class, i.e., imbalanced in
nature, and learning from an imbalanced dataset is a difficult proposition.
This paper proposes an imbalanced ensemble classifier that can handle the
imbalanced nature of the dataset and achieves higher accuracy on the combined
feature selection (selection of important characteristics of students) and
classification problem (prediction of placements based on the students'
characteristics) for an Indian business school dataset. The optimal value of an
important model parameter is found. Numerical evidence is also provided using
the Indian business school dataset to assess the outstanding performance of the
proposed classifier.