Imbalanced Ensemble Classifier for learning from imbalanced business school data set
Private business schools in India face a common problem of selecting quality
students for their MBA programs to achieve the desired placement percentage.
Generally, such data sets are biased towards one class, i.e., imbalanced in
nature, and learning from an imbalanced data set is difficult.
This paper proposes an imbalanced ensemble classifier that can handle the
imbalanced nature of the data set and achieves higher accuracy on the combined
feature selection (selection of important characteristics of students) and
classification problem (prediction of placements based on the students'
characteristics) for an Indian business school data set. The optimal value of
an important model parameter is found. Numerical evidence using the Indian
business school data set is also provided to assess the strong performance of
the proposed classifier.
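The abstract does not specify the ensemble's construction, so as a hedged illustration of the general technique, a common baseline for imbalanced classification is undersampling-based bagging: each base learner is trained on all minority examples plus an equal-size random draw from the majority class. All function names and parameters below are illustrative, not from the paper, and binary 0/1 labels are assumed:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def balanced_bagging_fit(X, y, n_estimators=25, seed=0):
    """Train an ensemble where each tree sees a class-balanced subsample.

    Assumes binary labels y in {0, 1}; this is a generic sketch, not the
    paper's proposed classifier.
    """
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    if len(minority) > len(majority):
        minority, majority = majority, minority
    models = []
    for _ in range(n_estimators):
        # Random undersampling: all minority points + equal-size majority draw.
        sub = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, sub])
        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        models.append(tree.fit(X[idx], y[idx]))
    return models

def balanced_bagging_predict(models, X):
    # Majority vote over the base learners.
    votes = np.mean([m.predict(X) for m in models], axis=0)
    return (votes >= 0.5).astype(int)
```

Because every base learner trains on a balanced subsample, the ensemble is not dominated by the majority class, at the cost of each tree seeing fewer majority examples.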
Local case-control sampling: Efficient subsampling in imbalanced data sets
For classification problems with significant class imbalance, subsampling can
reduce computational costs at the price of inflated variance in estimating
model parameters. We propose a method for subsampling efficiently for logistic
regression by adjusting the class balance locally in feature space via an
accept-reject scheme. Our method generalizes standard case-control sampling,
using a pilot estimate to preferentially select examples whose responses are
conditionally rare given their features. The biased subsampling is corrected by
a post-hoc analytic adjustment to the parameters. The method is simple and
requires one parallelizable scan over the full data set. Standard case-control
sampling is inconsistent under model misspecification for the population
risk-minimizing coefficients θ*. By contrast, our estimator is consistent for
θ* provided that the pilot estimate is. Moreover, under correct specification
and with a consistent, independent pilot estimate, our estimator has exactly
twice the asymptotic variance of the full-sample MLE, even if the selected
subsample comprises a minuscule fraction of the full data set, as happens when
the original data are severely imbalanced. The factor of two improves to
1 + 1/c if we multiply the baseline acceptance probabilities by c > 1 (and
weight points with acceptance probability greater than 1), taking roughly
1 + c times as many data points into the
subsample. Experiments on simulated and real data show that our method can
substantially outperform standard case-control subsampling.
Comment: Published at http://dx.doi.org/10.1214/14-AOS1220 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
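The accept-reject scheme above can be sketched concretely. In the baseline version, a point (x, y) is kept with probability |y − p̃(x)|, where p̃ is the pilot model's predicted probability, and the subsample fit is corrected by adding the pilot coefficients back. The sketch below assumes binary 0/1 labels and uses a pilot fit on a small uniform subsample (one of several reasonable pilot choices); function names and parameters are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def local_case_control_fit(X, y, seed=0, pilot_frac=0.05):
    """Baseline local case-control subsampling for logistic regression.

    A sketch under stated assumptions, not a definitive implementation.
    """
    rng = np.random.default_rng(seed)
    n = len(y)

    # 1. Pilot estimate from a small uniform subsample (C large ~ unpenalized).
    pilot_idx = rng.choice(n, size=max(int(pilot_frac * n), 100), replace=False)
    pilot = LogisticRegression(C=1e8, max_iter=1000).fit(X[pilot_idx], y[pilot_idx])
    b0, b = pilot.intercept_[0], pilot.coef_.ravel()

    # 2. One scan: accept (x, y) with probability |y - p_tilde(x)|, so points
    #    whose response is conditionally rare given x are preferentially kept.
    p_tilde = 1.0 / (1.0 + np.exp(-(X @ b + b0)))
    keep = rng.random(n) < np.abs(y - p_tilde)

    # 3. Plain (unweighted) logistic fit on the accepted subsample.
    sub = LogisticRegression(C=1e8, max_iter=1000).fit(X[keep], y[keep])

    # 4. Post-hoc analytic correction: add the pilot estimate back.
    intercept = sub.intercept_[0] + b0
    coef = sub.coef_.ravel() + b
    return intercept, coef, keep.mean()
```

The accept-reject step keeps roughly 2·E[p(1 − p)] of the data, so on severely imbalanced data the subsample is a small fraction of n, while the additive correction in step 4 restores a consistent estimate of the full-data coefficients.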