98 research outputs found
Hellinger Distance Trees for Imbalanced Streams
Classifiers trained on data sets possessing an imbalanced class distribution
are known to exhibit poor generalisation performance. This is known as the
imbalanced learning problem. The problem becomes particularly acute when we
consider incremental classifiers operating on imbalanced data streams,
especially when the learning objective is rare class identification. As
accuracy may provide a misleading impression of performance on imbalanced data,
existing stream classifiers based on accuracy can suffer poor minority class
performance on imbalanced streams, with the result being low minority class
recall rates. In this paper we address this deficiency by proposing the use of
the Hellinger distance measure as a split criterion for very fast decision trees.
We demonstrate that by using Hellinger a statistically significant improvement
in recall rates on imbalanced data streams can be achieved, with an acceptable
increase in the false positive rate.
Comment: 6 pages, 2 figures, to be published in Proceedings of the 22nd
International Conference on Pattern Recognition (ICPR) 2014
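The Hellinger-distance split criterion the paper proposes can be sketched for the binary-class case as follows; the function name and the (pos, neg) count representation are illustrative assumptions, not taken from the paper:

```python
import math

def hellinger_split_value(left_counts, right_counts):
    """Hellinger distance between the positive- and negative-class
    distributions induced by a candidate binary split.

    Each argument is a (n_pos, n_neg) tuple of class counts in one
    branch. Returns a value in [0, sqrt(2)]; larger means the split
    separates the classes better. Unlike information gain, the score
    depends only on within-class proportions, so it is insensitive
    to the overall class imbalance ratio.
    """
    pos_total = left_counts[0] + right_counts[0]
    neg_total = left_counts[1] + right_counts[1]
    dist = 0.0
    for pos, neg in (left_counts, right_counts):
        dist += (math.sqrt(pos / pos_total) - math.sqrt(neg / neg_total)) ** 2
    return math.sqrt(dist)
```

A perfectly separating split scores sqrt(2) and an uninformative one scores 0; a streaming learner would evaluate this on the class counts accumulated at each leaf.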
Imbalanced Ensemble Classifier for learning from imbalanced business school data set
Private business schools in India face a common problem of selecting quality
students for their MBA programs to achieve the desired placement percentage.
Generally, such data sets are biased towards one class, i.e., imbalanced in
nature, and learning from an imbalanced dataset is a difficult proposition.
This paper proposes an imbalanced ensemble classifier that can handle the
imbalanced nature of the dataset and achieves higher accuracy on the combined
feature selection (selection of important characteristics of students) and
classification problem (prediction of placements based on the students'
characteristics) for an Indian business school dataset. The optimal value of an
important model parameter is found. Numerical evidence is also provided, using
the Indian business school dataset, to demonstrate the strong performance of
the proposed classifier.
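The abstract does not detail how the ensemble is built; one common family of imbalanced ensembles is under-bagging, where each base learner trains on all minority examples plus an equal-sized random draw from the majority class. The sketch below illustrates that general idea only; the function name and parameters are assumptions, not the paper's method:

```python
import random

def under_bag_samples(majority, minority, n_learners=10, seed=0):
    """Build one balanced training sample per base learner: all
    minority examples plus an equal-sized subset of the majority
    class, drawn without replacement. Illustrative of imbalanced
    ensembles in general, not this paper's specific classifier."""
    rng = random.Random(seed)
    return [rng.sample(majority, len(minority)) + list(minority)
            for _ in range(n_learners)]
```

Each balanced sample would train one base learner (e.g., a decision tree), with predictions combined by majority vote.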
A class skew-insensitive ACO-based decision tree algorithm for imbalanced data sets
Ant-Tree-Miner (ATM) has an advantage over conventional decision tree algorithms in terms of feature selection. However, real-world applications commonly involve imbalanced class problems, where the classes have different importance. This condition impedes the entropy-based heuristic of the existing ATM algorithm from developing effective decision boundaries, owing to its
bias towards the dominant class. Consequently, the induced decision trees are dominated by the majority class and lack predictive ability on
the rare class. This study proposes an enhanced algorithm called Hellinger Ant-Tree-Miner (HATM), inspired by the ant colony optimization (ACO)
metaheuristic, for imbalanced learning with decision tree classification. The proposed algorithm was compared to the existing ATM algorithm on nine publicly available imbalanced data sets. The simulation study reveals the superiority of HATM when the sample size increases with skewed classes (imbalance ratio < 50%). Experimental results demonstrate that the performance of the existing algorithm, measured by balanced accuracy (BACC), is improved due to the class skew-insensitiveness of the Hellinger distance. A statistical significance test shows that HATM has a higher mean BACC score than ATM.
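Balanced accuracy (BACC), the metric used above, averages the per-class recalls so the dominant class cannot mask poor rare-class performance. A minimal sketch for the binary case, with the function name assumed for illustration:

```python
def balanced_accuracy(tp, fn, tn, fp):
    """BACC for binary classification: the mean of the true positive
    rate (rare-class recall) and the true negative rate
    (dominant-class recall)."""
    tpr = tp / (tp + fn)  # recall on the rare class
    tnr = tn / (tn + fp)  # recall on the dominant class
    return (tpr + tnr) / 2
```

On a 90:10 skewed test set, a classifier that finds only half the rare class scores BACC = 0.75 even though its plain accuracy is 0.95, which is why skew-insensitive split criteria are evaluated with BACC rather than accuracy.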
Box Drawings for Learning with Imbalanced Data
The vast majority of real world classification problems are imbalanced,
meaning there are far fewer data from the class of interest (the positive
class) than from other classes. We propose two machine learning algorithms to
handle highly imbalanced classification problems. The classifiers constructed
by both methods are created as unions of axis-parallel rectangles around the
positive examples, and thus have the benefit of being interpretable. The first
algorithm uses mixed integer programming to optimize a weighted balance between
positive and negative class accuracies. Regularization is introduced to improve
generalization performance. The second method uses an approximation in order to
assist with scalability. Specifically, it follows a \textit{characterize then
discriminate} approach, where the positive class is characterized first by
boxes, and then each box boundary becomes a separate discriminative classifier.
This method has the computational advantages that it can be easily
parallelized, and considers only the relevant regions of feature space.
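The union-of-boxes prediction rule described above is what makes these classifiers interpretable: an example is positive exactly when it falls inside at least one learned rectangle. A minimal sketch of that rule (names and the box representation are illustrative; the boxes themselves would come from the paper's optimization, which is not shown):

```python
def in_box(x, box):
    """box is a list of (low, high) bounds, one per feature:
    an axis-parallel rectangle."""
    return all(lo <= xi <= hi for xi, (lo, hi) in zip(x, box))

def predict(x, boxes):
    """Union-of-boxes classifier: positive iff x lies in any box."""
    return 1 if any(in_box(x, b) for b in boxes) else 0
```

Reading off the prediction is just a range check per feature, which is why each box boundary doubles as a human-readable rule.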
A survey on learning from imbalanced data streams: taxonomy, challenges, empirical study, and reproducible experimental framework
Class imbalance poses new challenges when it comes to classifying data
streams. Many algorithms recently proposed in the literature tackle this
problem using a variety of data-level, algorithm-level, and ensemble
approaches. However, there is a lack of standardized and agreed-upon procedures
on how to evaluate these algorithms. This work presents a taxonomy of
algorithms for imbalanced data streams and proposes a standardized, exhaustive,
and informative experimental testbed to evaluate algorithms in a collection of
diverse and challenging imbalanced data stream scenarios. The experimental
study evaluates 24 state-of-the-art data stream algorithms on 515 imbalanced
data streams that combine static and dynamic class imbalance ratios,
instance-level difficulties, concept drift, real-world and semi-synthetic
datasets in binary and multi-class scenarios. This leads to the largest
experimental study conducted so far in the data stream mining domain. We
discuss the advantages and disadvantages of state-of-the-art classifiers in
each of these scenarios and we provide general recommendations to end-users for
selecting the best algorithms for imbalanced data streams. Additionally, we
formulate open challenges and future directions for this domain. Our
experimental testbed is fully reproducible and easy to extend with new methods.
This way we propose the first standardized approach to conducting experiments
in imbalanced data streams that can be used by other researchers to create
trustworthy and fair evaluation of newly proposed methods. Our experimental
framework can be downloaded from
https://github.com/canoalberto/imbalanced-streams
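One building block behind the dynamic class imbalance ratios mentioned above is a time-decayed estimate of class frequencies, which lets the measured imbalance drift with the stream. The following sketch shows the general idea; the function name and decay value are illustrative assumptions, not part of the surveyed framework:

```python
def update_priors(priors, label, decay=0.99):
    """Exponentially time-decayed class frequency estimates for a
    stream: every update down-weights older labels by `decay`, so
    the estimates track drift in the class distribution."""
    for c in list(priors):
        priors[c] *= decay
    priors[label] = priors.get(label, 0.0) + (1 - decay)
    return priors
```

The current imbalance ratio of the stream can then be read off as min(priors.values()) / max(priors.values()).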
Hellinger Distance Decision Tree (HDDT) Classification of Gender with Imbalance Statistical Face Features
Face recognition is one of the technologies used for asset protection. Face recognition also presents a challenging problem in the field of image processing and computer vision, and has been used for applications such as face tracking and personal identification. It is also frequently used in security systems, such as security cameras in airports, banks, and offices. In practice, there are problems in improving face recognition performance, particularly for gender identification. It is very difficult to differentiate people based on face appearance under different poses, lighting, expressions, aging and illumination. Sometimes it is also difficult to identify the shape of human faces because different people have different face structures. This study used images retrieved from the Student Information Management System (SIMS) for 10 male and 43 female students who were taking MAT530. Twelve geometric landmarks were then generated from each image using TI-Nspire software. The main goal of this research is to classify gender from face images and to address the imbalanced data using the Hellinger Distance Decision Tree (HDDT) classifier. This classifier was proposed as an alternative decision tree technique that uses the Hellinger distance as the splitting criterion. The results from the validation split show that a percentage split at 40% produced the highest accuracy rate, 77.2727%, and the most significant values of sensitivity and specificity.