A critical assessment of imbalanced class distribution problem: the case of predicting freshmen student attrition
Predicting student attrition is an intriguing yet challenging problem for any academic institution. Class-imbalanced data is common in the field of student retention, mainly because many students enroll but comparatively few drop out. Classification techniques applied to imbalanced datasets can yield deceptively high prediction accuracy, where the overall accuracy is driven by the majority class at the expense of very poor performance on the crucial minority class. In this study, we compared different data-balancing techniques to improve predictive accuracy on the minority class while maintaining satisfactory overall classification performance. Specifically, we tested three balancing techniques (oversampling, under-sampling, and synthetic minority over-sampling, SMOTE) along with four popular classification methods (logistic regression, decision trees, neural networks, and support vector machines). We used a large, feature-rich institutional student dataset (covering the years 2005 to 2011) to assess the efficacy of both the balancing techniques and the prediction methods. The results indicated that a support vector machine combined with the SMOTE data-balancing technique achieved the best classification performance, with 90.24% overall accuracy on the 10-fold holdout sample. All three data-balancing techniques improved prediction accuracy for the minority class. Applying sensitivity analyses to the developed models, we also identified the most important variables for accurate prediction of student attrition. These models have the potential to accurately identify at-risk students and help reduce student dropout rates.
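The interpolation idea behind SMOTE can be sketched in a few lines: each synthetic sample lies on the segment between a minority point and one of its k nearest minority neighbours. This is a minimal, self-contained illustration (function name and toy data are our own, not from the study):

```python
import random

def smote(minority, n_synthetic, k=3, seed=0):
    """Generate synthetic minority samples by interpolating between
    each sample and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (squared Euclidean distance)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        n = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, n)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = smote(minority, n_synthetic=6)
print(len(new_points))  # 6 synthetic samples inside the minority region
```

Because every synthetic point is a convex combination of two minority samples, it stays inside the minority region rather than duplicating existing records as plain oversampling does.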
Feature selection in credit risk modeling: an international evidence
This paper aims to discover a suitable combination of contemporary feature selection techniques and robust prediction classifiers.
As such, to examine the impact of the feature selection method
on classifier performance, we use two Chinese and three other
real-world credit scoring datasets. The utilized feature selection
methods are the least absolute shrinkage and selection operator (LASSO) and multivariate adaptive regression splines (MARS), while the examined classifiers are classification and regression trees (CART), logistic regression (LR), artificial neural networks (ANN), and support vector machines (SVM). Empirical findings confirm that the LASSO feature selection method, followed by the robust SVM classifier, demonstrates remarkable improvement and outperforms the other competitive classifiers. Moreover, ANN also offers improved accuracy with feature selection methods, whereas LR improves classification efficiency only when feature selection is performed via LASSO. CART, however, shows no improvement in any combination. The proposed credit scoring modeling strategy may be used to develop policy, progressive ideas, and operational guidelines for effective credit risk management at lending and other financial institutions. The findings of this study have practical value because, to date, there is no consensus about the best combination of feature selection method and prediction classifier.
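The LASSO step described above drives the weights of uninformative features exactly to zero, so they can be dropped before fitting the classifier. A minimal coordinate-descent sketch of this selection effect (toy data and names are illustrative, not from the paper):

```python
def soft_threshold(rho, lam):
    """Soft-thresholding operator: shrinks rho toward zero by lam."""
    if rho < -lam:
        return rho + lam
    if rho > lam:
        return rho - lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent LASSO for linear regression.
    Features whose weight is driven exactly to zero are 'deselected'."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(n_iter):
        for j in range(d):
            # correlation of feature j with the residual (excluding j itself)
            rho = sum(X[i][j] * (y[i] - sum(w[k] * X[i][k] for k in range(d))
                                 + w[j] * X[i][j]) for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z
    return w

X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, -0.1]]  # feature 1 is noise
y = [1.0, 2.0, 3.0, 4.0]                                  # depends on feature 0 only
w = lasso_cd(X, y, lam=0.5)
selected = [j for j, wj in enumerate(w) if wj != 0.0]
print(selected)  # [0] -- the noise feature is dropped
```

The surviving feature subset would then be passed to the downstream classifier (SVM, ANN, etc.) as in the strategy the paper evaluates.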
Comparing the performance of oversampling techniques for imbalanced learning in insurance fraud detection
Dissertation presented as partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics.
Although the current trend of data production generates enormous volumes every second, there are situations where the target category is represented extremely unequally, giving rise to imbalanced datasets. Analyzing them correctly can lead to relevant decisions that produce appropriate business strategies. Fraud modeling is one example of this situation: fewer fraudulent transactions are expected than reliable ones, and predicting them can be crucial for improving decisions and processes in a company. However, class imbalance has a negative effect on traditional classification techniques; to deal with this problem, many techniques have been proposed, and oversampling is one of them.
This work analyses the behavior of different oversampling techniques, such as random oversampling, SOMO, and SMOTE, across different classifiers and evaluation metrics. The exercise is done with real data from an insurance company in Colombia, predicting fraudulent claims for its compulsory auto product. The conclusions of this research demonstrate the advantages of using oversampling in imbalanced circumstances, but also the importance of comparing different evaluation metrics and classifiers to obtain appropriate conclusions and comparable results.
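The point about comparing evaluation metrics can be made concrete: on imbalanced fraud data, accuracy alone is misleading, which is why precision, recall, and F1 must be examined together. A small self-contained sketch (the toy claim data is illustrative):

```python
def confusion(y_true, y_pred):
    """Counts of true positives, true negatives, false positives, false negatives."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from hard predictions."""
    tp, tn, fp, fn = confusion(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# 95 legitimate claims, 5 fraudulent; a classifier that flags nothing as fraud
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100
print(metrics(y_true, y_pred))  # (0.95, 0.0, 0.0, 0.0): high accuracy, zero recall
```

A model that never detects fraud still scores 95% accuracy here, exactly the failure mode that motivates oversampling and metric comparison in the dissertation.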
Ensemble of Example-Dependent Cost-Sensitive Decision Trees
Several real-world classification problems are example-dependent
cost-sensitive in nature, where the costs due to misclassification vary between
examples and not only within classes. However, standard classification methods
do not take these costs into account, and assume a constant cost of
misclassification errors. In previous works, methods that incorporate the financial costs into the training of different algorithms have been proposed, with the example-dependent cost-sensitive decision tree algorithm being the one that yields the highest savings. In this paper, we propose a new
framework of ensembles of example-dependent cost-sensitive decision trees. The framework consists of creating different example-dependent cost-sensitive
decision trees on random subsamples of the training set, and then combining
them using three different combination approaches. Moreover, we propose two new
cost-sensitive combination approaches: cost-sensitive weighted voting and
cost-sensitive stacking, the latter being based on the cost-sensitive logistic
regression method. Finally, using five different databases from four real-world applications (credit card fraud detection, churn modeling, credit scoring, and direct marketing), we evaluate the proposed method against state-of-the-art example-dependent cost-sensitive techniques, namely cost-proportionate sampling, Bayes minimum risk, and cost-sensitive decision trees. The results show that the proposed algorithms achieve better results, in the sense of higher savings, across all databases.
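The savings measure and the cost-sensitive weighted voting combiner mentioned above can be sketched as follows. This is a simplified illustration under assumed per-example false-positive and false-negative cost lists, not the authors' implementation:

```python
def total_cost(y_true, y_pred, cost_fp, cost_fn):
    """Example-dependent cost: each example carries its own
    false-positive and false-negative cost."""
    return sum(
        cfp if (p == 1 and t == 0) else cfn if (p == 0 and t == 1) else 0.0
        for t, p, cfp, cfn in zip(y_true, y_pred, cost_fp, cost_fn)
    )

def savings(y_true, y_pred, cost_fp, cost_fn):
    """Savings relative to the cheaper of the two trivial policies
    (predict all 0 or all 1)."""
    n = len(y_true)
    base = min(total_cost(y_true, [0] * n, cost_fp, cost_fn),
               total_cost(y_true, [1] * n, cost_fp, cost_fn))
    return 1.0 - total_cost(y_true, y_pred, cost_fp, cost_fn) / base

def cost_weighted_vote(predictions, weights):
    """Combine base-model predictions, weighting each model's vote
    (e.g. by its validation savings)."""
    out = []
    for votes in zip(*predictions):
        score = sum(w * v for w, v in zip(weights, votes))
        out.append(1 if score > sum(weights) / 2 else 0)
    return out

y_true = [1, 0, 0, 1]                # 1 = fraud
cost_fp = [0.0, 10.0, 10.0, 0.0]     # investigation cost of a false alarm
cost_fn = [100.0, 0.0, 0.0, 50.0]    # loss of a missed fraud
print(savings(y_true, [1, 0, 0, 0], cost_fp, cost_fn))  # -1.5: missing the 50-unit fraud costs more than the best trivial policy
```

A positive savings value means the model is cheaper than doing nothing clever; weighting each tree's vote by such a value is the idea behind the cost-sensitive weighted voting combiner.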
A big data MapReduce framework for fault diagnosis in cloud-based manufacturing
This research develops a MapReduce framework for automatic pattern recognition in fault diagnosis by solving the data imbalance problem in cloud-based manufacturing (CBM). Fault diagnosis in a CBM system contributes significantly to reducing product testing cost and enhances manufacturing quality. One of the major challenges facing big data analytics in cloud-based manufacturing is the handling of datasets that are highly imbalanced in nature, since machine learning techniques applied to such datasets yield poor classification results. The framework proposed in this research uses a hybrid approach to deal with big datasets for smarter decisions. Furthermore, we compare the performance of a radial basis function based support vector machine classifier with standard techniques. Our findings suggest that the most important task in cloud-based manufacturing is to predict the effect of data errors on quality arising from highly imbalanced unstructured datasets. The proposed framework is an original contribution to the body of literature: our MapReduce framework performs fault detection by managing the data imbalance problem appropriately and relating it to the firm’s profit function. The experimental results are validated using a case study of fault diagnosis in steel plate manufacturing, with crucial performance metrics such as accuracy, specificity, and sensitivity. A comparative study shows that the methods used in the proposed framework outperform the traditional ones.
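The map/reduce split at the heart of such a framework can be illustrated with a toy fault-count aggregation, where mappers emit per-chunk class counts and the reducer merges them to quantify the imbalance. This is a pure-Python stand-in for an actual MapReduce job; names and data are illustrative:

```python
from functools import reduce

def map_chunk(chunk):
    """Map step: per-chunk counts of faulty vs normal records."""
    counts = {"fault": 0, "normal": 0}
    for record in chunk:
        counts["fault" if record["fault"] else "normal"] += 1
    return counts

def reduce_counts(a, b):
    """Reduce step: merge partial counts from the mappers."""
    return {k: a[k] + b[k] for k in a}

# two data partitions, e.g. sensor logs from two production lines
chunks = [
    [{"fault": 0}] * 48 + [{"fault": 1}] * 2,
    [{"fault": 0}] * 49 + [{"fault": 1}] * 1,
]
totals = reduce(reduce_counts, map(map_chunk, chunks))
imbalance_ratio = totals["normal"] / totals["fault"]
print(totals, imbalance_ratio)  # {'fault': 3, 'normal': 97} 32.33...
```

The aggregated imbalance ratio is exactly the quantity a balancing step (resampling, class weighting) would consume before the classifier is trained.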
Precision-Recall Curve (PRC) Classification Trees
The classification of imbalanced data has presented a significant challenge
for most well-known classification algorithms that were often designed for data
with relatively balanced class distributions. Nevertheless, skewed class distribution is a common feature in real-world problems. It is especially
prevalent in certain application domains with great need for machine learning
and better predictive analysis such as disease diagnosis, fraud detection,
bankruptcy prediction, and suspect identification. In this paper, we propose a
novel tree-based algorithm based on the area under the precision-recall curve
(AUPRC) for variable selection in the classification context. Our algorithm, named the "Precision-Recall Curve classification tree", or simply the "PRC classification tree", modifies two crucial stages in tree building. The first
stage is to maximize the area under the precision-recall curve in node variable
selection. The second stage is to maximize the harmonic mean of recall and
precision (F-measure) for threshold selection. We found that the proposed PRC classification tree and its subsequent extension, the PRC random forest, work well, especially for class-imbalanced data sets. We have demonstrated that our methods outperform their classic counterparts, the usual CART and random forest, on both synthetic and real data. Furthermore, the ROC classification tree
proposed by our group previously has shown good performance in imbalanced data.
The combination of the two, the PRC-ROC tree, also shows great promise in identifying the minority class.
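The two stages described in the abstract (AUPRC-based variable selection, then F-measure-based threshold selection) can be sketched for a single node as follows. This is a simplified illustration of the idea, not the authors' code; the data is a toy example:

```python
def average_precision(y_true, scores):
    """Area under the precision-recall curve, computed as
    average precision over the ranked positives."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            tp += 1
            ap += tp / rank  # precision at each positive's rank
    return ap / sum(y_true)

def best_split_variable(columns, y):
    """Stage one: choose the node variable whose raw values
    yield the largest AUPRC against the labels."""
    return max(range(len(columns)), key=lambda j: average_precision(y, columns[j]))

def best_f1_threshold(y_true, scores):
    """Stage two: choose the split threshold maximizing the F-measure
    (harmonic mean of precision and recall)."""
    best_thr, best_f1 = None, -1.0
    for thr in sorted(set(scores)):
        pred = [1 if s >= thr else 0 for s in scores]
        tp = sum(p == t == 1 for p, t in zip(pred, y_true))
        fp = sum(p == 1 and t == 0 for p, t in zip(pred, y_true))
        fn = sum(p == 0 and t == 1 for p, t in zip(pred, y_true))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_thr, best_f1 = thr, f1
    return best_thr, best_f1

y = [0, 0, 1, 1]
col_good = [0.1, 0.4, 0.35, 0.8]  # ranks the positives reasonably well
col_bad = [0.9, 0.2, 0.1, 0.3]    # nearly uninformative
print(best_split_variable([col_good, col_bad], y))  # 0
```

Because both criteria ignore true negatives, they are not dominated by the abundant majority class, which is what makes the PRC tree attractive for imbalanced data.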