
    Classification System for Mortgage Arrear Management



    The Effect of Class Imbalance Handling on Datasets Toward Classification Algorithm Performance

    Class imbalance is a condition in which the minority class contains far fewer examples than the majority class. Its impact on a dataset is the misclassification of minority-class instances, which degrades classification performance. Various approaches have been proposed to deal with class imbalance, such as data-level approaches, algorithm-level approaches, and cost-sensitive learning. At the data level, one common method is sampling. In this study, the ADASYN, SMOTE, and SMOTE-ENN sampling methods were used to address class imbalance, combined with the AdaBoost, K-Nearest Neighbor, and Random Forest classification algorithms. The purpose of this study was to determine the effect of handling class imbalance in a dataset on classification performance. Tests were carried out on five datasets, and the classification results showed that integrating ADASYN with Random Forest gave better results than the other model schemes. The evaluation criteria were accuracy, precision, true positive rate, true negative rate, and g-mean score. The integration of ADASYN and Random Forest yielded results 5% to 10% better than the other models.
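The evaluation criteria this abstract lists (accuracy, precision, true positive rate, true negative rate, g-mean) all derive from the confusion matrix. A minimal sketch, assuming binary labels; the function name is illustrative, not from the paper:

```python
import math

def imbalance_metrics(y_true, y_pred, positive=1):
    """Confusion-matrix metrics commonly reported for imbalanced data."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tpr = tp / (tp + fn) if tp + fn else 0.0  # true positive rate (recall)
    tnr = tn / (tn + fp) if tn + fp else 0.0  # true negative rate
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "tpr": tpr,
        "tnr": tnr,
        # geometric mean of TPR and TNR -- robust to class imbalance,
        # unlike plain accuracy
        "g_mean": math.sqrt(tpr * tnr),
    }
```

The g-mean is the key metric here: a classifier that predicts only the majority class gets high accuracy but a g-mean of zero, which is why imbalance studies report it.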

    Handling of realistic missing data scenarios in clinical trials using machine learning techniques

    Missing data are a common challenge when designing and analyzing clinical trials: they are data that are needed for the main analyses but are not collected. If missing data are not properly imputed/handled, they may cause the following issues: reduced statistical power of the main analysis; biased/confounded estimation of the treatment effect; and underestimation of the variability in the target variable. Three types of missingness are defined in Rubin's 1976 paper. (1) MCAR (missing completely at random): when data are MCAR, "the probability of missingness does not depend on observed or unobserved measurements"; for example, subjects who drop out of the trial for reasons unrelated to their health status. (2) MAR (missing at random): when data are MAR, "the probability of missingness depends only on observed measurements conditional on the covariates in the model"; for example, younger subjects (those who do not think it necessary to measure their blood pressure because they consider themselves healthier) may be more likely to have missing blood pressure values. (3) MNAR (missing not at random): when data are MNAR, "the probability of missingness depends on unobserved measurements"; for example, subjects leave the trial because of "lack of efficacy" (i.e., they are not convinced of the effectiveness of the study drug and hence drop out of the trial). Although all three types of missing data are well defined, it is very difficult to determine the association between missing data and unobserved outcomes in real-world data; in other words, it is very difficult to justify the MAR assumption in any realistic situation. As the EMA suggested in 2010, a combined strategy can be used, e.g., treat discontinuations due to "lack of efficacy" as MNAR data and discontinuations due to "lost to follow-up" as MAR data.
    Many statistical methods have been developed to handle missing data under the prerequisite assumption of either MNAR or MAR. In the real world, however, missing data are often a mixture of different missing mechanisms. This violates the basic assumption (either MNAR or MAR alone), which degrades the performance of these methods (Enders, 2010). To handle the missing data problem in real-life situations (e.g., MNAR and MAR mixed together in the same dataset), we propose a missing data prediction framework based on machine learning techniques. As Breiman pointed out in his 2001 paper, in the statistical (machine) learning exercise, "the goal is not interpretability, but accurate information". Along this line of thought, our methods handle MNAR by focusing on (giving more sample weight to) the missing part, and handle MAR by looking for precise individual (subject-level) information. The MNAR problem is treated as an imbalanced machine learning exercise, i.e., the minority cases are oversampled to compensate for the data that are MNAR in certain areas.
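The abstract does not spell out the weighting scheme, but "giving more sample weight to the missing part" can be phrased as random oversampling of the minority (MNAR) rows until they reach a target ratio against the rest. A sketch under that assumption; all names are illustrative:

```python
import random

def oversample_minority(rows, is_minority, target_ratio=1.0, seed=0):
    """Randomly duplicate minority rows until the minority/majority
    count ratio reaches target_ratio. Duplicating a row is equivalent
    to giving it extra sample weight in most learners."""
    rng = random.Random(seed)
    minority = [r for r in rows if is_minority(r)]
    majority = [r for r in rows if not is_minority(r)]
    needed = int(target_ratio * len(majority)) - len(minority)
    extra = [rng.choice(minority) for _ in range(max(0, needed))]
    return rows + extra
```

For example, rows flagged as MNAR dropouts would be passed through `is_minority`, so the learner sees them as often as the fully observed records.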

    Improving the matching of registered unemployed to job offers through machine learning algorithms

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Due to the existence of a double-sided asymmetric information problem on the labour market, characterized by a mutual lack of trust between employers and unemployed people, not enough job matches are facilitated by public employment services (PES), which seem to be caught in a low-end equilibrium. In order to act as a reliable third party, PES need to build a good and solid reputation among their main clients by offering better and less time-consuming pre-selection services. The use of machine-learning, data-driven relevancy algorithms that calculate the viability of a specific candidate for a particular job opening is becoming increasingly popular in this field. Based on the Portuguese PES databases (CVs, vacancies, pre-selection and matching results), complemented by relevant external data published by Statistics Portugal and the European Classification of Skills/Competences, Qualifications and Occupations (ESCO), the current thesis evaluates the potential application of models such as Random Forests, Gradient Boosting, Support Vector Machines, Neural Network Ensembles and other tree-based ensembles to the job matching activities carried out by the Portuguese PES, in order to understand the extent to which the latter can be improved through the adoption of automated processes. The obtained results seem promising and point to the possible use of robust algorithms such as Random Forests in the pre-selection of suitable candidates, due to their advantages at various levels, namely in terms of accuracy and capacity to handle large datasets with thousands of variables, including badly unbalanced ones, as well as extensive missing values and many-valued categorical variables.

    Predicting Patient Satisfaction With Ensemble Methods

    Health plans are constantly seeking ways to assess and improve the quality of patient experience in various ambulatory and institutional settings. Standardized surveys are a common tool used to gather data about patient experience, and a useful measurement taken from these surveys is known as the Net Promoter Score (NPS). This score represents the extent to which a patient would, or would not, recommend his or her physician on a scale from 0 to 10, where 0 corresponds to "Extremely unlikely" and 10 to "Extremely likely". A large national health plan utilized automated calls to distribute such a survey to its members and was interested in understanding what factors contributed to a patient's satisfaction. Additionally, they were interested in whether or not NPS could be predicted using responses from other questions on the survey, along with demographic data. When the distribution of various predictors was compared between the less satisfied and highly satisfied members, there was significant overlap, indicating that not even the Bayes Classifier could successfully differentiate between these members. Moreover, the highly imbalanced proportion of NPS responses resulted in initially poor prediction accuracy. Thus, due to the non-linear structure of the data and the high number of categorical predictors, we leveraged flexible methods, such as decision trees, bagging, and random forests, for modeling and prediction. We further altered the prediction step in the random forest algorithm in order to account for the imbalanced structure of the data.
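The abstract says the prediction step of the random forest was altered for imbalance, without saying how. One common alteration is threshold-moving: instead of a simple majority of tree votes, the positive class is predicted whenever the vote fraction clears a lower threshold. A sketch of that idea only, not a claim about the authors' exact method:

```python
def forest_predict(tree_votes, threshold=0.5, positive=1):
    """Aggregate per-tree binary votes. Lowering the threshold below 0.5
    makes the forest more sensitive to the rare (positive) class, at the
    cost of more false positives."""
    frac = sum(1 for v in tree_votes if v == positive) / len(tree_votes)
    return positive if frac >= threshold else 1 - positive
```

With a 10-tree forest where only one tree votes positive, the default threshold predicts the majority class, while a threshold of 0.1 flips the prediction to the rare class.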

    Predicting disease risks from highly imbalanced data using random forest

    Background: We present a method utilizing the Healthcare Cost and Utilization Project (HCUP) dataset for predicting disease risk of individuals based on their medical diagnosis history. The presented methodology may be incorporated in a variety of applications such as risk management, tailored health communication and decision support systems in healthcare.
    Methods: We employed the National Inpatient Sample (NIS) data, which is publicly available through the Healthcare Cost and Utilization Project (HCUP), to train random forest classifiers for disease prediction. Since the HCUP data is highly imbalanced, we employed an ensemble learning approach based on repeated random sub-sampling. This technique divides the training data into multiple sub-samples, while ensuring that each sub-sample is fully balanced. We compared the performance of support vector machine (SVM), bagging, boosting and RF to predict the risk of eight chronic diseases.
    Results: We predicted eight disease categories. Overall, the RF ensemble learning method outperformed SVM, bagging and boosting in terms of the area under the receiver operating characteristic (ROC) curve (AUC). In addition, RF has the advantage of computing the importance of each variable in the classification process.
    Conclusions: By combining repeated random sub-sampling with RF, we were able to overcome the class imbalance problem and achieve promising results. Using the national HCUP data set, we predicted eight disease categories with an average AUC of 88.79%.
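The repeated random sub-sampling described in the Methods section pairs the full minority class with an equal-sized random draw from the majority class, once per ensemble member. A minimal sketch of that sampling step (names are illustrative; training the per-sample classifiers is omitted):

```python
import random

def balanced_subsamples(majority, minority, n_subsamples, seed=0):
    """Repeated random sub-sampling: each sub-sample keeps every minority
    example and draws an equal number of majority examples without
    replacement, so every sub-sample is fully balanced."""
    rng = random.Random(seed)
    subs = []
    for _ in range(n_subsamples):
        maj_draw = rng.sample(majority, len(minority))
        subs.append(minority + maj_draw)
    return subs
```

One classifier is then trained per sub-sample and their outputs averaged, so the majority class is fully used across the ensemble even though each member sees a balanced view.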

    An Examination of the Smote and Other Smote-based Techniques That Use Synthetic Data to Oversample the Minority Class in the Context of Credit-Card Fraud Classification

    This research project seeks to investigate some of the different sampling techniques that generate and use synthetic data to oversample the minority class, as a means of handling the imbalanced distribution between non-fraudulent (majority class) and fraudulent (minority class) records in a credit-card fraud dataset. The purpose of the research project is to assess the effectiveness of these techniques in the context of fraud detection, a domain with highly imbalanced and cost-sensitive data. Machine learning tasks that require learning from highly unbalanced datasets are difficult because many traditional learning algorithms are not designed to cope with large differentials between classes. For that reason, various methods have been developed to help tackle this problem. Oversampling and undersampling are examples of techniques that address the class imbalance problem through sampling. This paper evaluates oversampling techniques that use synthetic data to balance the minority class. The idea of using synthetic data to compensate for the minority class was first proposed by Chawla et al. (2002). The technique is known as the Synthetic Minority Over-Sampling Technique (SMOTE). Following its development, other techniques were derived from it. This paper evaluates the SMOTE technique along with other popular SMOTE-based extensions of the original technique.
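The core of SMOTE, as introduced by Chawla et al. (2002), is interpolation: a synthetic point is placed at a random position on the line segment between a minority example and one of its neighbours. A minimal sketch of that interpolation step; for simplicity it pairs random minority points, whereas real SMOTE restricts the partner to the k nearest neighbours:

```python
import random

def smote(minority, n_synthetic, seed=0):
    """Minimal SMOTE core: for each synthetic point, pick two minority
    examples and interpolate between them at a random fraction of the gap.
    (Real SMOTE picks the second point among the first's k nearest
    neighbours; this sketch picks it at random.)"""
    rng = random.Random(seed)
    out = []
    for _ in range(n_synthetic):
        a, b = rng.sample(minority, 2)
        gap = rng.random()  # uniform in [0, 1): position along the segment
        out.append(tuple(ai + gap * (bi - ai) for ai, bi in zip(a, b)))
    return out
```

Because every synthetic point is a convex combination of two real minority points, the generated fraud examples stay inside the region already occupied by the minority class rather than being exact duplicates.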

    Learning from a Class Imbalanced Public Health Dataset: a Cost-based Comparison of Classifier Performance

    Public health care systems routinely collect health-related data from the population. This data can be analyzed using data mining techniques to find novel, interesting patterns, which could help formulate effective public health policies and interventions. The occurrence of chronic illness is rare in the population, and the effect of this class imbalance on the performance of various classifiers was studied. The objective of this work is to identify the best classifiers for class-imbalanced health datasets through a cost-based comparison of classifier performance. The popular, open-source data mining tool WEKA was used to build a variety of core classifiers as well as classifier ensembles and to evaluate their performance. The unequal misclassification costs were represented in a cost matrix, and cost-benefit analysis was also performed. In another experiment, various sampling methods such as under-sampling, over-sampling, and SMOTE were applied to balance the class distribution in the dataset, and the costs were compared. The Bayesian classifiers performed well, with high recall and a low number of false negatives, and were not affected by the class imbalance. Results confirm that the total cost of Bayesian classifiers can be further reduced using cost-sensitive learning methods. Classifiers built on the randomly under-sampled dataset showed a dramatic drop in costs and high classification accuracy.
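The cost-based comparison above rests on a cost matrix with unequal misclassification costs: missing a chronic-illness case (a false negative) is costed far higher than a false alarm. A minimal sketch of the total-cost calculation; the cost values are illustrative placeholders, not the paper's:

```python
def total_cost(y_true, y_pred, cost_fn=10.0, cost_fp=1.0):
    """Total misclassification cost under a 2x2 cost matrix where correct
    predictions cost nothing and a false negative (missed case) costs
    more than a false positive (false alarm)."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return cost_fn * fn + cost_fp * fp
```

Ranking classifiers by this total cost instead of accuracy is what lets a high-recall Bayesian classifier beat a more "accurate" model that quietly misses the rare positive cases.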

    An Ensemble Model for Multiclass Classification and Outlier Detection Method in Data Mining

    Real-world datasets often exhibit a multiclass classification structure characterized by imbalanced classes, in which the minority classes are treated as outlier classes. The study used the cross-industry process for data mining methodology. A heterogeneous multiclass ensemble was developed by combining several strategies and ensemble techniques. The datasets used were drawn from the UCI machine learning repository. Experiments for validating the model were conducted and presented in the form of tables and figures. An ensemble filter selection method was developed and used for preprocessing the datasets. Point outliers were filtered using an interquartile-range filter algorithm. Datasets were resampled using the Synthetic Minority Oversampling Technique (SMOTE) algorithm. Multiclass datasets were transformed to binary classes using the One-vs-One decomposition technique. An ensemble model was developed using the AdaBoost and random subspace algorithms, utilizing random forest as the base classifier. The classifiers built were combined using a voting methodology. The model was validated with classification and outlier-detection performance measures such as recall, precision, F-measure and AUC-ROC values. The classifiers were evaluated using 10-fold stratified cross-validation. The model showed better performance in terms of outlier detection and classification prediction for the multiclass problem, outperforming well-known existing classification and outlier detection algorithms such as Naïve Bayes, KNN, Bagging, JRipper, Decision Trees, RandomTree and Random Forest. The study findings established that combining ensemble techniques, resampling the datasets and decomposing the multiclass problem results in improved detection of minority outlier (rare) classes.
    Keywords: Multiclass, Outlier, Ensemble, Model, Classification. DOI: 10.7176/JIEA/9-2-04. Publication date: April 30th 2019.
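The One-vs-One decomposition used above turns a k-class problem into k(k-1)/2 binary problems, one per class pair, and combines them by letting the pairwise winners vote. A minimal sketch of the voting step; `binary_predict` stands in for the trained pairwise classifiers and is an assumed interface, not from the paper:

```python
from itertools import combinations

def one_vs_one_predict(x, classes, binary_predict):
    """One-vs-One combination: query one binary classifier per class pair;
    binary_predict(x, a, b) returns whichever of a or b it prefers for x.
    The class winning the most pairwise duels is the final prediction."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[binary_predict(x, a, b)] += 1
    return max(classes, key=lambda c: votes[c])
```

For three classes this queries three classifiers; the rare (outlier) classes benefit because each pairwise training set is far less imbalanced than the full multiclass one.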