168 research outputs found

    Building well-performing classifier ensembles: model and decision level combination.

    There is a continuing drive for better, more robust generalisation performance from classification systems, and prediction systems in general. Ensemble methods, or the combining of multiple classifiers, have become an accepted and successful tool for doing this, though the reasons for success are not always entirely understood. In this thesis, we review the multiple classifier literature and consider the properties an ensemble of classifiers - or collection of subsets - should have in order to be combined successfully. We find that the framework of Stochastic Discrimination provides a well-defined account of these properties, which are shown to be strongly encouraged in a number of the most popular/successful methods in the literature via differing algorithmic devices. This uncovers some interesting and basic links between these methods, and aids understanding of their success and operation in terms of a kernel induced on the training data, with form particularly well suited to classification. One property that is desirable in both the SD framework and in a regression context, the ambiguity decomposition of the error, is de-correlation of individuals. This motivates the introduction of the Negative Correlation Learning method, in which neural networks are trained in parallel in a way designed to encourage de-correlation of the individual networks. The training is controlled by a parameter λ governing the extent to which correlations are penalised. Theoretical analysis of the dynamics of training results in an exact expression for the interval in which we can choose λ while ensuring stability of the training, and a value λ∗ for which the training has some interesting optimality properties. These values depend only on the size N of the ensemble. Decision level combination methods often result in a difficult-to-interpret model, and NCL is no exception. However, in some applications there is a need for understandable decisions and interpretable models. In response to this, we depart from the standard decision level combination paradigm to introduce a number of model level combination methods. As decision trees are one of the most interpretable model structures used in classification, we chose to combine structure from multiple individual trees to build a single combined model. We show that extremely compact, well-performing models can be built in this way. In particular, a generalisation of bottom-up pruning to a multiple-tree context produces good results in this regard. Finally, we develop a classification system for a real-world churn prediction problem, illustrating some of the concepts introduced in the thesis, and a number of more practical considerations which are of importance when developing a prediction system for a specific problem.
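
    The NCL penalty described above can be written down compactly. The sketch below is a minimal numpy illustration, assuming the standard Liu & Yao formulation of the per-member error, with toy linear regressors standing in for the thesis's neural networks and the ensemble mean treated as constant when differentiating (a common simplification); the values of λ and N are arbitrary.

```python
# Minimal sketch of the Negative Correlation Learning (NCL) penalty: each member i
# minimises 0.5*(f_i - y)^2 + lambda*(f_i - f_bar)*sum_{j!=i}(f_j - f_bar).
# Toy linear members and a fixed-f_bar gradient are assumptions of this sketch.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

N = 5                          # ensemble size
lam = 0.5                      # lambda: strength of the correlation penalty
W = rng.normal(size=(N, 5))    # one weight vector per ensemble member

for _ in range(500):
    F = X @ W.T                              # per-member predictions, shape (samples, N)
    f_bar = F.mean(axis=1, keepdims=True)    # ensemble (mean) prediction
    # Since sum_{j!=i}(f_j - f_bar) = -(f_i - f_bar), the penalty gradient w.r.t. f_i
    # (holding f_bar fixed) is -lambda*(f_i - f_bar).
    grad_out = (F - y[:, None]) - lam * (F - f_bar)
    W -= 0.01 * (grad_out.T @ X) / len(X)    # one gradient step per member

print("ensemble MSE:", np.mean(((X @ W.T).mean(axis=1) - y) ** 2))
```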

    A Survey and Implementation of Machine Learning Algorithms for Customer Churn Prediction

    Estimating customer churn is an important task for businesses because it helps them identify the customers most likely to leave and take preventative measures to retain them, improving customer satisfaction and, in turn, revenue. In this article, we focus on developing a machine-learning model for predicting customer churn from historical customer data. We performed feature engineering on the data, handled missing values, encoded the categorical variables, and preprocessed the data before evaluating models with a variety of performance indicators, including accuracy, precision, recall, F1 score, and ROC AUC score. Our feature importance analysis revealed that monthly fees, customer tenure, contract type, and payment method have the most impact on forecasting customer churn. The best-performing model is a soft voting classifier built over the four strongest individual classifiers, with an accuracy of 0.78 and a comparatively better ROC AUC score of 0.82.
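
    The soft-voting ensemble singled out as the best-performing model can be sketched with scikit-learn. The abstract does not name the four base classifiers, so the estimators below (logistic regression, random forest, gradient boosting, k-nearest neighbours) and the synthetic data are placeholders, not the study's actual setup.

```python
# Minimal sketch of a soft-voting ensemble for churn prediction; the four base
# classifiers and the synthetic data are placeholders, not the study's own.
from sklearn.ensemble import VotingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.datasets import make_classification

# Stand-in data; in practice X, y come from the engineered customer dataset.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.75], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("gb", GradientBoostingClassifier(random_state=42)),
        ("knn", KNeighborsClassifier(n_neighbors=15)),
    ],
    voting="soft",          # average predicted probabilities instead of hard labels
)
voter.fit(X_train, y_train)

proba = voter.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, voter.predict(X_test)))
print("ROC AUC :", roc_auc_score(y_test, proba))
```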

    An Efficient Hybrid Classifier Model for Customer Churn Prediction

    Customer churn prediction is used to retain the customers at the highest risk of churn by proactively engaging with them. Many machine-learning-based data mining approaches have previously been used to predict client churn. However, single-model classifiers increase the scatter of predictions and give low model performance, which degrades the reliability of the model. Hence, a bag-of-learners classification is used, in which high-performing learners are selected to identify wrongly and correctly classified instances, thereby increasing the robustness of the model's performance. Furthermore, loss of interpretability in the model during prediction leads to insufficient prediction accuracy. Hence, an associative classifier with the Apriori algorithm is introduced as a booster that integrates classification and association rule mining to build a strong classification model, in which frequent itemsets are obtained using the Apriori algorithm. Accurate prediction is then provided by testing the instances misclassified in the bagging phase against the rules generated by the associative classifier. The proposed models are simulated in Python, and the results achieve high accuracy, ROC score, precision, specificity, F-measure, and recall.
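
    A rough sketch of the two-stage hybrid is given below: a bagging ensemble first, then Apriori rule mining (via the mlxtend library, an assumption) over the instances the ensemble misclassifies. The paper's exact rule-selection and boosting logic is not reproduced; the function name hybrid_churn_model and the thresholds are illustrative only.

```python
# Sketch of the bagging + associative-classifier idea; mlxtend is assumed for
# Apriori.  Continuous features would need binning before the one-hot step.
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from mlxtend.frequent_patterns import apriori, association_rules

def hybrid_churn_model(X: pd.DataFrame, y: pd.Series, min_support=0.1):
    # Stage 1: bag of learners (decision trees by default) for the main prediction.
    bag = BaggingClassifier(n_estimators=50, random_state=0)
    bag.fit(X, y)
    wrong = X[bag.predict(X) != y]                  # instances the ensemble gets wrong

    # Stage 2: mine class-association rules over those instances with Apriori.
    data = wrong.astype(str)
    data["class"] = y.loc[wrong.index].astype(str)  # include the label as an item
    onehot = pd.get_dummies(data).astype(bool)      # Apriori expects boolean columns
    itemsets = apriori(onehot, min_support=min_support, use_colnames=True)
    rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
    return bag, rules.sort_values("lift", ascending=False)
```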

    Projection pursuit random forest using discriminant feature analysis model for churners prediction in telecom industry

    A major and demanding issue in the telecommunications industry is the prediction of churning customers. Churn describes a customer who defects from one telecom service provider to a competitor in search of better service offers. Companies in the telecom sector frequently maintain customer relationship management offices whose main objective is winning back defecting clients, because preserving long-term customers can be much more beneficial to a company than gaining newly recruited ones. Researchers and practitioners are paying great attention, and investing heavily, in developing robust customer churn prediction models, especially in the telecommunication business, by proposing numerous machine learning approaches. Many classification approaches have been established, but tree-based methods have proven the most effective in recent times. The main contribution of this research is to predict churners/non-churners in the telecom sector using a projection pursuit random forest (PPForest) that applies discriminant feature analysis as a novel extension of the conventional random forest approach for learning oblique projection pursuit trees (PPtree). The proposed methodology leverages two discriminant analysis methods to compute the projection index used in the construction of the PPtree. The first method uses Support Vector Machines (SVM) as a classifier in the construction of the PPForest to differentiate between churner and non-churner customers. The second method uses Linear Discriminant Analysis (LDA) to achieve linear splits at variable nodes during oblique PPtree construction, producing individual classifiers that are more robust and more diverse than those of a classical random forest. The proposed methods were found to deliver the best performance across a range of measures, e.g. accuracy, hit rate, ROC curve, Gini coefficient, Kolmogorov-Smirnov statistic, lift coefficient, H-measure, and AUC. Moreover, PPForest with LDA applied directly to the raw data delivers an effective customer churn prediction model.
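
    The LDA-driven oblique split at the heart of the PPtree can be illustrated in a few lines. This is only a sketch of a single node split under the assumption that LDA supplies the projection direction; the full PPtree/PPForest construction (and the SVM-based projection variant) is not reproduced here, and the function names are made up for illustration.

```python
# Sketch of one LDA-based oblique split, as used at the nodes of a projection
# pursuit tree: project onto the discriminant axis, then threshold that value.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_oblique_split(X, y):
    """Fit one oblique split: an LDA projection plus a threshold on it."""
    lda = LinearDiscriminantAnalysis(n_components=1)
    z = lda.fit_transform(X, y).ravel()          # 1-D discriminant projection
    m0, m1 = z[y == 0].mean(), z[y == 1].mean()  # class means in projected space
    return lda, (m0 + m1) / 2.0                  # split halfway between them

def route_left(X, lda, threshold):
    """Boolean mask of the samples sent to the left child by the oblique rule."""
    return lda.transform(X).ravel() <= threshold

X, y = make_classification(n_samples=500, n_features=8, random_state=1)
lda, t = lda_oblique_split(X, y)
left = route_left(X, lda, t)     # recurse on X[left] / X[~left] to grow a PPtree
```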

    INTEGRATING KANO MODEL WITH DATA MINING TECHNIQUES TO ENHANCE CUSTOMER SATISFACTION

    The business world is becoming more competitive over time; therefore, businesses are forced to improve their strategies in every aspect. Determining the elements that contribute to clients' contentment is thus one of the critical needs of businesses developing successful products for the market. The Kano model is one of the models that help determine which features must be included in a product or service to improve customer satisfaction. The model focuses on highlighting the most relevant attributes of a product or service, along with customers' estimation of how these attributes can be used to predict satisfaction with specific services or products. This research aims at developing a method to integrate the Kano model with data mining approaches to select the relevant attributes that drive customer satisfaction, with a specific focus on higher education. The significant contribution of this research is to improve the quality of the academic support and development services United Arab Emirates University provides to its students by solving the problem of selecting features that are not methodically correlated with customer satisfaction, which could reduce the risk of investing in features that are ultimately irrelevant to enhancing satisfaction. Questionnaire data were collected from 646 students at United Arab Emirates University. The experiments suggest that Extreme Gradient Boosting Regression produces the best results for this kind of problem. Based on the integration of the Kano model and the feature selection method, the number of features used to predict customer satisfaction is reduced to four. It was found that integrating either Chi-Square or Analysis of Variance (ANOVA) feature selection with the Kano model gives higher values of the Pearson correlation coefficient and R². Moreover, a prediction was made using the union of the Kano model's most important features and the most frequent features among 8 clusters, and it shows high performance.
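
    A hedged sketch of the pipeline the abstract describes: ANOVA-style feature selection down to four attributes followed by an Extreme Gradient Boosting regressor, scored with the Pearson correlation coefficient and R². The questionnaire data is not available here, so synthetic regression data stands in, and f_regression is used as the ANOVA scorer; the study's own preprocessing and hyperparameters are not reproduced.

```python
# Sketch: select the 4 strongest attributes, fit an XGBoost regressor, and
# report Pearson r and R^2.  Synthetic data and hyperparameters are placeholders.
from scipy.stats import pearsonr
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from xgboost import XGBRegressor

X, y = make_regression(n_samples=646, n_features=20, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

selector = SelectKBest(score_func=f_regression, k=4)   # keep the 4 strongest attributes
X_tr_sel = selector.fit_transform(X_tr, y_tr)
X_te_sel = selector.transform(X_te)

model = XGBRegressor(n_estimators=300, learning_rate=0.05, random_state=0)
model.fit(X_tr_sel, y_tr)
pred = model.predict(X_te_sel)

print("Pearson r:", pearsonr(y_te, pred)[0])
print("R^2      :", r2_score(y_te, pred))
```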

    Clustering Prediction Techniques in Defining and Predicting Customers Defection: The Case of E-Commerce Context

    With the growth of the e-commerce sector, customers have more choices, which encourages them to divide their purchases amongst several e-commerce sites and compare competitors' products, and this increases the risk of churning. A review of the literature on customer churn models reveals that no prior research has considered both partial and total defection in non-contractual online environments; instead, studies focused on either total or partial defection. This study proposes a customer churn prediction model for an e-commerce context, wherein a clustering phase based on the integration of the k-means method and the Length-Recency-Frequency-Monetary (LRFM) model is employed to define churn, followed by a multi-class prediction phase based on three classification techniques: a simple decision tree, artificial neural networks, and a decision tree ensemble, in which the dependent variable classifies a particular customer as continuing loyal buying patterns (non-churned), a partial defector (partially-churned), or a total defector (totally-churned). Macro-averaging measures, including average accuracy and the macro-averages of precision, recall, and F1, are used to evaluate classifier performance under 10-fold cross-validation. Using real data from an online store, the results show the superiority of the decision tree ensemble model over the other models in identifying both future partial and total defection.
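
    The clustering-then-classification pipeline can be sketched as follows: LRFM features clustered with k-means to derive churn labels, then a tree ensemble evaluated with macro-averaged scores under 10-fold cross-validation. The stand-in LRFM table and the recency-based rule mapping clusters to churn classes are assumptions; the study's own mapping and its three specific classifiers are not reproduced (a random forest stands in for the decision tree ensemble).

```python
# Sketch of k-means over LRFM features to define churn classes, followed by a
# multi-class classifier scored with macro-averaged metrics.  The LRFM table and
# the cluster-to-class mapping rule are assumptions of this sketch.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
lrfm = pd.DataFrame({                               # stand-in LRFM table, one row per customer
    "length":    rng.integers(1, 1000, 3000),       # days between first and last purchase
    "recency":   rng.integers(1, 365, 3000),        # days since last purchase
    "frequency": rng.integers(1, 60, 3000),         # number of purchases
    "monetary":  rng.gamma(2.0, 100.0, 3000),       # total spend
})

z = StandardScaler().fit_transform(lrfm)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)
# Map clusters to churn classes by mean recency: lowest = non-churned, highest = totally-churned.
order = lrfm.groupby(clusters)["recency"].mean().sort_values().index
churn_class = pd.Series(clusters).map({c: rank for rank, c in enumerate(order)})

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(clf, z, churn_class, cv=10,
                        scoring=["accuracy", "precision_macro", "recall_macro", "f1_macro"])
print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})
```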

    Machine Learning Techniques Applied to Telecommunication Data

    Abstract pending: Machine Learning Techniques Applied to Telecommunication Data

    Analytical customer relationship management in retailing supported by data mining techniques

    Doctoral thesis in Industrial Engineering and Management. Faculdade de Engenharia, Universidade do Porto. 201

    Customer churn prediction in the banking industry

    Internship report presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Business Analytics. The objective of this project is to create a predictive model that will decrease customer churn in a Portuguese bank; that is, we intend to identify customers who could be considering closing their checking accounts. So that the bank can take the necessary corrective measures, the model also aims to determine the characteristics of the customers who decided to leave. The model makes use of customer data the organization already has to hand. Data pre-processing, with data cleansing, transformation, and reduction, was the initial stage of the analysis. The dataset is imbalanced, meaning that we have a small number of positive outcomes (churners); thus, under-sampling and other approaches were employed to address this issue. The predictive models used are logistic regression, support vector machines, decision trees, and artificial neural networks, and parameter tuning was conducted for each. In conclusion, for customer churn prediction the recommended model is a support vector machine with a precision of 0.84 and an AUROC of 0.905. These findings will contribute to customer lifetime value, helping the bank better understand its customers' behaviour and allowing it to design strategies accordingly with the information obtained.
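
    The recommended approach can be sketched as an under-sampling plus SVM pipeline evaluated on precision and AUROC. The bank's data is not public, so a synthetic imbalanced dataset stands in, and the imbalanced-learn library is assumed for the under-sampling step; the tuned hyperparameters of the report are not reproduced.

```python
# Sketch: random under-sampling of the majority (non-churn) class, then an SVM,
# scored on precision and AUROC.  Data and hyperparameters are placeholders.
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import precision_score, roc_auc_score

X, y = make_classification(n_samples=10000, n_features=15, weights=[0.93], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("under", RandomUnderSampler(random_state=7)),   # balance classes during fit only
    ("svm", SVC(C=1.0, kernel="rbf", probability=True, random_state=7)),
])
pipe.fit(X_tr, y_tr)

pred = pipe.predict(X_te)
proba = pipe.predict_proba(X_te)[:, 1]
print("precision:", precision_score(y_te, pred))
print("AUROC    :", roc_auc_score(y_te, proba))
```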