174 research outputs found

    Cost-sensitive deep neural network ensemble for class imbalance problem

    In data mining, classification is the task of building a model that classifies data into a given set of categories. Most classification algorithms assume the class distribution of the data to be roughly balanced. In real-life applications such as direct marketing, fraud detection and churn prediction, the class imbalance problem usually occurs. The class imbalance problem refers to the issue that the number of examples belonging to one class is significantly greater than that of the others. When a standard classifier is trained on class-imbalanced data, it is usually biased toward the majority class. However, the minority class is the class of interest and more significant than the majority class. In the literature, data-level, algorithm-level and cost-sensitive learning methods have been proposed to address this problem. The experiments discussed in these studies were usually conducted on relatively small data sets or even on artificial data, so the performance of the methods on modern real-life data sets, which are more complicated, remains unclear. In this research, we study the background of the class imbalance problem and some of the state-of-the-art approaches that handle it. We also propose two cost-sensitive methods to address the problem, namely the Cost-Sensitive Deep Neural Network (CSDNN) and the Cost-Sensitive Deep Neural Network Ensemble (CSDE). CSDNN is a deep neural network based on Stacked Denoising Autoencoders (SDAE); we make it cost-sensitive by incorporating the misclassification costs of the majority and minority classes into the cost function of the SDAE. The other proposed method, CSDE, is an ensemble learning version of CSDNN, proposed to improve generalization performance on the class imbalance problem. In the first step, a deep neural network based on SDAE is created for layer-wise feature extraction. Next, we perform Bagging's resampling procedure with undersampling to split the training data into a number of bootstrap samples. 
In the third step, we apply a layer-wise feature extraction method to extract new feature samples from each of the hidden layers of the SDAE. Lastly, ensemble learning is performed by using each of the new feature samples to train a CSDNN classifier with a random cost vector. Experiments are conducted to compare the proposed methods with existing methods on real-life data sets in business domains. The results show that the proposed methods obtain promising results in handling the class imbalance problem and outperform all the other compared methods. This work makes three major contributions. First, we propose the CSDNN method, in which misclassification costs are considered in the training process. Second, we incorporate random undersampling with layer-wise feature extraction to perform ensemble learning. Third, this is the first work to conduct experiments on the class imbalance problem using large real-life data sets in business domains ranging from direct marketing, churn prediction, credit scoring and fraud detection to fake review detection.
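    The core idea of making a network cost-sensitive can be sketched as a class-weighted cross-entropy, where the loss of each example is scaled by the misclassification cost of its true class. This is a minimal NumPy illustration of the idea only; the exact cost function the thesis builds into the SDAE, and the cost values, are not reproduced here.

    ```python
    import numpy as np

    def cost_sensitive_ce(probs, labels, costs):
        """Cross-entropy in which each example's loss is scaled by the
        misclassification cost of its true class (illustrative sketch;
        the thesis's actual SDAE cost function may differ)."""
        probs = np.clip(probs, 1e-12, 1.0)
        per_example = -np.log(probs[np.arange(len(labels)), labels])
        return float(np.mean(costs[labels] * per_example))

    # Hypothetical setup: minority class (label 1) is given 10x the
    # cost of the majority class (label 0).
    probs = np.array([[0.9, 0.1],
                      [0.2, 0.8],
                      [0.6, 0.4]])
    labels = np.array([0, 1, 1])
    costs = np.array([1.0, 10.0])
    loss = cost_sensitive_ce(probs, labels, costs)
    uniform = cost_sensitive_ce(probs, labels, np.array([1.0, 1.0]))
    ```

    With uniform costs the function reduces to ordinary cross-entropy; raising the minority-class cost makes errors on minority examples dominate the gradient, which is what biases training away from the majority class.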

    Ensemble Learning for Fraud Detection in Online Payment Systems

    The imbalance problem in fraud detection systems refers to the unequal distribution of fraud and non-fraud cases in the data used to train machine learning models, which makes it difficult to detect fraudulent activity accurately. As a general rule, instances of fraud occur much less frequently than other transactions, resulting in a highly unbalanced dataset. This imbalance presents challenges for machine learning algorithms, which may become biased toward the majority class (non-fraud cases) and fail to detect fraud accurately. In such situations, a model may have high overall accuracy but low recall for the minority class (fraud cases), meaning that many instances of fraud are misclassified and never found. In this study, the Synthetic Minority Oversampling Technique (SMOTE) is used to balance the data set, and machine learning algorithms, namely decision trees (DT), enhanced logistic regression (ELR) and Naive Bayes (NB), are used to classify it. A majority voting mechanism is used to ensemble the DT, NB and ELR methods, and the performance of the resulting model is analyzed. The ensemble was superior to the individual algorithms in terms of accuracy (98.62%), F1 score (95.21%), precision (98.02%) and recall (96.75%).
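    The two mechanisms combined above, SMOTE's interpolation step and hard majority voting over the base classifiers, can be sketched in a few lines. The per-model predictions below are hypothetical stand-ins for the DT, NB and ELR outputs; in practice SMOTE also involves a nearest-neighbour search, which is omitted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def smote_point(x, neighbor):
        """SMOTE's core step: synthesize a new minority example at a
        random point on the segment between a minority sample and one
        of its nearest neighbours."""
        return x + rng.random() * (neighbor - x)

    def majority_vote(*prediction_arrays):
        """Hard majority voting over the label predictions of the base
        classifiers (binary fraud labels in {0, 1})."""
        stacked = np.stack(prediction_arrays)      # (n_models, n_samples)
        return (stacked.mean(axis=0) >= 0.5).astype(int)

    # Hypothetical per-model fraud predictions on five transactions.
    dt  = np.array([1, 0, 1, 0, 1])
    nb  = np.array([1, 0, 0, 0, 1])
    elr = np.array([0, 0, 1, 1, 1])
    ensemble = majority_vote(dt, nb, elr)          # -> [1, 0, 1, 0, 1]

    synthetic = smote_point(np.array([0.0, 0.0]), np.array([1.0, 1.0]))
    ```

    With an odd number of binary base classifiers there are no ties, which is one practical reason three models are a convenient ensemble size.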

    Stacked Generalizations in Imbalanced Fraud Data Sets using Resampling Methods

    This study uses stacked generalization, a two-step process for combining machine learning methods with so-called meta or super learners. In step one, the performance of the individual algorithms is improved by minimizing each algorithm's error rate to reduce its bias on the learning set; in step two, their results are fed into the meta learner, whose stacked, blended output demonstrates improved performance, with even the weakest algorithms learning better. The method is essentially an enhanced cross-validation strategy. Although the process demands great computational resources, the resulting performance metrics on resampled fraud data show that the increased system cost can be justified. A fundamental property of fraud data is that it is inherently not systematic, and the optimal resampling methodology has not yet been identified. Building a test harness that accounts for all permutations of algorithm and sample-set pairs ensures that the complex, intrinsic data structures are all thoroughly tested. A comparative analysis that applies stacked generalizations to fraud data provides the insight needed to find the optimal mathematical formulation for imbalanced fraud data sets. (Comment: 19 pages, 3 figures, 8 tables)
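    The "enhanced cross-validation" at the heart of stacking is the out-of-fold prediction scheme: each base learner predicts every training example only from folds it was not trained on, so the meta learner's inputs are not contaminated by overfitting. A minimal sketch, using a deliberately trivial majority-class base learner as a stand-in for the study's actual algorithms:

    ```python
    import numpy as np

    def out_of_fold_preds(X, y, fit_predict, k=5):
        """Step one of stacked generalization: build one meta-feature
        column by predicting each example from models trained on the
        other k-1 folds."""
        n = len(y)
        preds = np.empty(n)
        folds = np.array_split(np.arange(n), k)
        for fold in folds:
            train = np.setdiff1d(np.arange(n), fold)
            preds[fold] = fit_predict(X[train], y[train], X[fold])
        return preds

    # Toy base learner: always predict the training-set majority class
    # (a placeholder for any real classifier in the test harness).
    def majority_learner(X_tr, y_tr, X_te):
        return np.full(len(X_te), np.bincount(y_tr).argmax())

    X = np.arange(10).reshape(-1, 1)
    y = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
    z = out_of_fold_preds(X, y, majority_learner)
    ```

    Stacking repeats this for every base learner to build the meta learner's training matrix, one column per learner, while the meta learner itself is fit on those columns against the original labels.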

    A Novel Business Process Prediction Model Using a Deep Learning Method

    The ability to proactively monitor business processes is a main competitive differentiator for firms. Process execution logs generated by process-aware information systems help to make process-specific predictions for enabling proactive situational awareness. The goal of the proposed approach is to predict the next process event from the completed activities of the running process instance, based on the execution log data from previously completed process instances. By predicting process events, companies can initiate timely interventions to address undesired deviations from the desired workflow. The paper proposes a multi-stage deep learning approach that formulates the next-event prediction problem as a classification problem. Following a feature pre-processing stage with n-grams and feature hashing, a deep learning model consisting of an unsupervised pre-training component with stacked autoencoders and a supervised fine-tuning component is applied. Experiments on a variety of business process log datasets show that the multi-stage deep learning approach provides promising results. The study also compared the results to existing deep recurrent neural networks and conventional classification approaches. Furthermore, the paper addresses the identification of suitable hyperparameters for the proposed approach, and the handling of the imbalanced nature of business process event datasets.
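    The pre-processing stage described above, sliding an n-gram window over an activity trace and hashing each n-gram into a fixed-width vector, can be sketched as follows. The activity names and the 16-bucket width are hypothetical; the paper's actual n, hash dimension, and hash function are not given in this abstract.

    ```python
    import numpy as np

    def hash_ngrams(events, n=2, dim=16):
        """Turn a variable-length activity sequence into a fixed-length
        feature vector: each n-gram is hashed into one of `dim` buckets
        and the bucket count is incremented (the 'hashing trick')."""
        vec = np.zeros(dim)
        for i in range(len(events) - n + 1):
            gram = tuple(events[i:i + n])
            vec[hash(gram) % dim] += 1.0
        return vec

    # Hypothetical process trace of completed activities.
    trace = ["receive_order", "check_stock", "ship", "invoice"]
    features = hash_ngrams(trace)   # fixed-length classifier input
    ```

    Because the output width is fixed regardless of trace length, the resulting vectors can feed a standard feed-forward network such as the stacked-autoencoder model; hash collisions trade a small amount of information for a bounded feature space.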

    Predicting non-attendance in hospital outpatient appointments using Deep Learning Approach

    Hospital outpatient non-attendance imposes a substantial financial burden on hospitals and stems from multiple diverse causes. This research aims to build an advanced model for predicting non-attendance that covers the whole spectrum of probable contributing factors, collated from heterogeneous sources including electronic patient records and external non-hospital data. We propose a new non-attendance prediction model based on deep neural networks and machine learning models. The approach uses sparse stacked denoising autoencoders (SDAEs) to learn the underlying manifold of the data, thereby compacting information and providing a better representation that can afterwards be utilised by other learning models as well. The proposed approach is evaluated on real hospital data and compared with several well-known and scalable machine learning models. The evaluation results reveal that the proposed approach with a softmax layer and logistic regression outperforms the other methods in practice.
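    The denoising objective that makes an SDAE learn a robust representation can be sketched as follows: inputs are randomly corrupted with masking noise and the network is trained to reconstruct the clean version. The corruption rate, batch, and feature values below are illustrative; the model's actual encoder/decoder and sparsity penalty are not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def corrupt(x, p=0.3):
        """Masking noise used by denoising autoencoders: zero out a
        random fraction p of the inputs; the network must reconstruct
        the uncorrupted x from this degraded copy."""
        mask = rng.random(x.shape) >= p
        return x * mask

    def reconstruction_error(x, x_hat):
        """Mean squared reconstruction error, the quantity a denoising
        autoencoder minimizes."""
        return float(np.mean((x - x_hat) ** 2))

    x = rng.random((4, 8))        # toy batch of appointment features
    x_tilde = corrupt(x)          # corrupted encoder input
    err = reconstruction_error(x, x_tilde)  # loss if decoder = identity
    ```

    Stacking repeats this layer by layer: each trained encoder's hidden activations become the (re-corrupted) input of the next autoencoder, after which the whole network is fine-tuned with the supervised softmax layer.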

    Novel deep cross-domain framework for fault diagnosis of rotary machinery in prognostics and health management

    Improving the reliability of engineered systems is a crucial problem in many applications across engineering fields such as aerospace, nuclear energy, and water desalination. This requires efficient and effective system health monitoring methods, including processing and analyzing massive machinery data to detect anomalies and performing diagnosis and prognosis. In recent years, deep learning has been a fast-growing field and has shown promising results for Prognostics and Health Management (PHM) in interpreting condition monitoring signals such as vibration, acoustic emission, and pressure, owing to its capacity to mine complex representations from raw data. This doctoral research provides a systematic review of state-of-the-art deep learning-based PHM frameworks, an empirical analysis of bearing fault diagnosis benchmarks, and a novel multi-source domain adaptation framework. It emphasizes the most recent trends in the field and presents the benefits and potential of state-of-the-art deep neural networks for system health management. The limitations and challenges of existing technologies are also discussed, pointing to opportunities for future research. The empirical study reports the evaluation results of existing models on bearing fault diagnosis benchmark datasets in terms of performance metrics such as accuracy and training time, providing an important baseline for comparing and testing new models. A novel multi-source domain adaptation framework for fault diagnosis of rotary machinery is also proposed, which aligns the domains at both the feature level and the task level. The proposed framework transfers knowledge from multiple labeled source domains to a single unlabeled target domain by reducing the feature distribution discrepancy between the target domain and each source domain. The model can also be easily reduced to the single-source domain adaptation setting. 
Moreover, the model can be readily applied to unsupervised domain adaptation problems in other fields such as image classification and image segmentation. Further, the proposed model is extended with a novel conditional weighting mechanism that aligns the class-conditional probabilities of the domains and reduces the effect of irrelevant source domains, a critical issue in multi-source domain adaptation algorithms. The experimental verification results show the superiority of the proposed framework over state-of-the-art multi-source domain adaptation models.
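    Reducing the "feature distribution discrepancy" between a source and a target domain requires a measurable notion of that discrepancy. The abstract does not name the dissertation's metric; a common choice in domain adaptation, shown here purely as an illustration, is the squared Maximum Mean Discrepancy (MMD) with an RBF kernel:

    ```python
    import numpy as np

    def rbf_mmd2(X, Y, gamma=1.0):
        """Biased estimate of squared Maximum Mean Discrepancy with an
        RBF kernel: zero when the two samples coincide, larger as their
        feature distributions drift apart (illustrative; the
        framework's actual discrepancy measure may differ)."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)
        return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

    rng = np.random.default_rng(0)
    src = rng.normal(0.0, 1.0, (50, 4))   # labeled source-domain features
    tgt = rng.normal(2.0, 1.0, (50, 4))   # shifted target-domain features
    gap  = rbf_mmd2(src, tgt)   # large: distributions differ
    same = rbf_mmd2(src, src)   # zero: identical samples
    ```

    In a multi-source framework of this kind, a term like `rbf_mmd2` between the target features and each source's features would be added to the training loss, so the feature extractor learns representations under which the domains are indistinguishable.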