
    Extracting Features from Textual Data in Class Imbalance Problems

    [EN] We address class imbalance problems: classification problems where the target variable is binary and one class dominates the other. A central objective in these problems is to identify features that yield models with high precision/recall values, the standard yardsticks for assessing such models. Our features are extracted from the textual data inherent in such problems. We use n-gram frequencies as features and introduce a discrepancy score that measures the efficacy of an n-gram in highlighting the minority class. The frequency counts of the n-grams with the highest discrepancy scores are used as features to construct models with the desired metrics. Following the best practices of the services industry, many customer support tickets are audited and tagged as contract-compliant, whereas some are tagged as over-delivered. Based on in-field data, we use a random forest classifier and perform a randomized grid search over the model hyperparameters. Model scoring is performed using a scoring function. Our objective is to minimize follow-up costs by optimizing the recall score while maintaining a base-level precision score. The final optimized model achieves an acceptable recall score while staying above the target precision. We validate our feature selection method by comparing our model with one constructed using frequency counts of randomly chosen n-grams. We propose extensions of our feature extraction method to general classification (binary and multi-class) and regression problems. The discrepancy score is one measure of dissimilarity of distributions; other (more general) measures that we formulate could potentially yield more effective models.

    Aravamuthan, S.; Jogalekar, P.; Lee, J. (2022). Extracting Features from Textual Data in Class Imbalance Problems. Journal of Computer-Assisted Linguistic Research. 6:42-58. https://doi.org/10.4995/jclr.2022.182004258
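    The abstract does not give the discrepancy score's formula. One plausible variant, sketched below as an assumption rather than the authors' exact definition, scores each n-gram by how much more frequent it is (relatively) in minority-class documents than in majority-class documents:

    ```python
    from collections import Counter

    def ngrams(text, n=2):
        """Return word-level n-grams of a document."""
        words = text.lower().split()
        return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

    def discrepancy_scores(minority_docs, majority_docs, n=2):
        """Score each n-gram by how strongly it highlights the minority class.

        Here: (relative frequency in minority) - (relative frequency in majority);
        the paper's exact definition may differ.
        """
        min_counts = Counter(g for d in minority_docs for g in ngrams(d, n))
        maj_counts = Counter(g for d in majority_docs for g in ngrams(d, n))
        min_total = sum(min_counts.values()) or 1
        maj_total = sum(maj_counts.values()) or 1
        vocab = set(min_counts) | set(maj_counts)
        return {g: min_counts[g] / min_total - maj_counts[g] / maj_total
                for g in vocab}

    # Toy tickets: "over-delivered" (minority) vs. contract-compliant (majority)
    minority = ["ticket escalated extra work done", "extra work beyond contract"]
    majority = ["ticket closed as expected", "work done as expected"]
    scores = discrepancy_scores(minority, majority, n=2)
    top = sorted(scores, key=scores.get, reverse=True)[:3]
    ```

    The frequency counts of the top-scoring n-grams would then serve as model features, as the abstract describes.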

    Understanding the apparent superiority of over-sampling through an analysis of local information for class-imbalanced data

    Data plays a key role in the design of expert and intelligent systems, and data preprocessing is therefore a critical step in producing high-quality data and building accurate machine learning models. Over the past decades, increasing attention has been paid to the issue of class imbalance, now a research hotspot in a variety of fields. Although resampling methods, either under-sampling the majority class or over-sampling the minority class, stand among the most powerful techniques for facing this problem, their strengths and weaknesses have typically been discussed based only on the class imbalance ratio. However, several questions remain open and need further exploration. For instance, the subtle differences in performance between over- and under-sampling algorithms are still not well understood, and we hypothesize that they could be better explained by analyzing the inner structure of the data sets. Consequently, this paper investigates and illustrates the effects of resampling methods on the inner structure of a data set by exploiting local neighborhood information, identifying the sample types in both classes and analyzing their distribution in each resampled set. Experimental results indicate that the resampling methods producing the highest proportion of safe samples and the lowest proportion of unsafe samples correspond to those with the highest overall performance. The significance of this paper lies in the fact that our findings may contribute to a better understanding of how these techniques perform on class-imbalanced data and why over-sampling has been reported to be usually more efficient than under-sampling. The outcomes of this study may have an impact on both research and practice in the design of expert and intelligent systems, since a priori knowledge about the internal structure of imbalanced data sets could be incorporated into the learning algorithms.
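    The "safe"/"unsafe" sample types mentioned above are commonly derived from local neighborhood information by counting how many of a point's k nearest neighbors share its class. The sketch below assumes the widely used k=5 typology (safe, borderline, rare, outlier); the paper's exact scheme is not given in this abstract:

    ```python
    import numpy as np

    def sample_types(X, y, k=5):
        """Label each sample by how many of its k nearest neighbors share its class."""
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        # pairwise squared Euclidean distances
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)          # exclude the point itself
        nn = np.argsort(d2, axis=1)[:, :k]    # indices of the k nearest neighbors
        same = (y[nn] == y[:, None]).sum(1)   # same-class neighbor count
        labels = np.empty(len(y), dtype=object)
        labels[same >= 4] = "safe"
        labels[(same == 2) | (same == 3)] = "borderline"
        labels[same == 1] = "rare"
        labels[same == 0] = "outlier"
        return labels

    # Five clustered majority points and one isolated minority point
    X = [[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5], [5, 5]]
    y = [0, 0, 0, 0, 0, 1]
    labels = sample_types(X, y)
    ```

    Applying such a typology before and after resampling is one concrete way to analyze how a resampled set's distribution of safe and unsafe samples changes.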

    Combination of Synthetic Minority Oversampling Technique (SMOTE) and Backpropagation Neural Network to handle imbalanced class in predicting the use of contraceptive implants

    Contraceptive-implant failure, i.e. a pregnancy that occurs even though the implant is used correctly, is one source of unintended pregnancy in women. Nationally, 1,852 users, or 4% of 41,947 users, experienced contraceptive-implant failure in 2018. The ratio between failure and success rates of the contraceptive implant tends to be unbalanced (an imbalanced class distribution), which makes failure difficult to predict. Class imbalance occurs when one class contains far more samples than the others: the majority class holds the larger amount of data, the minority class the smaller, and the imbalance degrades the performance of classification algorithms. The Synthetic Minority Oversampling Technique (SMOTE) was used to balance the contraceptive-implant failure data; SMOTE yields better and more effective accuracy than other oversampling methods on imbalanced classes because it reduces overfitting. The balanced data were then classified with a backpropagation neural network. The resulting prediction system detects whether a woman using a contraceptive implant becomes pregnant. This study used 300 records, consisting of 285 majority records (not pregnant) and 15 minority records (pregnant), split into 270 training records and 30 test records. Of the 270 training records, 13 were minority and 257 majority. The minority training records were oversampled up to the size of the majority class, so the training set grew to 514 records: 257 majority, 13 original minority, and 244 synthetic minority records. The prediction system achieved an accuracy of 96.1% at the 500th and 1,000th epochs. The combination of SMOTE and a backpropagation neural network thus proved able to produce good predictions on an imbalanced class.
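    SMOTE's core step, generating a synthetic point by interpolating between a minority sample and one of its minority-class nearest neighbors, can be sketched in a few lines (a simplified illustration, not the implementation used in the study):

    ```python
    import numpy as np

    def smote_sample(X_min, n_new, k=5, rng=None):
        """Generate n_new synthetic minority samples by interpolating between
        randomly chosen minority points and their minority-class nearest
        neighbors (the core SMOTE step)."""
        rng = np.random.default_rng(rng)
        X_min = np.asarray(X_min, dtype=float)
        k = min(k, len(X_min) - 1)
        d2 = ((X_min[:, None] - X_min[None, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)              # a point is not its own neighbor
        nn = np.argsort(d2, axis=1)[:, :k]        # k nearest minority neighbors
        new = []
        for _ in range(n_new):
            i = rng.integers(len(X_min))          # pick a minority sample
            j = nn[i, rng.integers(k)]            # one of its minority neighbors
            lam = rng.random()                    # interpolation factor in [0, 1)
            new.append(X_min[i] + lam * (X_min[j] - X_min[i]))
        return np.array(new)

    # Toy minority class in the unit square; in the study above, 244 synthetic
    # samples were generated from the 13 minority training records.
    X_min = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    synthetic = smote_sample(X_min, n_new=244, rng=0)
    ```

    Because each synthetic point lies on a segment between two real minority points, the new samples stay inside the minority region rather than merely duplicating existing records.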

    The Effect of Dual Hyperparameter Optimization on Software Vulnerability Prediction Models

    Background: Prediction of software vulnerabilities is a major concern in the field of software security. Many researchers have worked to construct various software vulnerability prediction (SVP) models, and the emerging machine learning domain aids in building effective ones. The use of data balancing/resampling techniques and optimal hyperparameters can further improve their performance. Previous studies have shown the impact of hyperparameter optimization (HPO) on machine learning algorithms and data balancing techniques. Aim: The current study aims to analyze the impact of dual hyperparameter optimization on metrics-based SVP models. Method: This paper proposes a methodology, built on the Python framework Optuna, that optimizes the hyperparameters of both the machine learners and the data balancing techniques. For the experiments, we compared six combinations of five machine learners and five resampling techniques under default parameters and under optimized hyperparameters. Results: The Wilcoxon signed-rank test with the Bonferroni correction was applied, and it was observed that dual HPO performs better than HPO on the learners alone or on the data balancers alone. Furthermore, the paper assesses the impact of data complexity measures and concludes that HPO does not improve performance on data sets that exhibit high class overlap. Conclusion: The experimental analysis reveals that dual HPO is 64% effective in enhancing the performance of SVP models.
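    The essence of dual HPO is searching one joint space that contains hyperparameters of both the learner and the data balancer. The study drives this with Optuna; the dependency-free sketch below illustrates the same idea with a random joint search and a stand-in objective (in practice the objective would train the resampled model and return a cross-validated score):

    ```python
    import random

    def dual_random_search(objective, learner_space, balancer_space,
                           n_trials=50, seed=0):
        """Jointly sample hyperparameters for the learner AND the data
        balancer, keeping the best-scoring combination (the essence of
        dual HPO; the paper itself drives this loop with Optuna)."""
        rng = random.Random(seed)
        best_score, best_cfg = float("-inf"), None
        for _ in range(n_trials):
            # one config drawn from the merged (dual) search space
            cfg = {name: rng.choice(vals) for name, vals in
                   {**learner_space, **balancer_space}.items()}
            s = objective(cfg)
            if s > best_score:
                best_score, best_cfg = s, cfg
        return best_score, best_cfg

    # Stand-in objective: peaks at max_depth=8 and a fully balanced resample.
    def toy_objective(cfg):
        return -abs(cfg["max_depth"] - 8) - abs(cfg["sampling_ratio"] - 1.0)

    score, cfg = dual_random_search(
        toy_objective,
        learner_space={"max_depth": [2, 4, 8, 16], "n_estimators": [50, 100]},
        balancer_space={"sampling_ratio": [0.5, 0.75, 1.0]},
        n_trials=40,
    )
    ```

    Tuning only `learner_space` or only `balancer_space` corresponds to the single-HPO baselines that dual HPO outperformed in the study.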

    The impact of parameter optimization of ensemble learning on defect prediction

    Machine learning algorithms have configurable parameters that practitioners generally leave at their default settings. Modifying the parameters of a machine learning algorithm is called hyperparameter optimization (HO) and is performed to find the most suitable parameter setting in classification experiments. Such studies propose using either the default classification model or an optimal parameter configuration. This work investigates the effects of applying HO to ensemble learning algorithms in terms of defect prediction performance. Further, this paper presents a new ensemble learning algorithm, called novelEnsemble, for defect prediction data sets. The method has been tested on 27 data sets and compared with three alternatives. Welch's heteroscedastic F test is used to examine the difference between performance parameters, and Cliff's delta is applied to the comparison results to quantify the magnitude of the difference. According to the results of the experiment: 1) ensemble methods featuring HO perform better than a single predictor; 2) although the error of triTraining decreases linearly, it remains at an unacceptable level; 3) novelEnsemble yields promising results, especially in terms of area under the curve (AUC) and Matthews correlation coefficient (MCC); 4) the effect of HO is not stagnant across data sets of different scales; 5) not every ensemble learning approach creates a favorable effect with HO. To demonstrate the prominence of the hyperparameter selection process, the experiment is validated with suitable statistical analyses. The study revealed that the success of HO, contrary to expectations, depends not on the type of the classifiers but rather on the design of the ensemble learners.
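    Cliff's delta, used above to gauge the magnitude of performance differences, has a simple standard definition: the number of pairs where one sample exceeds the other, minus the reverse, divided by the total number of pairs. A minimal sketch with toy scores (not data from the paper):

    ```python
    def cliffs_delta(xs, ys):
        """Cliff's delta effect size: (#pairs with x > y minus #pairs with
        x < y) / (total pairs); ranges from -1 to 1, 0 meaning no effect."""
        gt = sum(1 for x in xs for y in ys if x > y)
        lt = sum(1 for x in xs for y in ys if x < y)
        return (gt - lt) / (len(xs) * len(ys))

    # AUC of a tuned ensemble vs. its default configuration (toy numbers)
    tuned   = [0.82, 0.85, 0.88, 0.90]
    default = [0.80, 0.81, 0.84, 0.86]
    delta = cliffs_delta(tuned, default)  # 0.625: a large positive effect
    ```

    Being rank-based, Cliff's delta needs no normality assumption, which is why it pairs naturally with the non-parametric comparisons used in such studies.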

    A new framework in improving prediction of class imbalance for student performance in Oman educational dataset using clustering based sampling techniques

    According to the Oman Education Portal (OEP), class imbalance is common in student-performance data: most students perform well, while only a small number underperform. Classification techniques applied to an imbalanced dataset can yield deceptively high prediction accuracy, because the majority class usually drives the overall predictive accuracy at the expense of abysmal performance on the minority class. The main objective of this study was to predict student performance under an imbalanced class distribution by exploiting different sampling techniques and several data mining classifier models. Three main sampling techniques, the synthetic minority over-sampling technique (SMOTE), random under-sampling (RUS), and clustering-based sampling, were compared to improve predictive accuracy on the minority class while maintaining satisfactory overall classification performance. Five data-mining classifiers, J48, Random Forest, K-Nearest Neighbour, Naïve Bayes, and Logistic Regression, were used to predict student performance, with 10-fold cross-validation to minimize sampling bias. Classifier performance was evaluated using four metrics: accuracy, false positive (FP) rate, Matthews correlation coefficient (MCC), and the receiver operating characteristic (ROC). OEP datasets from 2018 and 2019 were extracted to assess the efficacy of both the sampling techniques and the classification methods. The results indicated that K-Nearest Neighbour combined with the clustering-based sampling technique produced the best classification performance, with an MCC value of 98.4% under 10-fold cross-validation. The clustering-based sampling techniques improved the overall prediction performance for the minority class. In addition, the variables most important for accurately predicting student performance were identified using the Random Forest model.
    OEP contains a large amount of data, and analyses based on this large and complex data can help OEP stakeholders improve student performance and identify students who require additional attention.
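    Clustering-based sampling typically under-samples the majority class by clustering it and keeping representatives of each cluster, so the reduced majority set still covers its full diversity. The sketch below is a generic illustration of that idea (a small k-means with one kept point per cluster), not the paper's exact variant:

    ```python
    import numpy as np

    def cluster_undersample(X_maj, n_keep, n_iter=20):
        """Under-sample the majority class: run a small k-means with
        k = n_keep, then keep the real majority point closest to each
        centroid so every region of the class stays represented."""
        X_maj = np.asarray(X_maj, dtype=float)
        centroids = X_maj[:n_keep].copy()         # deterministic init
        for _ in range(n_iter):
            d2 = ((X_maj[:, None] - centroids[None, :]) ** 2).sum(-1)
            assign = d2.argmin(1)                 # nearest-centroid assignment
            for c in range(n_keep):
                members = X_maj[assign == c]
                if len(members):
                    centroids[c] = members.mean(0)
        d2 = ((X_maj[:, None] - centroids[None, :]) ** 2).sum(-1)
        keep = np.unique(d2.argmin(0))            # nearest real point per centroid
        return X_maj[keep]

    # Two obvious majority clusters; keep one representative of each
    maj = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
    kept = cluster_undersample(maj, n_keep=2)
    ```

    Unlike random under-sampling, which may discard an entire region of the majority class by chance, this keeps one point per discovered cluster.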

    Application of deep learning methods in materials microscopy for the quality assessment of lithium-ion batteries and sintered NdFeB magnets

    Quality control focuses on detecting product defects and monitoring activities to verify that products meet the desired quality standard. Many quality-control approaches use specialized image-processing software based on hand-crafted features designed by domain experts to detect objects and analyze images. Such models, however, are laborious and costly to develop and hard to maintain, while the resulting solution is often brittle and requires substantial adaptation for even slightly different use cases. For these reasons, quality control in industry is still frequently performed manually, which is time-consuming and error-prone. We therefore propose a more general, data-driven approach that builds on recent advances in computer vision and uses convolutional neural networks to learn representative features directly from the data. Whereas conventional methods use hand-crafted features to detect individual objects, deep learning approaches learn generalizable features directly from the training samples in order to detect diverse objects. This dissertation develops models and techniques for the automated detection of defects in light-microscopy images of materialographically prepared cross-sections. We develop defect-detection models that can be roughly divided into supervised and unsupervised deep learning techniques. In particular, several supervised deep learning models are developed for detecting defects in the microstructure of lithium-ion batteries, ranging from binary classification models based on a sliding-window approach with limited training data to complex defect detection and localization models based on one- and two-stage detectors.
    Our final model can detect and localize multiple classes of defects in large microscopy images with high accuracy and in near real time. Successfully training supervised deep learning models, however, usually requires a sufficiently large amount of labeled training examples, which are often not readily available and can be very costly to obtain. We therefore propose two approaches based on unsupervised deep learning for detecting anomalies in the microstructure of sintered NdFeB magnets without the need for labeled training data. These models detect defects by learning indicative features of only "normal" microstructure patterns from the training data. We present experimental results of the proposed defect-detection systems by performing a quality assessment on commercial samples of lithium-ion batteries and sintered NdFeB magnets.
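    The sliding-window classification approach mentioned above amounts to cutting a large microscopy image into overlapping patches and classifying each patch independently as defect/no-defect. The patch-extraction step can be sketched as follows (a generic illustration, not the dissertation's code):

    ```python
    import numpy as np

    def sliding_windows(image, win=64, stride=32):
        """Yield (row, col, patch) for every full window of a 2-D image,
        stepping by `stride` pixels; each patch would then be fed to a
        binary defect/no-defect classifier, and the (row, col) origin
        locates any detected defect in the full image."""
        h, w = image.shape
        for r in range(0, h - win + 1, stride):
            for c in range(0, w - win + 1, stride):
                yield r, c, image[r:r + win, c:c + win]

    # A 128x128 image with 64x64 windows at stride 32 yields a 3x3 grid
    image = np.zeros((128, 128))
    patches = list(sliding_windows(image, win=64, stride=32))
    ```

    A stride smaller than the window size makes the patches overlap, so defects falling on a patch boundary are still fully contained in a neighboring patch.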