
    Combining Neuro-Fuzzy Classifiers for Improved Generalisation and Reliability

    In this paper, a combination of neuro-fuzzy classifiers for improved classification performance and reliability is considered. A general fuzzy min-max (GFMM) classifier with an agglomerative learning algorithm is used as the main building block. An alternative approach to combining individual classifier decisions, involving combination at the classifier model level, is proposed. The complexity and transparency of the resulting classifier are comparable to those of classifiers generated during a single cross-validation procedure, while the improved classification performance and reduced variance are comparable to those of an ensemble of classifiers with combined (averaged/voted) decisions. We also illustrate how combining at the model level can be used to speed up the training of GFMM classifiers for large data sets.
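
    The abstract does not spell out the merging procedure, but the contrast between decision-level and model-level combination can be sketched for hyperbox classifiers. Everything below is an illustrative assumption: a simplified Simpson-style membership function and a pooled-hyperbox stand-in for the authors' agglomerative model-level merge.

```python
import numpy as np

# Hypothetical minimal hyperbox model: a list of (v_min, v_max, label)
# hyperboxes, as in fuzzy min-max networks.
def membership(x, v_min, v_max, gamma=1.0):
    # 1 inside the box, decaying linearly with distance outside it.
    d = np.maximum(0.0, np.maximum(v_min - x, x - v_max))
    return np.clip(1.0 - gamma * d, 0.0, 1.0).min()

def predict(model, x):
    scores = {}
    for v_min, v_max, label in model:
        scores[label] = max(scores.get(label, 0.0), membership(x, v_min, v_max))
    return max(scores, key=scores.get)

# Decision-level combination: majority vote over member predictions.
def combine_decisions(models, x):
    votes = [predict(m, x) for m in models]
    return max(set(votes), key=votes.count)

# Model-level combination: pool all hyperboxes into one model (the paper
# additionally refines the pooled set agglomeratively; omitted here).
def combine_models(models):
    return [box for m in models for box in m]

model_a = [(np.array([0.0, 0.0]), np.array([1.0, 1.0]), "A")]
model_b = [(np.array([2.0, 2.0]), np.array([3.0, 3.0]), "B")]
print(predict(combine_models([model_a, model_b]), np.array([0.5, 0.4])))
```

    The pooled result stays a single, inspectable set of hyperboxes, which is the transparency argument made in the abstract; decision-level voting instead requires storing and querying every ensemble member.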

    Ensemble of radial basis neural networks with k-means clustering for heating energy consumption prediction

    For the prediction of the heating energy consumption of a university campus, an ensemble of neural networks is proposed. Actual measured data are used for training and testing the models. Improvement of prediction accuracy through k-means clustering, used to generate the training subsets for the individual radial basis function networks, is examined, with the number of clusters varying from 2 to 5. The outputs of the ensemble members are aggregated using simple, weighted, and median-based averaging. It is shown that the ensembles achieve better prediction results than any individual member network. Data used for this paper were gathered during a study visit to NTNU, as part of the collaborative project Sustainable energy and environment in Western Balkans.
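
    A minimal NumPy sketch of the three aggregation schemes named above; the member predictions and weights are made-up placeholders (e.g. weights derived from each member's validation error), not the paper's data.

```python
import numpy as np

# preds has shape (n_members, n_samples): each row is one RBF network's
# heat-consumption predictions for the same samples.
def simple_average(preds):
    return preds.mean(axis=0)

def weighted_average(preds, weights):
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * preds).sum(axis=0) / w.sum()

def median_average(preds):
    return np.median(preds, axis=0)

preds = np.array([[10.2, 11.0],
                  [ 9.8, 10.6],
                  [10.5, 11.4]])
print(simple_average(preds))
print(weighted_average(preds, [0.5, 0.3, 0.2]))
print(median_average(preds))
```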

    Diversity control for improving the analysis of consensus clustering

    Consensus clustering has emerged as a powerful technique for obtaining better clustering results, where a set of data partitions (ensemble) are generated, which are then combined to obtain a consolidated solution (consensus partition) that outperforms all of the members of the input set. The diversity of ensemble partitions has been found to be a key aspect for obtaining good results, but the conclusions of previous studies are contradictory. Therefore, ensemble diversity analysis is currently an important issue because there are no methods for smoothly changing the diversity of an ensemble, which makes it very difficult to study the impact of ensemble diversity on consensus results. Indeed, ensembles with similar diversity can have very different properties, thereby producing a consensus function with unpredictable behavior. In this study, we propose a novel method for increasing and decreasing the diversity of data partitions in a smooth manner by adjusting a single parameter, thereby achieving fine-grained control of ensemble diversity. The results obtained using well-known data sets indicate that the proposed method is effective for controlling the dissimilarity among ensemble members to obtain a consensus function with smooth behavior. This method is important for facilitating the analysis of the impact of ensemble diversity in consensus clustering.
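
    The paper's diversity-control parameter is not reproduced here, but a common way to quantify ensemble diversity is the mean pairwise dissimilarity between partitions, e.g. one minus the adjusted Rand index; a minimal sketch with scikit-learn:

```python
from itertools import combinations
from sklearn.metrics import adjusted_rand_score

# Diversity as the mean pairwise dissimilarity (1 - ARI) over all pairs
# of partitions in the ensemble.
def ensemble_diversity(partitions):
    pairs = list(combinations(partitions, 2))
    return sum(1.0 - adjusted_rand_score(a, b) for a, b in pairs) / len(pairs)

ensemble = [[0, 0, 1, 1],
            [0, 1, 1, 0],
            [1, 1, 0, 0]]  # third partition is a relabelling of the first
print(ensemble_diversity(ensemble))
```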

    Oversampling for Imbalanced Learning Based on K-Means and SMOTE

    Learning from class-imbalanced data continues to be a common and challenging problem in supervised learning, as standard classification algorithms are designed to handle balanced class distributions. While different strategies exist to tackle this problem, methods which generate artificial data to achieve a balanced class distribution are more versatile than modifications to the classification algorithm. Such techniques, called oversamplers, modify the training data, allowing any classifier to be used with class-imbalanced datasets. Many algorithms have been proposed for this task, but most are complex and tend to generate unnecessary noise. This work presents a simple and effective oversampling method based on k-means clustering and SMOTE oversampling, which avoids the generation of noise and effectively overcomes imbalances between and within classes. Empirical results of extensive experiments with 71 datasets show that training data oversampled with the proposed method improves classification results. Moreover, k-means SMOTE consistently outperforms other popular oversampling methods. An implementation is made available in the Python programming language.
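
    The imbalanced-learn package provides a KMeansSMOTE oversampler based on this approach (whether it is the authors' released implementation is not claimed here); a minimal sketch on toy data, where the cluster count and neighbour setting are illustrative choices:

```python
from collections import Counter
import numpy as np
from imblearn.over_sampling import KMeansSMOTE  # pip install imbalanced-learn

rng = np.random.default_rng(0)
# Toy data: 100 majority samples around the origin, 10 minority samples
# in a separate blob.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 0.5, (10, 2))])
y = np.array([0] * 100 + [1] * 10)

# Cluster with k-means first, then apply SMOTE only inside clusters with
# enough minority presence.
sampler = KMeansSMOTE(kmeans_estimator=5, k_neighbors=3, random_state=0)
X_res, y_res = sampler.fit_resample(X, y)
print(Counter(y), Counter(y_res))
```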

    Modeling Heterogeneous Statistical Patterns in High-dimensional Data by Adversarial Distributions: An Unsupervised Generative Framework

    Since label collection is prohibitively expensive and time-consuming, unsupervised methods are preferred in applications such as fraud detection. Meanwhile, such applications usually require modeling the intrinsic clusters in high-dimensional data, which often display heterogeneous statistical patterns, as the patterns of different clusters may appear in different dimensions. Existing methods propose to model the data clusters on selected dimensions, yet globally omitting any dimension may damage the pattern of certain clusters. To address these issues, we propose a novel unsupervised generative framework called FIRD, which utilizes adversarial distributions to fit and disentangle the heterogeneous statistical patterns. When applied to discrete spaces, FIRD effectively distinguishes synchronized fraudsters from normal users. FIRD also provides superior performance on anomaly detection datasets compared with state-of-the-art anomaly detection methods (over 5% average AUC improvement). Experimental results on various datasets verify that the proposed method better models the heterogeneous statistical patterns in high-dimensional data and benefits downstream applications.

    A Hybrid Resampling Approach for Multiclass Skewed Datasets and Experimental Analysis with Diverse Classifier Models

    In real-life scenarios, imbalanced datasets pose a prevalent challenge for classification tasks, where certain classes are heavily underrepresented compared to others. To combat this issue, this article introduces DOSAKU, a novel hybrid resampling technique that combines the strengths of the DOSMOTE and AKCUS algorithms. By integrating both oversampling and undersampling methods, DOSAKU significantly reduces the imbalance ratio of datasets, enhancing the performance of classifiers. The proposed approach is evaluated on multiple models employing different classifiers, and the results demonstrate its superiority over existing resampling methods, making it an effective solution for handling class imbalance challenges. DOSAKU's promising performance is a substantial contribution to the field of imbalanced data classification, as it offers a robust and innovative solution for improving predictive model accuracy and fairness in real-world applications where imbalanced datasets are common.
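
    DOSMOTE and AKCUS are not described in the abstract, so the sketch below substitutes generic components (SMOTE oversampling plus random undersampling from imbalanced-learn) purely to illustrate the hybrid resampling pattern on a multiclass skewed dataset; all target counts are arbitrary.

```python
from collections import Counter
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification

# Skewed 3-class toy data: roughly 80% / 15% / 5% class shares.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=0)

# Hybrid resampling: oversample the minority classes up, then undersample
# the majority class down, shrinking the imbalance ratio from both sides.
hybrid = Pipeline([
    ("over", SMOTE(sampling_strategy={1: 600, 2: 400}, random_state=0)),
    ("under", RandomUnderSampler(sampling_strategy={0: 800}, random_state=0)),
])
X_res, y_res = hybrid.fit_resample(X, y)
print(Counter(y), Counter(y_res))
```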

    Landslide Susceptibility Mapping Using the Stacking Ensemble Machine Learning Method in Lushui, Southwest China

    Landslide susceptibility mapping is considered to be a prerequisite for landslide prevention and mitigation. However, delineating the spatial occurrence pattern of landslides remains a challenge. This study investigates the potential application of the stacking ensemble learning technique for landslide susceptibility assessment. In particular, support vector machine (SVM), artificial neural network (ANN), logistic regression (LR), and naive Bayes (NB) were selected as base learners for the stacking ensemble method. The resampling scheme and Pearson's correlation analysis were jointly used to evaluate the importance level of these base learners. A total of 388 landslides and 12 conditioning factors in the Lushui area (Southwest China) were used as the dataset to develop the landslide models. The landslides were randomly separated into two parts, with 70% used for model training and 30% used for model validation. The models' performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC) and statistical measures. The results showed that the stacking-based ensemble models achieved improved predictive accuracy compared to the single algorithms, while the SVM-ANN-NB-LR (SANL) model, the SVM-ANN-NB (SAN) model, and the ANN-NB-LR (ANL) model performed equally well, with AUC values of 0.931, 0.940, and 0.932, respectively, for the validation stage. The correlation coefficient between LR and SVM was the highest across all resampling rounds, with an average value of 0.72. This indicates that LR and SVM played an almost equal role when the SANL ensemble was applied for landslide susceptibility analysis; it is therefore feasible to use the SAN model or the ANL model for the study area. The findings of this study suggest that the stacking ensemble machine learning method is promising for landslide susceptibility mapping in the Lushui area and is capable of targeting areas prone to landslides.
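
    A minimal stand-in for the SANL stack using scikit-learn's StackingClassifier; the synthetic data, preprocessing, and hyperparameters are assumptions, not the study's setup (which used 388 mapped landslides and 12 conditioning factors with a 70/30 split).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: binary landslide / non-landslide labels over 12 factors.
X, y = make_classification(n_samples=776, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# SVM-ANN-NB-LR base learners, combined by a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("ann", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))),
        ("nb", GaussianNB()),
        ("lr", make_pipeline(StandardScaler(), LogisticRegression())),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```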

    On the class overlap problem in imbalanced data classification.

    Class imbalance is an active research area in the machine learning community. However, existing and recent literature has shown that class overlap has a higher negative impact on the performance of learning algorithms. This paper provides a detailed critical discussion and objective evaluation of class overlap in the context of imbalanced data and its impact on classification accuracy. First, we present a thorough experimental comparison of class overlap and class imbalance. Unlike previous work, our experiments were carried out across the full scale of class overlap and an extreme range of class imbalance degrees. Second, we provide an in-depth critical technical review of existing approaches to handling imbalanced datasets. Existing solutions from the selected literature are critically reviewed and categorised as class distribution-based and class overlap-based methods. Emerging techniques and the latest developments in this area are also discussed in detail. The experimental results in this paper are consistent with the existing literature and show clearly that the performance of a learning algorithm deteriorates across varying degrees of class overlap, whereas class imbalance does not always have an effect. The review emphasises the need for further research towards handling class overlap in imbalanced datasets to effectively improve learning algorithms' performance.
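
    The deterioration with overlap can be reproduced in miniature: scikit-learn's make_classification exposes class_sep (smaller means more overlap) and weights (imbalance), so one can hold imbalance fixed and sweep overlap. The protocol below is a toy stand-in for the paper's full-scale experiment.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Fixed 9:1 imbalance; sweep class_sep from well-separated to overlapping.
for class_sep in (2.0, 1.0, 0.5, 0.1):
    X, y = make_classification(n_samples=2000, weights=[0.9, 0.1],
                               class_sep=class_sep, flip_y=0.0,
                               random_state=0)
    score = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                            scoring="balanced_accuracy", cv=5).mean()
    print(f"class_sep={class_sep}: balanced accuracy = {score:.3f}")
```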