
    Combination of linear classifiers using score function -- analysis of possible combination strategies

    In this work, we address the issue of combining linear classifiers using their score functions. The value of the score function depends on the distance from the decision boundary. Two score functions were tested and four different combination strategies were investigated. In the experimental study, the proposed approach was applied to a heterogeneous ensemble and compared to two reference methods: majority voting and model averaging. The comparison was made in terms of seven different quality criteria. The results show that the strategies based on the simple average and the trimmed average are the best combination strategies for the geometrical combination.
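The combination scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `LinearStub` is a hypothetical stand-in for a trained linear classifier (mirroring sklearn's `coef_`/`intercept_` layout), the score function is taken to be the signed distance from the decision boundary, and only the two strategies the abstract reports as best (simple average and trimmed average) are shown.

```python
import numpy as np

class LinearStub:
    """Hypothetical stand-in for a trained binary linear classifier
    (mirrors sklearn's coef_ / intercept_ attributes)."""
    def __init__(self, w, b):
        self.coef_ = np.asarray([w], dtype=float)
        self.intercept_ = np.asarray([b], dtype=float)

def signed_distance_scores(clfs, X):
    """Score function: signed distance of each sample from each
    classifier's decision boundary, (w.x + b) / ||w||.
    Returns an array of shape (n_samples, n_classifiers)."""
    return np.stack(
        [(X @ c.coef_.ravel() + c.intercept_[0]) / np.linalg.norm(c.coef_)
         for c in clfs],
        axis=1,
    )

def combine(scores, strategy="mean", trim=1):
    """Combine per-classifier scores into one decision score using a
    simple average or a trimmed average across classifiers."""
    if strategy == "mean":
        return scores.mean(axis=1)
    if strategy == "trimmed_mean":
        s = np.sort(scores, axis=1)       # drop `trim` extremes per side
        return s[:, trim:s.shape[1] - trim].mean(axis=1)
    raise ValueError(f"unknown strategy: {strategy}")

# Toy heterogeneous ensemble of three linear decision boundaries.
ensemble = [LinearStub([1.0, 0.0], 0.0),
            LinearStub([0.0, 1.0], 0.0),
            LinearStub([1.0, 1.0], 0.0)]
X = np.array([[1.0, 1.0]])
scores = signed_distance_scores(ensemble, X)
label = 1 if combine(scores, "mean")[0] > 0 else 0
```

The combined score keeps the geometric meaning of the individual scores: a sample far from every boundary on the positive side yields a large positive combined score.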


    Are coalitions needed when classifiers make decisions?

    Cooperation and coalition formation are usually the preferred behaviour when a conflict situation occurs in real life. The question arises: should this approach also be used when an ensemble of classifiers makes decisions? In this paper, different approaches to classification based on dispersed knowledge are analysed and compared. The first group of approaches does not generate coalitions: each local classifier generates a classification vector based on its local table, and then one of the most popular fusion methods is applied (the sum method or the maximum method). In addition, the approach in which the final classification is made by the strongest classifier is analysed. The second group of approaches uses a coalition-creating method; the final classification is generated from the coalitions' predictions using the two fusion methods mentioned above. In addition, the approach in which the final classification is made by the strongest coalition is analysed. For both groups of approaches, with and without coalitions, methods based on the maximum correlation and methods based on covering rules are considered. The main conclusion of this article is as follows: when classifiers generate fair and rational classification vectors, it is better to use a coalition-based approach together with a fusion method that collectively takes into account all vectors generated by the classifiers.
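The two fusion methods the abstract names, the sum method and the maximum method, plus a simple coalition-level variant, can be sketched as below. This is a hedged illustration: the coalition step here just averages the classification vectors inside each coalition before applying the sum rule, which is one plausible reading, not necessarily the coalition-creating method used in the paper.

```python
import numpy as np

def sum_rule(vectors):
    """Sum fusion: add the classification vectors element-wise and
    pick the class with the largest total support."""
    return int(np.argmax(np.sum(vectors, axis=0)))

def max_rule(vectors):
    """Maximum fusion: take, for each class, the highest support any
    classifier gave it, then pick the winning class."""
    return int(np.argmax(np.max(vectors, axis=0)))

def coalition_sum(vectors, coalitions):
    """Coalition-based variant (simplified assumption, not the paper's
    exact method): average the vectors inside each coalition, then
    apply the sum rule to the coalition-level vectors."""
    coalition_vecs = [np.mean([vectors[i] for i in c], axis=0)
                      for c in coalitions]
    return sum_rule(coalition_vecs)

# Classification vectors of three local classifiers over three classes.
votes = np.array([[0.6, 0.3, 0.1],
                  [0.5, 0.4, 0.1],
                  [0.1, 0.2, 0.7]])
```

Note how the three rules can disagree on the same vectors: the sum rule rewards broad moderate support, the maximum rule rewards a single confident classifier, and grouping the first two classifiers into a coalition changes the balance again.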

    Neural Network Used for the Fusion of Predictions Obtained by the K-Nearest Neighbors Algorithm Based on Independent Data Sources

    The article concerns the problem of classification based on independent data sets (local decision tables). The aim of the paper is to propose a classification model for dispersed data using a modified k-nearest neighbors algorithm and a neural network. A neural network, more specifically a multilayer perceptron, is used to combine the prediction results obtained from the local tables. The prediction results are represented at the measurement level and generated using a modified k-nearest neighbors algorithm. The task of the neural network is to combine these results and provide a common prediction. Various structures of the neural network (different numbers of neurons in the hidden layer) are studied, and the results are compared with those generated by other fusion methods, such as majority voting, the Borda count method, the sum rule, a method based on decision templates, and a method based on the theory of evidence. Based on the obtained results, it was found that the neural network always generates unambiguous decisions, which is a great advantage, as most of the other fusion methods generate ties. Moreover, when only unambiguous results are considered, the neural network gives much better results than the other fusion methods. If ambiguity is allowed, some fusion methods are slightly better, but only because they may generate several candidate decisions for a test object.
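The pipeline described above, kNN support vectors from each local table fed into a multilayer perceptron, can be sketched as follows. Several simplifications are assumed and labelled in the code: plain Euclidean kNN stands in for the paper's modified kNN, and the perceptron's weights are hand-picked placeholders rather than trained, just to show the data flow from measurement-level supports to a single fused decision.

```python
import numpy as np

def knn_support(X_local, y_local, x, k=3, n_classes=2):
    """Measurement-level prediction from one local decision table: the
    fraction of the k nearest neighbours in each class. (Plain
    Euclidean kNN; the paper uses a modified kNN algorithm.)"""
    d = np.linalg.norm(X_local - x, axis=1)
    nearest = y_local[np.argsort(d)[:k]]
    return np.bincount(nearest, minlength=n_classes) / k

def mlp_fuse(supports, W1, b1, W2, b2):
    """Fusion stage: a one-hidden-layer perceptron maps the
    concatenated local support vectors to one class decision.
    Weights are placeholders here; in the paper the network is trained."""
    h = np.maximum(0.0, np.concatenate(supports) @ W1 + b1)  # ReLU hidden layer
    logits = h @ W2 + b2
    return int(np.argmax(logits))                             # always a single class

# Two independent local tables with two features and two classes.
X1 = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
y1 = np.array([0, 0, 1])
X2 = np.array([[5.0, 5.0], [0.0, 0.2], [0.2, 0.0]])
y2 = np.array([1, 0, 0])

x = np.array([0.1, 0.1])                    # test object
s1 = knn_support(X1, y1, x)                 # support vector from table 1
s2 = knn_support(X2, y2, x)                 # support vector from table 2

# Placeholder weights: hidden layer passes inputs through, output layer
# sums per-class support, so this particular network mimics the sum rule.
W1, b1 = np.eye(4), np.zeros(4)
W2 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b2 = np.zeros(2)
pred = mlp_fuse([s1, s2], W1, b1, W2, b2)
```

Because `argmax` over the output logits returns exactly one index, the network cannot produce a tie, which matches the abstract's observation that the neural fusion always yields unambiguous decisions.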