
    Neural Network Used for the Fusion of Predictions Obtained by the K-Nearest Neighbors Algorithm Based on Independent Data Sources

    The article concerns the problem of classification based on independent data sets (local decision tables). The aim of the paper is to propose a classification model for dispersed data that uses a modified k-nearest neighbors algorithm and a neural network. The neural network, more specifically a multilayer perceptron, is used to combine the prediction results obtained from the local tables. These prediction results are represented at the measurement level and are generated with a modified k-nearest neighbors algorithm; the task of the neural network is to combine them into a common prediction. The article studies various neural network structures (different numbers of neurons in the hidden layer) and compares the results with those produced by other fusion methods: majority voting, the Borda count method, the sum rule, a method based on decision templates, and a method based on the theory of evidence. Based on the obtained results, it was found that the neural network always generates unambiguous decisions, which is a great advantage, as most of the other fusion methods generate ties. Moreover, when only unambiguous results are considered, the neural network gives much better results than the other fusion methods. If ambiguity is allowed, some fusion methods are slightly better, but this is because they can return several candidate decisions for a test object.
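    The following is a minimal sketch of the fusion idea described above, not the authors' exact model: it assumes three hypothetical local tables built from disjoint feature blocks of one synthetic data set, uses scikit-learn's KNeighborsClassifier to produce measurement-level outputs (class probabilities) per table, and trains an MLPClassifier with one hidden layer to fuse them. All names, data splits, and hyperparameters are illustrative assumptions.

```python
# Sketch only: local tables are simulated as disjoint feature blocks; the real
# dispersed setting in the article uses independent local decision tables.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three hypothetical "local tables", each holding 4 of the 12 features.
feature_blocks = [slice(0, 4), slice(4, 8), slice(8, 12)]
local_models = [KNeighborsClassifier(n_neighbors=5).fit(X_train[:, b], y_train)
                for b in feature_blocks]

def measurement_level(models, X):
    # Concatenate per-table class-probability vectors into one fusion input.
    return np.hstack([m.predict_proba(X[:, b])
                      for m, b in zip(models, feature_blocks)])

# A multilayer perceptron fuses the local predictions into a common decision;
# the hidden-layer size is an assumption, not a value from the article.
fusion_net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
fusion_net.fit(measurement_level(local_models, X_train), y_train)

print("fused accuracy:", fusion_net.score(measurement_level(local_models, X_test), y_test))
```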

    Integration and Selection of Linear SVM Classifiers in Geometric Space

    Integration, or fusion, of base classifiers is the final stage of creating a multiple classifier system. Known methods for this step use the base classifier outputs, which are either class labels or confidence values (predicted probabilities) for each class label. In this paper we propose an integration process that takes place in the geometric space, meaning that the fusion of base classifiers is performed using their decision boundaries. To obtain one decision boundary from the boundaries defined by the base classifiers, the median or weighted average method is used. In addition, the proposed algorithm divides the entire feature space into disjoint regions of competence and carries out a selection of base classifiers. The aim of the experiments was to compare the proposed algorithms with the majority voting method and to assess which of the analyzed approaches to the integration of base classifiers creates a more effective ensemble.
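    Below is a rough sketch, under simplifying assumptions, of fusing linear SVM decision boundaries in geometric space: each base LinearSVC trained on a bootstrap sample defines a hyperplane (w, b), the fused boundary is taken as the element-wise median of the normalized hyperplane parameters, and the result is compared against plain majority voting. The division into regions of competence and the selection step from the paper are omitted; all parameters and the data set are illustrative.

```python
# Sketch only: median-of-hyperplanes fusion versus majority voting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=5, n_classes=2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

rng = np.random.default_rng(1)
planes, base_models = [], []
for _ in range(7):                          # 7 base linear SVMs on bootstrap samples
    idx = rng.integers(0, len(X_train), len(X_train))
    clf = LinearSVC(max_iter=5000).fit(X_train[idx], y_train[idx])
    base_models.append(clf)
    wb = np.r_[clf.coef_.ravel(), clf.intercept_]
    planes.append(wb / np.linalg.norm(wb[:-1]))   # normalize so parameters are comparable

w_b = np.median(planes, axis=0)                   # fused boundary in geometric space
geom_pred = (X_test @ w_b[:-1] + w_b[-1] > 0).astype(int)

votes = np.mean([m.predict(X_test) for m in base_models], axis=0)
vote_pred = (votes > 0.5).astype(int)             # majority voting baseline

print("geometric fusion accuracy:", np.mean(geom_pred == y_test))
print("majority voting accuracy :", np.mean(vote_pred == y_test))
```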