66 research outputs found

    Analysis of the Correlation Between Majority Voting Error and the Diversity Measures in Multiple Classifier Systems

    Combining classifiers by majority voting (MV) has recently emerged as an effective way of improving the performance of individual classifiers. However, the benefit of applying MV is not always observed and depends on the distribution of classification outputs in a multiple classifier system (MCS). Evaluating the MV error (MVE) for all combinations of classifiers in an MCS is a process of exponential complexity. This complexity can be reduced provided an explicit relationship is found between the MVE and some less complex function operating on classifier outputs. Diversity measures operating on binary classification outputs (correct/incorrect) are studied in this paper as potential candidates for such functions. Their correlation with the MVE, interpreted as the quality of a measure, is thoroughly investigated using artificial and real-world datasets. Moreover, we propose a new diversity measure that efficiently exploits information from the whole MCS rather than only the part to which it is applied.
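    As an illustration of the quantities this abstract discusses, the sketch below (a hedged example, not the paper's code; the function names and the toy oracle matrix are invented here) computes the majority-voting error and one classic pairwise diversity measure, the disagreement measure, from a binary correct/incorrect output matrix:

```python
import numpy as np

# Illustrative sketch only: MVE and pairwise disagreement computed from a
# binary "oracle" matrix O, where O[i, j] = 1 iff classifier i is correct
# on sample j.

def majority_voting_error(O):
    """Fraction of samples on which the majority of classifiers is wrong."""
    votes = O.sum(axis=0)              # number of correct votes per sample
    n = O.shape[0]
    return np.mean(votes <= n / 2)     # ties counted as errors

def mean_pairwise_disagreement(O):
    """Average fraction of samples on which a pair of classifiers differ."""
    n = O.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return np.mean([np.mean(O[i] != O[j]) for i, j in pairs])

# 3 classifiers, 5 samples (toy data).
O = np.array([[1, 1, 0, 1, 0],
              [1, 0, 1, 1, 1],
              [0, 1, 1, 1, 0]])
print(majority_voting_error(O))        # 0.2: only the last sample is lost
print(mean_pairwise_disagreement(O))
```

    Measuring the correlation between such a diversity value and the MVE across many classifier subsets is the kind of analysis the paper performs.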

    Integrating Specialized Classifiers Based on Continuous Time Markov Chain

    Specialized classifiers, namely those dedicated to a subset of classes, are often adopted in real-world recognition systems. However, integrating such classifiers is nontrivial. Existing methods, e.g. weighted average, usually implicitly assume that all constituents of an ensemble cover the same set of classes. Such methods can produce misleading predictions when used to combine specialized classifiers. This work explores a novel approach. Instead of combining predictions from individual classifiers directly, it first decomposes the predictions into sets of pairwise preferences, treats them as transition channels between classes, constructs a continuous-time Markov chain thereon, and uses the equilibrium distribution of this chain as the final prediction. This approach makes it possible to form a coherent picture over all specialized predictions. On large public datasets, the proposed method obtains considerable improvement over mainstream ensemble methods, especially when classifier coverage is highly unbalanced. Comment: Published at IJCAI-17, typo fixed
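    The fusion step described above can be sketched as follows (an illustrative reconstruction under stated assumptions, not the authors' released implementation; the rate matrix and the least-squares solver are my own choices): pairwise preference scores serve as off-diagonal transition rates of a CTMC generator Q, and the equilibrium distribution is the solution of pi @ Q = 0 with sum(pi) = 1.

```python
import numpy as np

# Hedged sketch of the general idea: rates[i, j] is the pairwise-preference
# "evidence" for moving from class i to class j.

def ctmc_equilibrium(rates):
    """Stationary distribution pi with pi @ Q = 0 and pi.sum() == 1."""
    Q = rates.astype(float).copy()
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))   # generator rows sum to zero
    k = Q.shape[0]
    # Solve pi @ Q = 0 subject to sum(pi) = 1 via least squares.
    A = np.vstack([Q.T, np.ones(k)])
    b = np.zeros(k + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-class preference rates (invented numbers).
rates = np.array([[0.0, 1.0, 0.5],
                  [0.2, 0.0, 0.3],
                  [0.1, 0.4, 0.0]])
pi = ctmc_equilibrium(rates)
print(pi.round(3))   # class 1 receives the most rate mass, so pi[1] is largest
```

    The equilibrium distribution aggregates all pairwise evidence at once, which is what lets the method remain coherent when individual classifiers cover different class subsets.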

    Combination of Acoustic Classifiers Based on Dempster-Shafer Theory of Evidence


    Aquarium Water Quality Monitoring System Using the Learning Vector Quantization Method

    The aquarium hobby is a popular pastime that can be supported by technology. Artificial intelligence and Internet of Things (IoT) technology can simplify the everyday task of monitoring devices or environments; one environment that can be monitored with IoT technology is aquarium water quality. Aquarium water quality can be tracked through the parameters pH, temperature, TDS, and turbidity. Manual classification schemes exist, such as the Pollution Index (IP), the Water Quality Index (WQI), and STORET, but they incur considerable time and cost. Manual classification or inference becomes inefficient when newly added data use varied parameters. Manual classification can be replaced by an automatic classification method based on a neural network such as Learning Vector Quantization (LVQ). The results show that the hardware successfully acquires data, stores it in a database, and displays it on a website together with the classification results. Hardware testing confirmed that the monitoring system transmits data between devices and communicates with the website through an API, and sensor testing showed an average reading error below 5%. Black-box, responsiveness, and usability testing met the study's expectations, and notification testing confirmed that notifications sent from the website are functional. The Learning Vector Quantization implementation achieved a classification accuracy of 94%.
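    A minimal LVQ1 sketch (illustrative only; the toy data, learning rate, epoch count, and prototype initialization are assumptions, not the study's configuration) shows the prototype-update rule behind this kind of classifier: the nearest prototype is pulled toward a sample of its own class and pushed away otherwise.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1: move the winning prototype toward (or away from) each sample."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(P - x, axis=1))  # nearest prototype
            step = lr * (x - P[i])
            P[i] += step if proto_labels[i] == label else -step
    return P

def lvq1_predict(X, P, proto_labels):
    """Label of the nearest prototype."""
    return np.array([proto_labels[np.argmin(np.linalg.norm(P - x, axis=1))]
                     for x in X])

# Toy two-feature data (loosely: pH deviation and turbidity, both invented).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]])
y = np.array([0, 0, 1, 1])                  # 0 = good, 1 = poor (assumed)
proto_labels = np.array([0, 1])
P = lvq1_train(X, y, np.array([[0.0, 0.0], [1.0, 1.0]]), proto_labels)
print(lvq1_predict(X, P, proto_labels))     # → [0 0 1 1]
```

    In the deployed system the feature vector would hold the four sensor readings (pH, temperature, TDS, turbidity) instead of this two-feature toy.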

    Linear and Order Statistics Combiners for Pattern Classification

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the "added" error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum, and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results. Comment: 31 pages
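    The factor-of-N variance reduction stated in this abstract is easy to check numerically. The following toy simulation (my own sketch, not the chapter's experiments) models each classifier's boundary estimate as the true boundary plus zero-mean, uncorrelated noise and compares the variance of a single estimate with that of the N-way average:

```python
import numpy as np

# Numerical check: averaging N unbiased, uncorrelated estimates shrinks the
# variance of the combined estimate by a factor of N.
rng = np.random.default_rng(0)
N, trials = 10, 200_000

# Each row is one trial; each column is one classifier's boundary noise.
noise = rng.normal(0.0, 1.0, size=(trials, N))
single_var = noise[:, 0].var()          # variance of one classifier
averaged_var = noise.mean(axis=1).var() # variance of the N-way average
print(single_var / averaged_var)        # ≈ N
```

    With correlated noise the reduction would be smaller, which is exactly the regime the chapter's expressions for biased or correlated combiners quantify.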