    Run-Time Performance Analysis of the Mixture of Experts Model

    The Mixture of Experts (ME) model is one of the most popular ensemble methods used in pattern recognition and machine learning. Despite many studies on the theory and application of the ME model, to our knowledge its training, testing, and evaluation costs have not yet been investigated. After analyzing the ME model in terms of the number of required floating-point operations, this paper makes an experimental comparison between the ME model and the recently proposed Mixture of Random Prototype Experts. Experiments have been performed on selected datasets from the UCI machine learning repository. The experimental results confirm the expected behavior of the two ME models, while highlighting that the latter performs better in terms of both accuracy and run-time performance.
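
    For readers unfamiliar with the ME architecture referenced above, the sketch below illustrates where the floating-point operations of a single forward pass come from: a gating network scores the experts, and the expert outputs are combined according to the softmax-normalized gate. This is a minimal illustration assuming linear experts and a linear softmax gate; all names and shapes are hypothetical and do not reflect the paper's implementation or the Mixture of Random Prototype Experts variant.

# Hypothetical sketch of a standard Mixture of Experts forward pass,
# with rough per-step FLOP estimates in the comments. Not the paper's code.
import numpy as np

def moe_forward(x, expert_weights, gate_weights):
    """x: (d,) input; expert_weights: list of (k, d) matrices;
    gate_weights: (n_experts, d) gating matrix."""
    # Gating network: linear scores followed by a softmax.
    scores = gate_weights @ x                   # ~2 * n_experts * d FLOPs
    g = np.exp(scores - scores.max())
    g /= g.sum()
    # Each expert produces its own output; stack them row-wise.
    outputs = np.stack([W @ x for W in expert_weights])  # ~2 * n_experts * k * d FLOPs
    # Final prediction: gate-weighted combination of expert outputs.
    return g @ outputs                          # ~2 * n_experts * k FLOPs

# Example usage with 4 experts, a 10-dimensional input, and a 3-dimensional output.
rng = np.random.default_rng(0)
d, k, n_experts = 10, 3, 4
x = rng.normal(size=d)
experts = [rng.normal(size=(k, d)) for _ in range(n_experts)]
gate = rng.normal(size=(n_experts, d))
print(moe_forward(x, experts, gate).shape)  # (3,)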