
    Isoelastic Agents and Wealth Updates in Machine Learning Markets

    Recently, prediction markets have shown considerable promise for developing flexible mechanisms for machine learning. In this paper, agents with isoelastic utilities are considered. It is shown that the costs associated with homogeneous markets of agents with isoelastic utilities produce equilibrium prices corresponding to alpha-mixtures, with a particular form of mixing component relating to each agent's wealth. We also demonstrate that wealth accumulation for logarithmic and other isoelastic agents (through payoffs on prediction of training targets) can implement both Bayesian model updates and mixture weight updates by imposing different market payoff structures. An iterative algorithm is given for market equilibrium computation. We demonstrate that inhomogeneous markets of agents with isoelastic utilities outperform state-of-the-art aggregate classifiers such as random forests, as well as single classifiers (neural networks, decision trees) on a number of machine learning benchmarks, and show that isoelastic combination methods are generally better than their logarithmic counterparts. Comment: Appears in Proceedings of the 29th International Conference on Machine Learning (ICML 2012).
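    A minimal sketch of the logarithmic-agent case the abstract describes, assuming `probs` holds each agent's predictive distribution over the outcomes and `wealth` the agents' current holdings (both names are hypothetical; this is not the authors' code). Under a payoff proportional to the probability each agent assigned to the realised target, the wealth update coincides with a Bayesian posterior update of the mixture weights:

```python
import numpy as np

def bayes_wealth_update(wealth, probs, y):
    """One market round for logarithmic (Kelly) agents.

    wealth: (K,) current wealth of the K agents.
    probs:  (K, C) each agent's predictive distribution over C outcomes.
    y:      index of the observed training target.

    Paying each agent in proportion to the probability it assigned to y
    multiplies its wealth by p_k(y) / m(y), where m is the wealth-weighted
    mixture price -- a Bayesian posterior update of the mixture weights.
    """
    w = wealth / wealth.sum()          # current mixture weights
    m_y = w @ probs[:, y]              # market (mixture) price of outcome y
    return wealth * probs[:, y] / m_y  # total wealth is conserved

```

    Iterating this update over training targets reweights the market mixture exactly as Bayesian model averaging reweights its components; total wealth is conserved at each round.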

    Neural Network Configurations Analysis for Multilevel Speech Pattern Recognition System with Mixture of Experts

    This chapter proposes to analyze two neural network configurations for composing the expert set of a multilevel speech pattern recognition system covering 30 commands in Brazilian Portuguese. Multilayer perceptron (MLP) and learning vector quantization (LVQ) networks have their performance verified during the training, validation, and test stages of speech signal recognition, whose patterns are two-dimensional time matrices resulting from mel-cepstral coefficient coding via the discrete cosine transform (DCT). To mitigate the pattern separability problem, the patterns are modified by a nonlinear transformation into a high-dimensional space through a suitable set of Gaussian radial basis functions (GRBF). The performance of the MLP and LVQ experts improves, and the configurations are trained with few examples of each modified pattern. Several combinations of the previously established network topologies and training algorithms were evaluated to determine the structures with the best hit-rate and generalization results.
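    As a rough illustration of the GRBF stage (not the authors' code; how the centres and width are chosen is left open, e.g. sampled from training examples), mapping each DCT pattern through a set of Gaussian radial basis functions raises its dimensionality and tends to improve separability, in the spirit of Cover's theorem:

```python
import numpy as np

def grbf_features(X, centers, sigma):
    """Map patterns into a higher-dimensional space with Gaussian RBFs.

    X:       (N, d) input patterns (e.g. flattened DCT time matrices).
    centers: (M, d) RBF centres with M > d, raising the dimensionality.
    sigma:   common width of the Gaussians.
    """
    # Squared Euclidean distance from every pattern to every centre.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))  # (N, M) transformed patterns
```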

    The Superiority of the Ensemble Classification Methods: A Comprehensive Review

    Modern technologies, characterized by cyber-physical systems and the Internet of Things, expose organizations to big data, which can in turn be processed to derive actionable knowledge. Machine learning techniques have been widely employed in both supervised and unsupervised settings to develop systems capable of making sound decisions in light of past data. To enhance the accuracy of supervised learning algorithms, various classification-based ensemble methods have been developed. Herein, we review the superiority exhibited by ensemble learning algorithms based on research carried out over the years. We then compare and discuss the common classification-based ensemble methods, with an emphasis on the boosting and bagging ensemble-learning models, and conclude by setting out the superiority of ensemble learning models over individual base learners. Keywords: ensemble, supervised learning, ensemble model, AdaBoost, bagging, randomization, boosting, strong learner, weak learner, classifier fusion, classifier selection, classifier combination. DOI: 10.7176/JIEA/9-5-05. Publication date: August 31, 2019.
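    As a quick, self-contained illustration of the two ensemble families the review emphasizes (using scikit-learn on a stock dataset, not any benchmark from the paper): bagging averages full-depth trees fitted on bootstrap resamples, while AdaBoost fits shallow trees sequentially on reweighted examples:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    # A single base learner, for reference.
    "single tree": DecisionTreeClassifier(),
    # Bagging: full-depth trees on bootstrap resamples, then majority vote
    # (mainly reduces variance).
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100),
    # Boosting: shallow trees fitted sequentially, each reweighting the
    # examples its predecessors misclassified (mainly reduces bias).
    "AdaBoost": AdaBoostClassifier(n_estimators=100),
}

for name, clf in models.items():
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```

    On most tabular benchmarks both ensembles beat the single tree, which is the pattern the review documents across the literature.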

    Multiclass Alpha Integration of Scores from Multiple Classifiers

    Alpha integration methods have been used for integrating stochastic models and for fusion in the context of detection (binary classification). Our work proposes separated score integration (SSI), a new method based on alpha integration that performs soft fusion of scores in multiclass classification problems, one of the most common problems in automatic classification. A theoretical derivation is presented to optimize the parameters of this method so as to achieve the least mean squared error (LMSE) or the minimum probability of error (MPE). The proposed alpha integration method was tested on several sets of simulated and real data. The first set of experiments used synthetic data to replicate a problem of automatic detection and classification of three types of ultrasonic pulses buried in noise (four-class classification). The second set of experiments analyzed two databases (one publicly available and one private) of real polysomnographic records from subjects with sleep disorders. These records were automatically staged as wake, rapid eye movement (REM) sleep, and non-REM sleep (three-class classification). Finally, the third set of experiments was performed on a publicly available database of single-channel real electroencephalographic data that included epileptic patients and healthy controls in five conditions (five-class classification). In all cases, alpha integration performed better than the considered single classifiers and classical fusion techniques. This work was supported by the Spanish Administration and European Union under grant TEC2017-84743-P and by Generalitat Valenciana under grant PROMETEO II/2014/032. Safont Armero, G.; Salazar Afanador, A.; Vergara Domínguez, L. (2019). Multiclass alpha integration of scores from multiple classifiers. Neural Computation, 31(4), 806-825. https://doi.org/10.1162/neco_a_01169
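    A minimal sketch of the fusion step, assuming per-classifier scores are strictly positive and that the weights and alpha values have already been fitted (the paper derives LMSE and MPE criteria for that); the function and array names are illustrative only, following Amari's alpha-function:

```python
import numpy as np

def f_alpha(s, alpha):
    """Amari's alpha-function; alpha = 1 is the logarithmic limit."""
    return np.log(s) if np.isclose(alpha, 1.0) else s ** ((1.0 - alpha) / 2.0)

def f_alpha_inv(t, alpha):
    return np.exp(t) if np.isclose(alpha, 1.0) else t ** (2.0 / (1.0 - alpha))

def ssi_fuse(scores, weights, alphas):
    """Separated score integration (sketch): one alpha-integration per class.

    scores:  (K, C) per-classifier, per-class scores in (0, 1].
    weights: (K, C) per-class mixing weights, each column summing to one.
    alphas:  (C,) one alpha per class, the 'separated' part of the scheme.
    """
    fused = np.array([
        f_alpha_inv(weights[:, c] @ f_alpha(scores[:, c], alphas[c]),
                    alphas[c])
        for c in range(scores.shape[1])
    ])
    return fused / fused.sum()  # normalised fused scores
```

    Setting alpha = -1 recovers the weighted arithmetic mean of the scores and alpha approaching 1 the weighted geometric mean, so classical soft-fusion rules are special cases of this family.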