    On Similarities between Inference in Game Theory and Machine Learning

    In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in so doing is to establish an equivalent vocabulary between the two domains so as to facilitate developments at the intersection of both fields, and as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case, and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. We first show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule, and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
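
    A minimal sketch of the smooth (logit) best-response dynamic that standard fictitious play builds on, applied to a stag-hunt-style coordination game. The payoff matrix, temperature, and round count are illustrative assumptions, not values from the paper, and the paper's moderated (Bayesian) variant is not reproduced here.

```python
import numpy as np

# Illustrative symmetric stag-hunt payoffs for the row player:
# action 0 is the payoff-dominant "risky" action, action 1 the risk-dominant one.
PAYOFFS = np.array([[9.0, 0.0],
                    [8.0, 7.0]])
TEMPERATURE = 0.5  # smoothing of the logit best response (assumed value)

def smooth_best_response(opponent_belief):
    """Logit (smoothed) best response to a belief over the opponent's actions."""
    expected = PAYOFFS @ opponent_belief
    logits = expected / TEMPERATURE
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def smooth_fictitious_play(rounds=200, seed=0):
    """Both players track empirical counts of the other's play and respond smoothly."""
    rng = np.random.default_rng(seed)
    counts = np.ones((2, 2))  # pseudo-counts of each player's past actions
    for _ in range(rounds):
        actions = [rng.choice(2, p=smooth_best_response(counts[1 - p] / counts[1 - p].sum()))
                   for p in range(2)]
        for p, a in enumerate(actions):
            counts[p, a] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)  # empirical action frequencies

print(smooth_fictitious_play())
```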

    Self-tuning experience weighted attraction learning in games

    Self-tuning experience weighted attraction (EWA) is a one-parameter theory of learning in games. It addresses the criticism that an earlier model (EWA) has too many parameters by fixing some parameters at plausible values and replacing others with functions of experience so that they no longer need to be estimated. Consequently, it is econometrically simpler than the popular weighted fictitious play and reinforcement learning models. The functions of experience which replace free parameters “self-tune” over time, adjusting in a way that selects a sensible learning rule to capture subjects’ choice dynamics. For instance, the self-tuning EWA model can turn from weighted fictitious play into averaging reinforcement learning as subjects equilibrate and learn to ignore inferior foregone payoffs. The theory was tested on seven different games, and compared to the earlier parametric EWA model and a one-parameter stochastic equilibrium theory (QRE). Self-tuning EWA does as well as EWA in predicting behavior in new games, even though it has fewer parameters, and fits reliably better than the QRE equilibrium benchmark.
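
    A minimal sketch of the parametric EWA attraction update that the self-tuning variant builds on. The self-tuning functions that replace phi and delta are not reproduced here; the fixed parameter values and payoffs are illustrative assumptions.

```python
import numpy as np

def ewa_update(attractions, experience, chosen, payoffs, phi=0.9, delta=0.5, kappa=0.0):
    """One EWA step; payoffs[j] is the payoff strategy j would have earned this round."""
    new_experience = phi * (1.0 - kappa) * experience + 1.0
    new_attractions = np.empty_like(attractions)
    for j, payoff in enumerate(payoffs):
        weight = 1.0 if j == chosen else delta  # foregone payoffs are down-weighted by delta
        new_attractions[j] = (phi * experience * attractions[j] + weight * payoff) / new_experience
    return new_attractions, new_experience

def choice_probabilities(attractions, sensitivity=1.0):
    """Logit response to the current attractions."""
    logits = sensitivity * attractions
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# Usage with made-up payoffs for a 3-strategy game.
A, N = np.zeros(3), 1.0
A, N = ewa_update(A, N, chosen=1, payoffs=np.array([2.0, 5.0, 1.0]))
print(choice_probabilities(A))
```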

    The Role of Corporate Image and Extension Similarity in Service Brand Extensions

    In this article we examine the role of corporate image in extending service brands to new and traditional markets in the telecommunications sector. With regards to corporate image, service brand extensions are primarily associated with innovation-related attributes, such as order of entry (i.e., pioneers versus followers). Increasingly, firms are extending their services to markets beyond those in which they have traditionally been active. The results of an experimental study show that consumers evaluate service extensions by providers with an innovative late-mover image more favourably than service extensions by companies with a pioneer image, in terms of perceived corporate credibility and expected service quality. With regards to these evaluation criteria, it was also found that consumers prefer service brand extensions to related rather than unrelated markets. In addition, we find that the relative distance between service providers with an innovative late-mover image and pioneers is larger in related markets.

    Do repeated game players detect patterns in opponents? Revisiting the Nyarko & Schotter belief elicitation experiment

    The purpose of this paper is to reexamine the seminal belief elicitation experiment by Nyarko and Schotter (2002) through the prism of pattern recognition. Instead of modeling elicited beliefs by a standard weighted fictitious play model, this paper proposes a generalized variant of fictitious play that is able to detect two-period patterns in opponents’ behavior. Evidence is presented that these generalized pattern detection models provide a better fit than standard weighted fictitious play. Individual heterogeneity was discovered, as ten players were classified as employing a two-period pattern-detecting fictitious play model, compared to eleven players who followed a non-pattern-detecting fictitious play model. The average estimates of the memory parameter for these classes were 0.678 and 0.456 respectively, with five individual cases where the memory parameter was equal to zero. This is in sharp contrast to the estimates obtained from standard weighted fictitious play models, which are centred on one, a bias introduced by the absence of a constant in these models. Non-pattern-detecting fictitious play models with memory parameters of zero are equivalent to the win-stay/lose-shift heuristic, and therefore some subjects seem to be employing a simple heuristic alternative to more complex learning models. Simulations of these various belief formation models show that this simple heuristic is quite effective against other more complex fictitious play models.
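
    A minimal sketch contrasting standard weighted fictitious play beliefs with a two-period pattern-detecting variant that conditions on the opponent's previous action. The decay parameter, prior pseudo-counts, and the alternating opponent history are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

def weighted_beliefs(opponent_actions, n_actions, gamma=0.7):
    """Geometrically discounted empirical frequencies over opponent actions."""
    counts = np.ones(n_actions)  # uniform prior pseudo-counts
    for a in opponent_actions:
        counts = gamma * counts
        counts[a] += 1.0
    return counts / counts.sum()

def pattern_beliefs(opponent_actions, n_actions, gamma=0.7):
    """Beliefs conditioned on the opponent's last action (two-period patterns)."""
    counts = np.ones((n_actions, n_actions))
    for prev, curr in zip(opponent_actions, opponent_actions[1:]):
        counts[prev] = gamma * counts[prev]
        counts[prev, curr] += 1.0
    last = opponent_actions[-1]
    return counts[last] / counts[last].sum()

history = [0, 1, 0, 1, 0, 1, 0]  # a strictly alternating opponent
print(weighted_beliefs(history, 2))  # leans toward recent actions; blind to the alternation
print(pattern_beliefs(history, 2))   # conditions on the last action and predicts the switch
```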

    Extending twin support vector machine classifier for multi-category classification problems

    The twin support vector machine classifier (TWSVM), proposed by Jayadeva et al., was designed for binary classification problems. TWSVM not only overcomes the difficulties in handling exemplar imbalance in binary classification problems, but is also four times faster in training a classifier than classical support vector machines. This paper proposes one-versus-all twin support vector machine classifiers (OVA-TWSVM) for multi-category classification problems by utilizing the strengths of TWSVM. OVA-TWSVM extends TWSVM to solve k-category classification problems by constructing k TWSVMs, where in the ith TWSVM we only solve the Quadratic Programming Problems (QPPs) for the ith class and obtain the ith nonparallel hyperplane corresponding to the ith class data. OVA-TWSVM uses the well-known one-versus-all (OVA) approach to construct a corresponding twin support vector machine classifier. We analyze the efficiency of OVA-TWSVM theoretically, and perform experiments to test its efficiency on both synthetic data sets and several benchmark data sets from the UCI machine learning repository. Both the theoretical analysis and experimental results demonstrate that OVA-TWSVM can outperform the traditional OVA-SVMs classifier. Further experimental comparisons with other multiclass classifiers demonstrated that comparable performance could be achieved. This work is supported in part by the grant of the Fundamental Research Funds for the Central Universities of GK201102007 in PR China, by the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2010JM3004), by the Chinese Academy of Sciences under the Innovative Group Overseas Partnership Grant, and by the Natural Science Foundation of China Major International Joint Research Project (No. 71110107026).
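
    A minimal sketch of the one-versus-all structure described above: fit one hyperplane per class that lies close to that class and away from the rest, then label a new point by its nearest hyperplane. For brevity each hyperplane is fitted with a regularized least-squares solve instead of the paper's QPPs, so this illustrates only the OVA decision rule, not the TWSVM training itself; the toy data and the parameter c are assumptions.

```python
import numpy as np

def fit_class_hyperplane(X_class, X_rest, c=1.0):
    """Return (w, b) with X_class near the plane and X_rest pushed toward w.x + b = -1."""
    E = np.hstack([X_class, np.ones((len(X_class), 1))])  # own-class points (with bias column)
    F = np.hstack([X_rest, np.ones((len(X_rest), 1))])    # all other points
    z = -np.linalg.solve(E.T @ E / c + F.T @ F, F.T @ np.ones(len(X_rest)))
    return z[:-1], z[-1]

def fit_ova_twin_svm(X, y, c=1.0):
    """One nonparallel hyperplane per class, one-versus-all."""
    classes = np.unique(y)
    return classes, [fit_class_hyperplane(X[y == k], X[y != k], c) for k in classes]

def predict(X, classes, planes):
    """Assign each point to the class whose hyperplane it is closest to."""
    dists = np.stack([np.abs(X @ w + b) / np.linalg.norm(w) for w, b in planes], axis=1)
    return classes[dists.argmin(axis=1)]

# Toy usage with three synthetic Gaussian blobs (purely illustrative).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(30, 2)) for m in ([0, 0], [3, 0], [0, 3])])
y = np.repeat([0, 1, 2], 30)
classes, planes = fit_ova_twin_svm(X, y)
print((predict(X, classes, planes) == y).mean())  # training accuracy on the toy data
```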