
    On the existence of Bayesian Cournot equilibrium

    We show that even in very simple oligopolies with differential information, a (Bayesian) Cournot equilibrium in pure strategies may fail to exist or to be unique. However, we find sufficient conditions for existence, and for uniqueness, of Cournot equilibrium in a certain class of industries. More general results arise when negative prices are allowed.
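    To fix ideas, the complete-information benchmark behind this abstract can be sketched numerically. The sketch below is illustrative and not from the paper: it assumes linear inverse demand P = a - b(q1 + q2) and a common constant marginal cost c, under which iterated best responses converge to the symmetric Cournot equilibrium q* = (a - c)/(3b).

    ```python
    # Illustrative sketch (assumed parameters, not from the paper):
    # Cournot duopoly with linear inverse demand P = a - b*(q1 + q2)
    # and constant marginal cost c. Firm i's best response to q_j is
    # q_i = (a - c - b*q_j) / (2*b); iterating this map is a contraction
    # and converges to the symmetric equilibrium q* = (a - c) / (3*b).

    def best_response(q_other, a=10.0, b=1.0, c=1.0):
        """Profit-maximizing quantity against the rival's quantity."""
        return max((a - c - b * q_other) / (2 * b), 0.0)

    def cournot_equilibrium(a=10.0, b=1.0, c=1.0, iters=200):
        q1 = q2 = 0.0
        for _ in range(iters):
            # simultaneous best-response update (both use last period's play)
            q1, q2 = best_response(q2, a, b, c), best_response(q1, a, b, c)
        return q1, q2

    q1, q2 = cournot_equilibrium()
    print(q1, q2)  # both approach (a - c) / (3*b) = 3.0
    ```

    The paper's point is that once firms hold differential information, even simple environments like this one can lack a pure-strategy Bayesian equilibrium.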

    The value of public information in a Cournot duopoly

    We derive alternative sufficient conditions for the value of public information to be either positive or negative in a Cournot duopoly where firms' technology exhibits constant returns to scale.

    The role of observability in futures markets

    Allaz (1992) and Allaz and Vila (1993) show that in an oligopolistic industry the introduction of a futures market that operates prior to the spot market induces more competitive outcomes. Hughes and Kao (1997) show that this result presumes that firms' futures positions are perfectly observed, and that when firms' positions are not observed the Cournot outcome arises. We study an alternative formulation of observability, in which the behavior of participants in the futures market is explicitly analyzed, and show that this approach leads to different results: imperfect observability induces more competitive outcomes than Allaz and Vila's model.

    On Similarities between Inference in Game Theory and Machine Learning

    In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in so doing is to establish an equivalent vocabulary between the two domains so as to facilitate developments at the intersection of both fields, and as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case, and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. We first show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule, and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
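    The baseline the paper improves on, standard fictitious play, can be sketched in a few lines. The sketch below is illustrative (the payoff matrix and uniform prior counts are assumptions, not taken from the paper): each player best-responds to the empirical frequency of the opponent's past actions, and in a 2x2 coordination game the process settles on a pure coordination equilibrium.

    ```python
    # Illustrative sketch of standard fictitious play (assumed 2x2
    # coordination game; payoffs and priors are not from the paper).
    # Each player tracks empirical counts of the opponent's actions and
    # best-responds to the resulting empirical mixture each round.

    PAYOFF = [[2.0, 0.0],   # own payoff by (own action, opponent action);
              [0.0, 1.0]]   # coordinating on action 0 is payoff-dominant

    def best_response(opp_counts):
        """Best reply to the empirical distribution of the opponent's play."""
        total = opp_counts[0] + opp_counts[1]
        p0 = opp_counts[0] / total
        # expected payoff of each own action against the empirical mixture
        u = [PAYOFF[a][0] * p0 + PAYOFF[a][1] * (1 - p0) for a in (0, 1)]
        return 0 if u[0] >= u[1] else 1

    def fictitious_play(rounds=100):
        counts = [[1, 1], [1, 1]]  # each player's counts of the OTHER's actions
        a = b = 0
        for _ in range(rounds):
            a = best_response(counts[0])  # player 1 replies to player 2's history
            b = best_response(counts[1])  # player 2 replies to player 1's history
            counts[0][b] += 1
            counts[1][a] += 1
        return a, b

    print(fictitious_play())  # -> (0, 0): play locks onto a coordination equilibrium
    ```

    The paper's moderated variant replaces the point estimate `p0` with an integral over a posterior distribution of opponent strategies; the analogous change on the variational-inference side turns the Cournot-style mean field update into fictitious variational play.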