
    Fuzzy Play, Matching Devices and Coordination Failures

    We revisit n-player coordination games with Pareto-ranked Nash equilibria. As a novelty, we introduce fuzzy play and a matching device. By fuzzy play we mean that each player does not choose which pure strategy to play, but instead chooses a nonempty subset of his strategy set that he submits to the matching device. The matching device is a very simple one: it randomly selects a match if possible, and otherwise selects randomly some strategy belonging to the strategy set sent by each player. That is, it does not impose that the best alternatives are matched. Using the concepts of perfect Nash equilibrium and of trembling-hand perfect rationalizability, we show that players coordinate directly on the Pareto-optimal outcome. This implies that they neither use the option of fuzzy play nor make use of the matching device.
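    The mechanism described in the abstract can be sketched in a few lines. This is only an illustrative reading of the device, not the paper's formal definition: the function name and representation (players submit sets of strategy labels) are assumptions. If the submitted sets share a common strategy, the device selects a match; otherwise it samples a strategy independently from each player's set.

    ```python
    import random

    def matching_device(submissions, rng=random):
        """Illustrative sketch of the abstract's matching device.

        submissions: a list of nonempty sets, one per player (fuzzy play).
        If a common strategy exists, the device matches: every player is
        assigned the same shared strategy (chosen at random among the
        common ones). Otherwise it picks a random strategy from each
        player's submitted set, independently.
        """
        common = set.intersection(*submissions)
        if common:
            s = rng.choice(sorted(common))  # a match: everyone plays s
            return [s] * len(submissions)
        # no match possible: sample independently from each submitted set
        return [rng.choice(sorted(sub)) for sub in submissions]

    # Coordinating directly on the Pareto-optimal outcome, as in the
    # paper's result, corresponds to each player submitting a singleton:
    print(matching_device([{"H"}, {"H"}, {"H"}]))  # → ['H', 'H', 'H']
    ```

    Note that the device is deliberately unselective: when no common strategy exists, it does not search for the "best" combination, which is exactly the sense in which it "does not impose that the best alternatives are matched".
    
    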


    Evaluating reinforcement learning for game theory application learning to price airline seats under competition

    Applied Game Theory has been criticised for not being able to model real decision-making situations. A game's sensitive nature and the difficulty of determining the utility payoff functions make it hard for a decision maker to rely upon any game-theoretic results. The models therefore tend to be simple, due to the complexity of solving them (i.e. finding the equilibrium). In recent years, due to increases in computing power, different computer modelling techniques have been applied in Game Theory. A major example is Artificial Intelligence methods, e.g. Genetic Algorithms, Neural Networks and Reinforcement Learning (RL). These techniques allow the modeller to incorporate Game Theory within their models (or simulations) without necessarily knowing the optimal solution. After a warm-up period of repeated episodes is run, the model learns to play the game well (though not necessarily optimally). This is a form of simulation-optimization. The objective of the research is to investigate the practical usage of RL within a simple sequential stochastic airline seat pricing game. Different forms of RL are considered and compared to the optimal policy, which is found using standard dynamic programming techniques. The airline game and RL methods display various interesting phenomena, which are also discussed. For completeness, convergence proofs for the RL algorithms were constructed.
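    The "warm-up period of repeated episodes" idea can be made concrete with a minimal tabular Q-learning loop on a toy single-airline seat-pricing MDP. This sketch is an assumption throughout: the state space, price grid, demand model and hyperparameters are invented for illustration and are not the thesis's competitive game or its specific RL variants.

    ```python
    import random

    # Toy single-leg pricing MDP (illustrative only): state = (periods
    # left, seats left); each period one customer arrives and buys with a
    # price-dependent probability.
    PRICES = [100, 150, 200]
    BUY_PROB = {100: 0.8, 150: 0.5, 200: 0.2}

    def step(t, seats, price, rng):
        """Simulate one selling period; returns (t', seats', reward)."""
        sold = seats > 0 and rng.random() < BUY_PROB[price]
        return t - 1, seats - int(sold), price if sold else 0

    def q_learning(T=10, capacity=5, episodes=20000, alpha=0.1, eps=0.1, seed=0):
        """Tabular Q-learning over repeated episodes (the warm-up period)."""
        rng = random.Random(seed)
        Q = {}  # Q[(t, seats, price)] -> estimated value, default 0.0
        for _ in range(episodes):
            t, seats = T, capacity
            while t > 0:
                # epsilon-greedy choice over the price grid
                if rng.random() < eps:
                    a = rng.choice(PRICES)
                else:
                    a = max(PRICES, key=lambda p: Q.get((t, seats, p), 0.0))
                t2, seats2, r = step(t, seats, a, rng)
                if t2 > 0:
                    best_next = max(Q.get((t2, seats2, p), 0.0) for p in PRICES)
                else:
                    best_next = 0.0  # terminal: no future revenue
                old = Q.get((t, seats, a), 0.0)
                Q[(t, seats, a)] = old + alpha * (r + best_next - old)
                t, seats = t2, seats2
        return Q
    ```

    After training, the greedy policy `argmax_p Q[(t, seats, p)]` plays the game well though not necessarily optimally; in a small MDP like this one the learned values could be checked against the exact policy from backward-induction dynamic programming, mirroring the comparison described in the abstract.
    
    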