Large-scale games in large-scale systems
Many real-world problems modeled by stochastic games have huge state and/or
action spaces, leading to the well-known curse of dimensionality. The
complexity of analyzing such large-scale systems is dramatically reduced by
exploiting mean field limits and dynamical-system viewpoints. Under regularity
assumptions and specific time-scaling techniques, the evolution of the mean
field limit can be expressed as a deterministic or stochastic equation or
inclusion (difference or differential). In this paper, we survey recent
advances on large-scale games in large-scale systems, focusing in particular on
population games, stochastic population games, and mean field stochastic games.
Considering long-term payoffs, we characterize the mean field systems using
Bellman and Kolmogorov forward equations.
Comment: 30 pages. Notes for the tutorial course on mean field stochastic games, March 201
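The time-scaling idea above can be illustrated with a minimal sketch (an illustrative assumption, not taken from the paper): a two-action population in which, at each event, one uniformly chosen agent resamples its strategy from a logit (smooth best) response to the current empirical state. With events spaced 1/N apart in rescaled time, the empirical fraction of one action tracks a deterministic ODE. The game, the intensity β, and all parameters below are hypothetical.

```python
import math
import random

def smooth_best_response(x, beta=4.0):
    """Logit choice probability of action A when a fraction x of the
    population currently plays A (coordination game: A's payoff
    advantage is proportional to 2x - 1)."""
    return 1.0 / (1.0 + math.exp(-beta * (2.0 * x - 1.0)))

def simulate_finite_population(n=20_000, t_end=4.0, x0=0.7, seed=42):
    """N-agent stochastic dynamics: at each event one uniformly chosen
    agent resamples its strategy from the smooth best response to the
    empirical state.  Events are spaced 1/N apart in rescaled time,
    the usual scaling under which a mean field limit appears."""
    rng = random.Random(seed)
    k = int(x0 * n)                      # current number of A-players
    for _ in range(int(t_end * n)):      # t_end units of rescaled time
        x = k / n
        was_a = rng.random() < x         # strategy of the revising agent
        now_a = rng.random() < smooth_best_response(x)
        k += int(now_a) - int(was_a)
    return k / n

def mean_field_ode(t_end=4.0, x0=0.7, dt=1e-3):
    """Deterministic mean field limit dx/dt = sigma(beta(2x - 1)) - x,
    integrated by forward Euler."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (smooth_best_response(x) - x)
    return x

# The two endpoints agree up to O(1/sqrt(N)) fluctuations.
print(f"stochastic N-agent endpoint: {simulate_finite_population():.3f}")
print(f"mean field ODE endpoint:     {mean_field_ode():.3f}")
```

The stochastic drift per event is exactly the ODE right-hand side divided by N, which is why the 1/N event spacing is the scaling that yields a deterministic limit.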
On Similarities between Inference in Game Theory and Machine Learning
In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in so doing is to establish an equivalent vocabulary between the two domains so as to facilitate developments at the intersection of both fields, and as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case, and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. We first show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule, and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
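As a concrete sketch of the baseline that moderation is meant to improve on (an illustrative reconstruction, not the paper's algorithm; the stag-hunt payoffs, logit intensity β, and prior counts are all assumptions): standard smooth fictitious play in a coordination game, where each player plays a logit best response to the empirical frequency of the opponent's past actions, is pulled toward the risk-dominant equilibrium rather than the payoff-dominant one.

```python
import math
import random

# Symmetric stag hunt: action 0 = stag (payoff-dominant), 1 = hare.
# PAYOFF[my_action][opp_action]; hare is risk-dominant (4 > 2.5 at a
# uniform belief).
PAYOFF = [[5.0, 0.0],   # stag vs (stag, hare)
          [4.0, 4.0]]   # hare vs (stag, hare)

def logit_response(p_opp_stag, beta=2.0):
    """Smooth best response: probability of playing stag, given the
    belief that the opponent plays stag with probability p_opp_stag."""
    u_stag = PAYOFF[0][0] * p_opp_stag + PAYOFF[0][1] * (1 - p_opp_stag)
    u_hare = PAYOFF[1][0] * p_opp_stag + PAYOFF[1][1] * (1 - p_opp_stag)
    return 1.0 / (1.0 + math.exp(-beta * (u_stag - u_hare)))

def smooth_fictitious_play(rounds=5000, beta=2.0, seed=0):
    """Each player tracks the empirical frequency of the opponent's
    past actions (with one pseudo-observation of each action as a
    prior) and samples a logit best response to that point belief."""
    rng = random.Random(seed)
    stag = [1.0, 1.0]     # pseudo-counts of stag plays, per player
    total = [2.0, 2.0]
    for _ in range(rounds):
        probs = [logit_response(stag[1 - i] / total[1 - i], beta)
                 for i in (0, 1)]
        for i in (0, 1):
            played_stag = rng.random() < probs[i]
            stag[i] += 1.0 if played_stag else 0.0
            total[i] += 1.0
    return [stag[i] / total[i] for i in (0, 1)]

# From a uniform initial belief, stag's expected payoff (2.5) is below
# hare's (4), so play collapses onto the risk-dominant hare equilibrium.
freqs = smooth_fictitious_play()
print(f"empirical stag frequencies: {freqs[0]:.3f}, {freqs[1]:.3f}")
```

The point belief `stag/total` is the "simple empirical average" the abstract refers to; the moderated variant would instead integrate the response over the full posterior on the opponent's strategy.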
Mean-Field-Type Games in Engineering
A mean-field-type game is a game in which the instantaneous payoffs and/or
the state dynamics functions involve not only the state and the action profile
but also the joint distributions of state-action pairs. This article presents
some engineering applications of mean-field-type games including road traffic
networks, multi-level building evacuation, millimeter wave wireless
communications, distributed power networks, virus spread over networks, virtual
machine resource management in cloud networks, synchronization of oscillators,
energy-efficient buildings, online meetings, and mobile crowdsensing.
Comment: 84 pages, 24 figures, 183 references. To appear in AIMS 201
A survey of random processes with reinforcement
The models surveyed include generalized P\'{o}lya urns, reinforced random
walks, interacting urn models, and continuous reinforced processes. Emphasis is
on methods and results, with sketches provided of some proofs. Applications are
discussed in statistics, biology, economics, and a number of other areas.
Comment: Published at http://dx.doi.org/10.1214/07-PS094 in the Probability Surveys (http://www.i-journals.org/ps/) by the Institute of Mathematical Statistics (http://www.imstat.org)
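The simplest model in this family, the classical Pólya urn, can be sketched in a few lines (parameters below are the textbook ones, not specific to the survey): draw a ball uniformly at random, return it with `add` extra balls of the same colour. The fraction of red balls converges almost surely, but the limit is itself random; for the (1, 1, add = 1) urn it is uniform on [0, 1].

```python
import random

def polya_urn(steps=10_000, red=1, blue=1, add=1, seed=None):
    """Generalized Pólya urn: draw a ball uniformly at random, return
    it together with `add` extra balls of the same colour.  Returns
    the final fraction of red balls."""
    rng = random.Random(seed)
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += add
        else:
            blue += add
    return red / (red + blue)

# Independent runs converge to very different limits: the reinforcement
# makes early draws self-amplifying, so the limiting fraction is random.
fractions = sorted(polya_urn(seed=s) for s in range(200))
print(f"min {fractions[0]:.2f}  "
      f"median {fractions[100]:.2f}  "
      f"max {fractions[-1]:.2f}")
```

Running this shows the spread of limiting fractions across runs, the signature behaviour that distinguishes reinforced processes from ordinary ergodic ones.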
Design of switching damping controllers for power systems based on a Markov jump parameter system approach
The application of a new technique, based on the theory of Markov Jump Parameter Systems (MJPS), to the problem of designing controllers to damp power system oscillations is presented in this paper. This problem is very difficult to address, mainly because these controllers are required to have a decentralized output-feedback structure. The technique relies on statistical knowledge of the system operating conditions to provide less conservative controllers than other modern robust control approaches. The influence of the system interconnections on its modes of oscillation is reduced by means of a proper control design formulation involving Integral Quadratic Constraints. The discrete nature of some typical events in power systems (such as line tripping or load switching) is adequately modeled by the MJPS approach, thereby allowing the controller to withstand such abrupt changes in the operating conditions of the system, as shown in the results. © 2006 IEEE
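The modeling idea behind the MJPS approach can be sketched in miniature (the numbers below are illustrative assumptions, not the paper's design): a scalar linear system whose dynamics matrix jumps with a Markov chain over operating modes, e.g. "all lines in service" versus "line tripped". Stability then depends jointly on the per-mode dynamics and the transition probabilities, not on any single mode alone.

```python
import random

# Scalar Markov jump linear system: x_{k+1} = A[m_k] * x_k, where the
# mode m_k follows a Markov chain over operating conditions.
A = [0.5, 1.2]        # mode 0: stable dynamics, mode 1: unstable
P = [[0.95, 0.05],    # P[i][j] = probability of jumping from mode i to j
     [0.80, 0.20]]

def simulate_mjls(steps=200, x0=1.0, seed=0):
    """Simulate the jump system.  Although mode 1 is unstable on its
    own, the chain spends most of its time in the stable mode 0, so
    the state still decays across the rare unstable excursions."""
    rng = random.Random(seed)
    x, mode = x0, 0
    for _ in range(steps):
        x = A[mode] * x
        mode = 0 if rng.random() < P[mode][0] else 1
    return x

print(f"|x_200| = {abs(simulate_mjls()):.2e}")
```

This is the sense in which abrupt events like line tripping are "adequately modeled": the controller is designed against the jump process as a whole rather than against each operating condition in isolation.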