Fashion, Cooperation, and Social Interactions
Fashion plays such a crucial role in the evolution of culture and society
that it is regarded as second nature to human beings. Its impact on the
economy is also far from trivial. Interestingly, there are two viewpoints on
what is fashionable that are both extremely widespread yet almost opposite:
conformists think that what is popular is fashionable, while rebels believe
that being different is the essence. Fashion colors are fashionable in the first
sense, and Lady Gaga in the second. We investigate a model in which the
population consists of these two groups of people, located on social
networks (a spatial cellular automata network and small-world networks). This
model captures two fundamental kinds of social interactions (coordination and
anti-coordination) simultaneously, and is also of independent interest to game
theory: it is a hybrid of pure competition and pure cooperation. When a
conformist meets a rebel, they play the zero-sum matching pennies game, which
is pure competition; when two conformists (or two rebels) meet,
they play the (anti-) coordination game, which is pure cooperation. Simulation
shows that simple social interactions greatly promote cooperation: in most
cases people can reach an extraordinarily high level of cooperation, through a
selfish, myopic, naive, and local interaction dynamic (best-response
dynamics). We find that the degree of synchronization also plays a critical
role, but mostly a negative one. Four indices, namely cooperation degree,
average satisfaction degree, equilibrium ratio and complete ratio, are defined
and applied to measure people's cooperation levels from various angles. A phase
transition, as well as the emergence of many interesting geographic patterns in
the cellular automata network, is also observed.
Comment: 21 pages, 12 figures
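As a rough illustration of the dynamic just described, the following sketch (not the authors' code; the lattice size, the von Neumann neighbourhood, the tie-breaking rule, and the satisfaction measure are assumptions) runs synchronous best-response dynamics for a mixed population of conformists and rebels on a periodic square lattice:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50                 # lattice is N x N (an assumption; the paper's sizes may differ)
P_REBEL = 0.5          # fraction of rebels (assumption)
STEPS = 200

is_rebel = rng.random((N, N)) < P_REBEL        # True = rebel, False = conformist
action = rng.integers(0, 2, size=(N, N))       # each agent's current binary action

def neighbour_ones(a):
    """Number of von Neumann neighbours playing action 1 (periodic boundaries)."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1))

for _ in range(STEPS):
    ones = neighbour_ones(action)              # 0..4 neighbours playing action 1
    majority = (ones > 2).astype(int)          # local majority (ties broken toward 0)
    # Best response: a conformist copies the local majority, a rebel plays against it.
    best = np.where(is_rebel, 1 - majority, majority)
    satisfied = (action == best).mean()        # crude "satisfaction degree"
    action = best                              # synchronous (fully synchronized) sweep

print(f"fraction of agents playing a best response: {satisfied:.3f}")
```

Replacing the fully synchronized sweep above with random sequential updates is a natural way to probe the abstract's point that synchronization mostly works against cooperation.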
Stability and Diversity in Collective Adaptation
We derive a class of macroscopic differential equations that describe
collective adaptation, starting from a discrete-time stochastic microscopic
model. The behavior of each agent is a dynamic balance between adaptation that
locally achieves the best action and memory loss that leads to randomized
behavior. We show that, although individual agents interact with their
environment and other agents in a purely self-interested way, macroscopic
behavior can be interpreted as game dynamics. Application to several familiar,
explicit game interactions shows that the adaptation dynamics exhibits a
diversity of collective behaviors. The simplicity of the assumptions underlying
the macroscopic equations suggests that these behaviors should be expected
broadly in collective adaptation. We also analyze the adaptation dynamics from
an information-theoretic viewpoint and discuss self-organization induced by
information flux between agents, giving a novel view of collective adaptation.
Comment: 22 pages, 23 figures; updated references, corrected typos, changed content
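A minimal sketch of the kind of microscopic rule described above, assuming a softmax (Boltzmann) action choice and matching pennies as the example interaction (the paper's notation and parameter values are not reproduced here): each agent reinforces the action it just took and forgets all memories at a fixed rate, the forgetting being what keeps behaviour randomized.

```python
import numpy as np

rng = np.random.default_rng(1)

# Example interaction: two-player matching pennies (an assumption for illustration).
A = np.array([[ 1, -1],
              [-1,  1]])          # payoff to player 0; player 1 receives the negative

alpha, lam, beta = 0.05, 0.1, 2.0  # learning rate, memory-loss rate, choice intensity
Q = [np.zeros(2), np.zeros(2)]     # each agent's reinforcement "memory" per action

def softmax(q, beta):
    z = np.exp(beta * (q - q.max()))
    return z / z.sum()

for _ in range(5000):
    p0, p1 = softmax(Q[0], beta), softmax(Q[1], beta)
    a0 = rng.choice(2, p=p0)
    a1 = rng.choice(2, p=p1)
    r = (A[a0, a1], -A[a0, a1])
    for i, a in enumerate((a0, a1)):
        Q[i] *= 1.0 - alpha * lam   # memory loss: pushes behaviour back toward random
        Q[i][a] += alpha * r[i]     # adaptation: reinforce the action just taken

print("final mixed strategies:", softmax(Q[0], beta), softmax(Q[1], beta))
```

In the continuous-time limit of many such interactions, updates of this type are known to yield replicator-like equations with an extra mutation/forgetting term, which is the sense in which the macroscopic behaviour reads as game dynamics.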
Adaptive Dynamics for Interacting Markovian Processes
The dynamics of information flow in adaptively interacting stochastic processes
is studied. We give an extended form of game dynamics for Markovian processes
and study its behavior to observe information flow through the system. Examples
of the adaptive dynamics for two stochastic processes interacting through the
matching pennies game are exhibited, along with the underlying causal
structure.
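One simple way to make "information flow" concrete for such a pair of processes is a time-lagged mutual information between their move sequences. The sketch below is only a crude plug-in estimate on a toy pair of noisily best-responding matching-pennies players (the noise level, the lag, and the estimator are assumptions; the paper's information-theoretic machinery is not reproduced here):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def lagged_mi(x, y, lag=1):
    """Plug-in estimate (in bits) of I(x_t ; y_{t+lag}) for binary sequences."""
    x, y = np.asarray(x[:-lag]), np.asarray(y[lag:])
    n, mi = len(x), 0.0
    for (a, b), c in Counter(zip(x, y)).items():
        pxy = c / n
        mi += pxy * np.log2(pxy / ((x == a).mean() * (y == b).mean()))
    return mi

# Toy pair of Markovian matching-pennies players: the matcher noisily copies the
# mismatcher's previous move, the mismatcher noisily plays the opposite of the
# matcher's previous move (noise level eps is an assumption).
eps, T = 0.2, 20000
a, b = [rng.integers(2)], [rng.integers(2)]
for _ in range(T):
    a.append(b[-1] if rng.random() > eps else rng.integers(2))
    b.append(1 - a[-2] if rng.random() > eps else rng.integers(2))

print("I(a_t ; b_{t+1}) =", round(lagged_mi(a, b), 3), "bits")
print("I(b_t ; a_{t+1}) =", round(lagged_mi(b, a), 3), "bits")
```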
Learning with Opponent-Learning Awareness
Multi-agent settings are quickly gathering importance in machine learning.
This includes a plethora of recent work on deep multi-agent reinforcement
learning, but can also be extended to hierarchical RL, generative adversarial
networks, and decentralised optimisation. In all these settings, the presence of
multiple learning agents renders the training problem non-stationary and often
leads to unstable training or undesired final results. We present Learning with
Opponent-Learning Awareness (LOLA), a method in which each agent shapes the
anticipated learning of the other agents in the environment. The LOLA learning
rule includes a term that accounts for the impact of one agent's policy on the
anticipated parameter update of the other agents. Results show that the
encounter of two LOLA agents leads to the emergence of tit-for-tat and
therefore cooperation in the iterated prisoners' dilemma, while independent
learning does not. In this domain, LOLA also receives higher payouts compared
to a naive learner, and is robust against exploitation by higher order
gradient-based methods. Applied to repeated matching pennies, LOLA agents
converge to the Nash equilibrium. In a round robin tournament we show that LOLA
agents successfully shape the learning of a range of multi-agent learning
algorithms from the literature, resulting in the highest average returns on the
IPD. We also show that the LOLA update rule can be efficiently calculated using
an extension of the policy gradient estimator, making the method suitable for
model-free RL. The method thus scales to large parameter and input spaces and
nonlinear function approximators. We apply LOLA to a grid world task with an
embedded social dilemma using recurrent policies and opponent modelling. By
explicitly considering the learning of the other agent, LOLA agents learn to
cooperate out of self-interest. The code is at github.com/alshedivat/lola.
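For a single-parameter policy per player, the extra opponent-shaping term can be written out in closed form. The sketch below is a simplification, not the paper's implementation: it uses exact value gradients for single-shot matching pennies with sigmoid policies, rather than policy-gradient estimates for the repeated game, and applies a first-order LOLA-style correction on top of naive gradient ascent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(th1, th2, lr=0.5, eta=1.0, lola=True):
    """One exact-gradient update for single-shot matching pennies.

    The matcher plays heads with prob p = sigmoid(th1), the mismatcher with
    prob q = sigmoid(th2); V1 = (2p - 1)(2q - 1) and V2 = -V1.
    """
    p, q = sigmoid(th1), sigmoid(th2)
    dp, dq = p * (1 - p), q * (1 - q)            # dp/dth1 and dq/dth2
    dV1_1 = 2 * (2 * q - 1) * dp                 # dV1/dth1
    dV1_2 = 2 * (2 * p - 1) * dq                 # dV1/dth2
    dV2_1, dV2_2 = -dV1_1, -dV1_2
    d2V1 = 4 * dp * dq                           # d^2 V1 / dth1 dth2
    d2V2 = -d2V1
    g1, g2 = dV1_1, dV2_2                        # naive gradient-ascent directions
    if lola:
        # Extra LOLA-style term: account for the opponent's own anticipated
        # (naive) parameter update when following one's value gradient.
        g1 += eta * dV1_2 * d2V2
        g2 += eta * dV2_1 * d2V1
    return th1 + lr * g1, th2 + lr * g2

for use_lola in (False, True):
    th1, th2 = 1.0, -1.0
    for _ in range(1000):
        th1, th2 = step(th1, th2, lola=use_lola)
    print(f"{'LOLA' if use_lola else 'naive'}: "
          f"P(heads) = {sigmoid(th1):.3f}, {sigmoid(th2):.3f}")
```

In this toy setting the naive update drifts away from the mixed equilibrium, while the correction term pulls both policies toward p = q = 0.5, mirroring the convergence to the Nash equilibrium reported for repeated matching pennies.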
Neural networks playing ‘matching pennies’ with each other: reproducibility of game dynamics
Reflection is an essential feature of consciousness and possibly the single most important one. This fact allows us to simplify the objective of the search for 'neural correlates of consciousness' and to focus investigations on reflection itself. Reflexive games are a concentrated and pure embodiment of reflection, without the involvement of other higher cognitive functions. In this paper, we use the game ‘matching pennies’ ("Odd-Even") to trace the strategies and possible patterns of recurrent neural network operation. Experimental results show a splitting of all considered game patterns into two groups. A significant difference was observed between these two groups of patterns, indicating a qualitative difference in game dynamics, apparently due to qualitatively different dynamic patterns of neuron excitation in the networks. A similar splitting of players into two groups, differing in the availability of reflection, has been reported by other authors for human players. From this, we can assume that one cause of the splitting is that the presence of reflection in a particular group of recurrent neural networks dramatically changes the game meta-strategy.
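To picture the setup, here is a toy version with fixed, randomly initialized recurrent players rather than the paper's trained networks (the architecture, sizes, and absence of learning are assumptions made purely for illustration): two small Elman-style networks receive the previous round's joint outcome and stochastically choose their next move, and the resulting move sequence is what one would inspect for the patterns discussed above.

```python
import numpy as np

rng = np.random.default_rng(3)

class TinyRNN:
    """Elman-style recurrent player: input = last joint outcome, output = P(play 1)."""
    def __init__(self, hidden=8):
        self.Wx = rng.normal(0, 1.0, (hidden, 2))
        self.Wh = rng.normal(0, 1.0 / np.sqrt(hidden), (hidden, hidden))
        self.w = rng.normal(0, 1.0, hidden)
        self.h = np.zeros(hidden)

    def act(self, my_last, opp_last):
        x = np.array([my_last, opp_last], dtype=float)
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)
        p = 1.0 / (1.0 + np.exp(-self.w @ self.h))
        return int(rng.random() < p)

matcher, mismatcher = TinyRNN(), TinyRNN()
a = b = 0
moves = []
for _ in range(2000):
    a, b = matcher.act(a, b), mismatcher.act(b, a)   # simultaneous moves
    moves.append((a, b))

win_rate = sum(x == y for x, y in moves) / len(moves)  # matcher wins when moves match
print(f"matcher win rate: {win_rate:.3f}")
```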