Emergence of social networks via direct and indirect reciprocity
Many models of social network formation implicitly assume that network properties are static in steady-state. In contrast, actual social networks are highly dynamic: allegiances and collaborations expire and may or may not be renewed at a later date. Moreover, empirical studies show that human social networks are dynamic at the individual level but static at the global level: individuals' degree rankings change considerably over time, whereas network-level metrics such as network diameter and clustering coefficient are relatively stable. There have been some attempts to explain these properties of empirical social networks using agent-based models in which agents play social dilemma games with their immediate neighbours, but can also manipulate their network connections to
strategic advantage. However, such models cannot straightforwardly account for reciprocal behaviour based on reputation scores ("indirect reciprocity"), which is known to play an important role in many economic interactions. In
order to account for indirect reciprocity, we model the network in a bottom-up fashion: the network emerges from the low-level interactions between agents. In so doing we are able to account simultaneously for the effects of both direct
reciprocity (e.g. "tit-for-tat") and indirect reciprocity (helping strangers in order to increase one's reputation). This leads to a strategic equilibrium in the frequencies with which strategies are adopted in the population as a whole, but intermittent cycling over different strategies at the level of individual agents, which in turn gives rise to social networks that
are dynamic at the individual level but stable at the network level.
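The interplay of direct and indirect reciprocity described above can be illustrated with a minimal donation-game sketch. This is an illustrative simplification, not the authors' model: the strategy names (`ALLC`, `ALLD`, `DISC`) and the image-scoring reputation update are standard textbook devices, and all parameter values are assumptions.

```python
import random

def play_round(agents, reputations, benefit=3.0, cost=1.0):
    """One round of a donation game with image scoring (illustrative sketch).

    Each agent is a dict with a 'strategy' key:
      'ALLC' - always cooperate
      'ALLD' - always defect (never help)
      'DISC' - discriminator: help only partners of good standing
               (indirect reciprocity via reputation scores)
    Helping costs the donor `cost`, gives the recipient `benefit`,
    and raises the donor's reputation; refusing lowers it.
    """
    payoffs = [0.0] * len(agents)
    order = list(range(len(agents)))
    random.shuffle(order)
    # Pair agents up as donor/recipient for this round.
    for i in range(0, len(order) - 1, 2):
        donor, recipient = order[i], order[i + 1]
        strategy = agents[donor]['strategy']
        helps = (strategy == 'ALLC' or
                 (strategy == 'DISC' and reputations[recipient] >= 0))
        if helps:
            payoffs[donor] -= cost
            payoffs[recipient] += benefit
            reputations[donor] += 1   # observers score the good deed
        else:
            reputations[donor] -= 1   # refusing help damages reputation
    return payoffs
```

Iterating such rounds while letting agents imitate higher-scoring strategies is one simple way the strategy frequencies can equilibrate at the population level while individual agents keep cycling.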
Learning and innovative elements of strategy adoption rules expand cooperative network topologies
Cooperation plays a key role in the evolution of complex systems. However,
the level of cooperation varies extensively with the topology of agent networks
in the widely used models of repeated games. Here we show that cooperation
remains rather stable when applying the reinforcement-learning strategy
adoption rule Q-learning to a variety of random, regular, small-world,
scale-free and modular network models in repeated, multi-agent Prisoner's
Dilemma and Hawk-Dove games. Furthermore, we found that, in the above model
systems, other long-term learning strategy adoption rules also promote
cooperation, while introducing a low level of noise (as a model of innovation)
to the strategy adoption rules makes the level of cooperation less dependent
on the actual network topology.
Our results demonstrate that long-term learning and random elements in the
strategy adoption rules, when acting together, extend the range of network
topologies enabling the development of cooperation at a wider range of costs
and temptations. These results suggest that a balanced duo of learning and
innovation may help to preserve cooperation during the re-organization of
real-world networks, and may play a prominent role in the evolution of
self-organizing, complex systems.
Comment: 14 pages, 3 figures + a Supplementary Material with 25 pages, 3 tables, 12 figures and 116 references
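The combination of long-term learning and innovative noise can be sketched as a stateless (bandit-style) Q-learning update for the networked Prisoner's Dilemma. This is a simplified illustration under assumed parameters, not the paper's exact model: the graph layout, payoff matrix, learning rate `alpha`, and the epsilon-greedy noise standing in for "innovation" are all assumptions.

```python
import random

# Standard Prisoner's Dilemma payoffs (row player's, column player's),
# chosen here for illustration.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def q_learning_step(graph, q_values, alpha=0.1, epsilon=0.05):
    """One synchronous step of Q-learning strategy adoption on a network.

    graph:    dict mapping agent -> list of neighbours
    q_values: dict mapping agent -> {'C': float, 'D': float}
    epsilon:  exploration noise, the 'innovation' element that keeps
              cooperation less dependent on the network topology
    """
    # Each agent plays its higher-valued action, exploring at random
    # with probability epsilon.
    actions = {}
    for agent, q in q_values.items():
        if random.random() < epsilon:
            actions[agent] = random.choice(['C', 'D'])
        else:
            actions[agent] = max(q, key=q.get)
    # Accumulate payoffs from games with immediate neighbours.
    rewards = {agent: 0.0 for agent in graph}
    for agent, neighbours in graph.items():
        for other in neighbours:
            rewards[agent] += PAYOFF[(actions[agent], actions[other])][0]
    # Long-term learning: nudge the Q-value of the action taken
    # toward the reward just received.
    for agent in graph:
        q = q_values[agent]
        q[actions[agent]] += alpha * (rewards[agent] - q[actions[agent]])
    return actions, rewards
```

Running many such steps on, say, a small-world versus a scale-free graph is one way to probe how topology-sensitive the resulting cooperation level is under different `epsilon` values.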