19 research outputs found

    On the evolutionary language game in structured and adaptive populations

    We propose an evolutionary model for the emergence of a shared linguistic convention in a population of agents whose social structure is modelled by complex networks. Through agent-based simulations, we show a process of convergence towards a common language, and explore how the topology of the underlying networks affects its dynamics. We find that small-world effects act to speed up convergence, but observe no effect of topology on the communicative efficiency of common languages. We further explore differences in agent learning, discriminating between scenarios in which new agents learn from their parents (vertical transmission) and scenarios in which they learn from their neighbors (oblique transmission), finding that vertical transmission results in faster convergence and generally higher communicability. Optimal languages form when parental learning is dominant but a small amount of neighbor learning is included. Finally, we illustrate an exclusion effect that leads to core-periphery networks in an adaptive-networks setting, when agents attempt to reconnect to better communicators in the population.
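
    To make the process concrete, the following is a minimal sketch of the kind of simulation loop described above, under illustrative assumptions that are not taken from the paper: a language is a binary signal-object matrix, the pairwise payoff is the fraction of matching entries, and a replaced agent copies the language of its fittest neighbour with a small amount of learning noise.

```python
# Minimal sketch of a convergence loop for the language game.
# Assumptions (illustrative, not the paper's exact implementation):
# language = binary m x n matrix; payoff = fraction of matching entries;
# death-birth update in which the fittest neighbour acts as parent.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_objects, m_signals = 5, 5
N = 100

G = nx.watts_strogatz_graph(N, k=4, p=0.1, seed=0)  # small-world topology
lang = {v: rng.integers(0, 2, size=(m_signals, n_objects)) for v in G}

def payoff(a, b):
    """Illustrative communicative payoff: fraction of matching entries."""
    return (a == b).mean()

def fitness(v):
    """Average payoff of agent v against its network neighbours."""
    return np.mean([payoff(lang[v], lang[u]) for u in G[v]])

for step in range(20_000):
    v = rng.choice(N)                        # agent whose language is replaced
    parent = max(G[v], key=fitness)          # fittest neighbour acts as parent
    child = lang[parent].copy()
    noise = rng.random(child.shape) < 0.01   # small learning noise (mutation)
    child[noise] = 1 - child[noise]
    lang[v] = child

print("mean payoff:", np.mean([fitness(v) for v in G]))
```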

    Language competition and the scale-merit effect in language evolution.

    An important phenomenon observed in the evolution of languages in the real world is the scale-merit or bandwagon effect, whereby more popular languages are preferred by speakers for their higher utility, and thereby become even more widespread in the population. As discussed previously, our model includes a bias towards more popular languages, since the payoffs of any individual agent depend on the frequency of languages represented in its immediate neighbourhood. To showcase this effect, we have conducted a series of simulations of language competition, which proceed as follows. Instead of initializing the population with random languages, we generate two languages A and B that yield the same payoffs with respect to themselves. These languages are then distributed randomly among the population of agents in given proportions (see first row of table). Reproduction and learning dynamics proceed as normal. The results shown in the table are for random networks, N = 400, δ = 1, and λ = 0. We observe that the proportion of simulation runs that result in A being dominant, i.e. the population reaches a stable state where all agents speak A, increases with the number of A agents in the initial population. Since A has no other advantage over B, this shows that more numerous languages tend to be more successful in the final population. Additionally, we show the average similarity of languages in the population both at the start and end of the simulations. The similarity of a language C to language A is defined as S(C, A) = 1 - H/(m × n), where H is the Hamming distance between the two languages, and m × n is the maximum Hamming distance given n objects and m signals. The average similarity is calculated using a weighted arithmetic mean over the distribution of languages in the population. We see that similarities remain stable, with already popular languages maintaining popularity or giving rise to similar languages by the end of the simulations. There is no drastic convergence to either of the initial languages, primarily due to the averaging effects and noise introduced by neighbor sampling (δ = 1), which slows down and softens convergence (see main text for further discussion of this point).
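
    As a worked example of the similarity measure reconstructed above, S(C, A) = 1 - H/(m × n), the short sketch below computes the similarity of two languages and the population-level weighted mean. The matrix contents and language frequencies are illustrative.

```python
# Worked example of the similarity measure S(C, A) = 1 - H / (m * n),
# where H is the Hamming distance between two binary m x n languages.
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 5                                # m signals, n objects (illustrative)

A = rng.integers(0, 2, size=(m, n))
C = A.copy()
C[0, :2] = 1 - C[0, :2]                    # flip two entries, so H = 2

H = np.sum(A != C)                         # Hamming distance between languages
S = 1 - H / (m * n)
print(S)                                   # 1 - 2/25 = 0.92

# Population-level value: weighted arithmetic mean over the language
# distribution (the frequencies here are illustrative).
langs, freqs = [A, C], np.array([0.7, 0.3])
sims = [1 - np.sum(A != L) / (m * n) for L in langs]
print(np.average(sims, weights=freqs))     # 0.7 * 1.0 + 0.3 * 0.92 = 0.976
```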

    Differences in final payoffs of languages Fconv after convergence on different network topologies.

    There are no significant differences in final payoffs for different network topologies, except for even-sized lattices. Results are for N = 500 and bars indicate standard errors.

    Example of evolutionary dynamics for various realizations of the Monte Carlo simulation and an average over 30 runs.

    The average payoff FN is shown for both individual runs (blue lines) and their average (orange line). This example is for N = 400 run on a scale-free network, with Fmax = 5 and tmax = 2 × 10^6.

    Scaling of final payoffs Fconv with population size N on different networks.

    No significant difference in payoffs can be observed. See main text for discussion on why this might be the case.

    Differences in mean convergence time to a common language tconv on different network topologies (left, plot) and average shortest path length L for all networks (right, table).

    Convergence times are roughly correlated with average shortest path lengths, with the exception of even-sized lattices. Results are for N = 500 and bars indicate standard errors.
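
    For reference, the average shortest path length L of a given topology can be computed directly. The sketch below uses networkx generators as a plausible selection of the topologies named in these captions (random, scale-free, small-world, lattice); the parameters are illustrative, not the paper's exact constructions.

```python
# Average shortest path length L for a few candidate topologies.
# Generator choices and parameters are illustrative assumptions.
import networkx as nx

N = 500
nets = {
    "random (Erdos-Renyi)": nx.gnp_random_graph(N, 8 / N, seed=0),
    "scale-free (Barabasi-Albert)": nx.barabasi_albert_graph(N, 2, seed=0),
    "small-world (Watts-Strogatz)": nx.watts_strogatz_graph(N, 4, 0.1, seed=0),
    "2D lattice, periodic (25 x 20)": nx.grid_2d_graph(25, 20, periodic=True),
}
for name, G in nets.items():
    if not nx.is_connected(G):        # L is only defined within a component
        G = G.subgraph(max(nx.connected_components(G), key=len))
    print(f"{name}: L = {nx.average_shortest_path_length(G):.2f}")
```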

    Effects of population structure on the evolution of linguistic convention

    We define a model for the evolution of linguistic convention in a population of agents embedded on a network, and consider the effects of topology on the population-level language dynamics. Individuals are subject to evolutionary forces that over time result in the adoption of a shared language throughout the population. We examine differences in convergence time to a common language, and in that language's communicative efficiency, under different underlying social structures and population sizes. We find that shorter average path lengths contribute to faster convergence and that the final payoff of languages is unaffected by the underlying topology. Compared to models for the emergence of linguistic convention based on self-organization, we find similarities in the effects of average path lengths, but differences in the role of degree heterogeneity.

    Evolution of average payoffs FN over time on random regular graphs subject to different neighbor influence δ.

    Results are similar to those presented for other network structures, as discussed for Fig 8 and S4 Fig. Briefly, for δ = 0 convergence is fast, while for δ = 1 it is much slower and languages have lower average payoffs FN. For δ = 0.5, we see a balance, whereby Fconv is maximized while convergence remains relatively fast. Results are for N = 400 on random 4-regular graphs.
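
    These captions do not spell out the learning rule behind δ, so the following is only a hedged guess at its role, consistent with the description of neighbor sampling above: when an agent is replaced, each entry of its language is copied from the parent with probability 1 - δ and from a sampled neighbor with probability δ. The rule and all parameters below are assumptions for illustration.

```python
# Hedged sketch of delta as "neighbor influence": each matrix entry is
# inherited from the parent with probability 1 - delta and copied from a
# sampled neighbor with probability delta. Assumed for illustration only;
# the paper's exact learning rule may differ.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
G = nx.random_regular_graph(4, 400, seed=0)   # random 4-regular graph, N = 400
m, n = 5, 5
lang = {v: rng.integers(0, 2, size=(m, n)) for v in G}

def learn(parent, neighbors, delta):
    """Copy entries from the parent, mixing in a neighbor's with prob. delta."""
    child = lang[parent].copy()
    mask = rng.random(child.shape) < delta
    model = lang[rng.choice(neighbors)]       # a sampled neighbor's language
    child[mask] = model[mask]
    return child

v = 0
new_language = learn(parent=1, neighbors=list(G[v]), delta=0.5)
```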

    Demonstration of a gridlock pattern on 2D regular lattices.

    The pattern can occur either as two languages in a checkered arrangement on the lattice (left), or as one dominant language distributed in a pattern with multiple different languages in between (middle). Adding a single edge between any two nodes (right) disturbs the pattern and leads to a convergence similar to that of odd-sized lattices. A lattice with static boundaries is shown here for visualization purposes; periodic boundaries were used in simulations.
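
    One way to read the checkered case: an even-sized periodic lattice is bipartite, so two languages can tile it with every agent facing only speakers of the other language, and an extra edge joining two same-parity nodes destroys that structure. This is an illustrative observation about the graph, not the paper's full dynamical argument; the lattice size below is arbitrary.

```python
# Sketch of the checkerboard gridlock on an even-sized periodic lattice.
import networkx as nx

G = nx.grid_2d_graph(20, 20, periodic=True)   # even-sized, periodic lattice
color = {(i, j): (i + j) % 2 for i, j in G}   # checkerboard: language A or B

# Every edge joins the two languages, so no agent has a same-language
# neighbour: the checkered standoff is geometrically consistent.
assert all(color[u] != color[v] for u, v in G.edges)
print(nx.is_bipartite(G))                     # True

G.add_edge((0, 0), (2, 0))                    # one extra edge, same parity
print(nx.is_bipartite(G))                     # False: the tiling is broken
```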

    Illustration of the convergence dynamics towards a common language.

    Each node represents a single agent, and is colored (a) based on the agent's payoff, with a lighter color implying higher payoff, and (b) based on agents' languages, with each color representing a distinct language. In the initial generation, all agents are assigned different, randomly generated languages (b1) that are not well suited for collective communication (a1); correspondingly, payoffs are uniformly low. As the simulation progresses, some languages are adopted by multiple agents (b2), and all languages become more alike, yielding higher payoffs (a2). By the end, all agents adopt the same language (b3), and the payoff of communication is the maximum possible given that language (a3). (Colors in (a) and (b) are not related.)