
Imitative learning as a connector of collective brains

Abstract

The notion that cooperation can help a group of agents solve problems more efficiently than if those agents worked in isolation is widespread, despite there being little quantitative groundwork to support it. Here we consider a primordial form of cooperation -- imitative learning -- that allows an effective exchange of information between agents, which are viewed as the processing units of a social intelligence system or collective brain. In particular, we use agent-based simulations to study the performance of a group of agents in solving a cryptarithmetic problem. An agent can either perform local random moves to explore the solution space of the problem or imitate a model agent -- the best-performing agent in its influence network. There is a complex trade-off between the number of agents N and the imitation probability p, and at the optimal balance of these parameters we observe a thirtyfold reduction in the computational cost of finding the solution of the cryptarithmetic problem compared with the independent search. If those parameters are set far from their optimal values, however, imitative learning can greatly impair the group's performance. The observed maladaptation of imitative learning for large N offers an alternative explanation for the group size of social animals.
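The abstract describes the search procedure only at a high level. The Python sketch below is one possible reading of it, not the authors' implementation: it assumes a SEND + MORE = MONEY instance, a cost equal to the absolute arithmetic error of a letter-to-digit assignment, a swap of two letters' digits as the local random move, copying a single differing letter from the model agent as the imitation step, and a fully connected influence network (every agent sees the current best). The group size N and the imitation probability p are the two parameters discussed above.

```python
import random

# Illustrative cryptarithmetic instance (an assumption, not necessarily the
# puzzle used in the paper): SEND + MORE = MONEY.
WORDS = ("SEND", "MORE", "MONEY")
LETTERS = sorted(set("".join(WORDS)))  # 8 distinct letters to assign digits to


def cost(assignment):
    """Absolute error of the arithmetic constraint under a letter->digit map."""
    if any(assignment[w[0]] == 0 for w in WORDS):
        return 10 ** 7  # penalise leading zeros so candidates stay well-formed
    def value(word):
        return int("".join(str(assignment[c]) for c in word))
    return abs(value(WORDS[0]) + value(WORDS[1]) - value(WORDS[2]))


def random_assignment():
    """Random injective mapping from letters to digits."""
    return dict(zip(LETTERS, random.sample(range(10), len(LETTERS))))


def local_move(assignment):
    """Elementary exploration move: swap the digits of two random letters."""
    new = dict(assignment)
    a, b = random.sample(LETTERS, 2)
    new[a], new[b] = new[b], new[a]
    return new


def imitate(assignment, model):
    """Copy one differing letter-digit pair from the model, swapping digits
    within the agent's own assignment to keep the mapping injective."""
    new = dict(assignment)
    diff = [c for c in LETTERS if assignment[c] != model[c]]
    if not diff:  # agent already identical to the model: fall back to exploring
        return local_move(new)
    c = random.choice(diff)
    target = model[c]
    for other, d in new.items():
        if d == target:  # some other letter holds the target digit: swap
            new[other] = new[c]
            break
    new[c] = target
    return new


def search(n_agents=10, p=0.5, max_steps=200_000, seed=None):
    """Imitative-learning search: at each update a random agent either explores
    locally or, with probability p, imitates the current best agent (the model).
    The acceptance rule (every move is kept) is an assumption of this sketch."""
    random.seed(seed)
    agents = [random_assignment() for _ in range(n_agents)]
    costs = [cost(a) for a in agents]
    for step in range(1, max_steps + 1):
        model = agents[min(range(n_agents), key=costs.__getitem__)]
        i = random.randrange(n_agents)
        candidate = imitate(agents[i], model) if random.random() < p else local_move(agents[i])
        agents[i], costs[i] = candidate, cost(candidate)
        if costs[i] == 0:
            return step, agents[i]  # number of agent updates until a solution
    return None, None


if __name__ == "__main__":
    steps, solution = search(n_agents=10, p=0.5, seed=1)
    print(steps, solution)
```

Setting p = 0 in this sketch recovers the independent (imitation-free) search used as the baseline in the abstract, so the cost of the two regimes can be compared by averaging the returned step counts over many seeds.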
