    Exploring NK Fitness Landscapes Using Imitative Learning

    The idea that a group of cooperating agents can solve problems more efficiently than when those agents work independently is hardly controversial, despite our limited understanding of the conditions that make cooperation a successful problem-solving strategy. Here we investigate the performance of a group of agents in locating the global maxima of NK fitness landscapes with varying degrees of ruggedness. Cooperation is taken into account through imitative learning and the broadcasting of messages that inform the group of each agent's fitness. We find a trade-off between the group size and the frequency of imitation: for rugged landscapes, too much imitation or too large a group yields poorer performance than that of independent agents. By decreasing the diversity of the group, imitative learning may lead to duplication of work and hence to a decrease of the group's effective size. However, when the parameters are set to their optimal values, the cooperative group substantially outperforms the independent agents.
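    As a rough illustration of the dynamics described above, the sketch below builds an NK landscape over binary strings and lets a small group of agents alternate between blind single-bit flips and copying one differing bit from the current best agent. The parameter names (N, K, n_agents, p) and the elementary moves are assumptions for illustration, not the authors' implementation.

```python
import random

def make_nk_landscape(N, K, seed=0):
    """NK landscape: each locus contributes a random value that depends on
    its own state and the states of K randomly chosen epistatic neighbours."""
    rng = random.Random(seed)
    neighbours = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [dict() for _ in range(N)]

    def fitness(genome):
        total = 0.0
        for i in range(N):
            key = (genome[i],) + tuple(genome[j] for j in neighbours[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()   # contribution drawn once, then cached
            total += tables[i][key]
        return total / N

    return fitness

def imitative_search(fitness, N, n_agents=10, p=0.3, steps=2000, seed=1):
    """Agents either flip a random bit (blind search) or, with probability p,
    copy one bit where they differ from the best (broadcast) agent."""
    rng = random.Random(seed)
    agents = [[rng.randint(0, 1) for _ in range(N)] for _ in range(n_agents)]
    for _ in range(steps):
        fits = [fitness(a) for a in agents]
        model = agents[fits.index(max(fits))]      # best agent broadcasts its string
        for a in agents:
            if a is not model and rng.random() < p:
                diff = [i for i in range(N) if a[i] != model[i]]
                if diff:
                    i = rng.choice(diff)
                    a[i] = model[i]                # imitation: copy one differing bit
            else:
                i = rng.randrange(N)
                a[i] ^= 1                          # blind move: flip one random bit
    return max(agents, key=fitness)
```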

    An Evolutionary Strategy based on Partial Imitation for Solving Optimization Problems

    In this work we introduce an evolutionary strategy for solving combinatorial optimization tasks, i.e. problems characterized by a discrete search space. In particular, we focus on the Traveling Salesman Problem (TSP), a well-known NP-hard problem whose search space grows exponentially with the number of cities. Solutions of the TSP can be encoded as arrays of cities and evaluated by a fitness computed according to a cost function (e.g. the length of a path). Our method is based on the evolution of an agent population by means of an imitative mechanism that we call `partial imitation'. Agents receive a random solution and then, by interacting among themselves, may imitate the solutions of agents with a higher fitness. Since the imitation mechanism is only partial, agents copy only one entry (randomly chosen) of another array (i.e. solution). In doing so, the population converges towards a shared solution, behaving like a spin system undergoing a cooling process, i.e. driven towards an ordered phase. We highlight that the adopted `partial imitation' mechanism allows the population to generate new solutions over time before reaching the final equilibrium. Results of numerical simulations show that our method is able to find, in a finite time, both optimal and suboptimal solutions, depending on the size of the considered search space.
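    The following sketch shows one plausible reading of the `partial imitation' move on TSP tours: an agent copies a single randomly chosen entry from a fitter agent's tour and repairs the permutation with a swap so the tour stays valid. The repair step and the pairwise update schedule are assumptions, not the paper's exact protocol.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour, given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def partial_imitation(agent, model, rng):
    """Copy one randomly chosen entry of the fitter agent's tour; swap the
    displaced city so the result is still a permutation of all cities."""
    pos = rng.randrange(len(agent))
    city = model[pos]
    if agent[pos] != city:
        j = agent.index(city)
        agent[pos], agent[j] = agent[j], agent[pos]
    return agent

def evolve(dist, n_agents=50, steps=10000, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        # the worse of the two agents imitates one entry of the better one
        better, worse = (i, j) if tour_length(pop[i], dist) <= tour_length(pop[j], dist) else (j, i)
        partial_imitation(pop[worse], pop[better], rng)
    return min(pop, key=lambda t: tour_length(t, dist))
```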

    Stochastic simulation framework for the Limit Order Book using liquidity motivated agents

    In this paper we develop a new form of agent-based model for limit order books based on heterogeneous trading agents whose motivations are liquidity driven. These agents are abstractions of real market participants, expressed in a stochastic model framework. We develop an efficient way to perform statistical calibration of the model parameters on Level 2 limit order book data from Chi-X, based on a combination of indirect inference and multi-objective optimisation. We then demonstrate how such an agent-based modelling framework can be used to test exchange regulations, as well as to inform brokerage decisions and other trading-based scenarios.
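    For a flavour of what a liquidity-motivated agent-based order book can look like, the toy sketch below alternates between liquidity providers posting limit orders near the mid-price and liquidity takers crossing the spread. The agent mix, arrival probabilities, and price grid are purely illustrative assumptions and are unrelated to the calibrated Chi-X model described in the abstract.

```python
import heapq
import random

def simulate(steps=1000, seed=0):
    """Toy limit order book with two agent types: providers post limit orders,
    takers submit market orders against the best quote."""
    rng = random.Random(seed)
    bids, asks = [], []          # bids as a max-heap (negated prices), asks as a min-heap
    mid = 100.0
    trades = []
    for _ in range(steps):
        if rng.random() < 0.7:   # liquidity provider: limit order a few ticks from the mid
            price = round(mid + rng.choice([-1, 1]) * rng.randint(1, 5) * 0.01, 2)
            if price < mid:
                heapq.heappush(bids, -price)
            else:
                heapq.heappush(asks, price)
        else:                    # liquidity taker: market order hits the opposite side
            if rng.random() < 0.5 and asks:
                trades.append(heapq.heappop(asks))
            elif bids:
                trades.append(-heapq.heappop(bids))
        if bids and asks:
            mid = (-bids[0] + asks[0]) / 2.0   # mid-price from best bid and best ask
    return trades, mid
```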

    Imitative learning as a connector of collective brains

    The notion that cooperation can help a group of agents solve problems more efficiently than if those agents worked in isolation is prevalent, despite the little quantitative groundwork to support it. Here we consider a primordial form of cooperation -- imitative learning -- that allows an effective exchange of information between agents, which are viewed as the processing units of a social intelligence system or collective brain. In particular, we use agent-based simulations to study the performance of a group of agents in solving a cryptarithmetic problem. An agent can either perform local random moves to explore the solution space of the problem or imitate a model agent -- the best-performing agent in its influence network. There is a complex trade-off between the number of agents N and the imitation probability p, and for the optimal balance between these parameters we observe a thirtyfold reduction in the computational cost of finding the solution of the cryptarithmetic problem as compared with the independent search. If those parameters are chosen far from the optimal setting, however, then imitative learning can greatly impair the performance of the group. The observed maladaptation of imitative learning for large N offers an alternative explanation for the group size of social animals.
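    A minimal sketch of the search described above, using SEND + MORE = MONEY as the cryptarithmetic instance: with probability p an agent copies one letter-digit assignment from the best agent, otherwise it swaps two of its own digits. The puzzle instance, the fully connected influence network, and the omission of leading-zero constraints are assumptions for illustration, not the authors' exact setup.

```python
import random

LETTERS = "SENDMORY"   # the 8 distinct letters of SEND + MORE = MONEY

def cost(assign):
    """Absolute error of SEND + MORE - MONEY under a letter->digit assignment."""
    val = lambda w: int("".join(str(assign[c]) for c in w))
    return abs(val("SEND") + val("MORE") - val("MONEY"))

def random_assignment(rng):
    return dict(zip(LETTERS, rng.sample(range(10), len(LETTERS))))

def search(n_agents=20, p=0.5, max_steps=200000, seed=0):
    rng = random.Random(seed)
    agents = [random_assignment(rng) for _ in range(n_agents)]
    for step in range(max_steps):
        costs = [cost(a) for a in agents]
        if min(costs) == 0:
            return agents[costs.index(0)], step          # solution found
        model = agents[costs.index(min(costs))]          # best agent in the (fully connected) network
        for a in agents:
            if a is model:
                continue
            if rng.random() < p:
                # imitation: copy the model's digit for one letter, swapping to keep digits distinct
                c = rng.choice(LETTERS)
                d = model[c]
                holder = next((k for k, v in a.items() if v == d), None)
                if holder is not None:
                    a[holder] = a[c]
                a[c] = d
            else:
                # blind local move: swap the digits of two random letters
                x, y = rng.sample(LETTERS, 2)
                a[x], a[y] = a[y], a[x]
    return None, max_steps
```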