    On the Advantages of Non-Cooperative Behavior in Agent Populations

    We investigate how much cooperation between agents in a population is required during reward collection to minimize the overall collection time. In our computer simulation, agents have the option to broadcast the position of a reward to neighboring agents with a normally distributed certainty. We modify the standard deviation of this certainty to investigate its optimum setting for a varying number of agents and rewards. Results reveal that an optimum exists and that, under optimum conditions, (a) the collection time and the number of agents and (b) the collection time and the number of rewards follow a power-law relationship. We suggest that the standard deviation can be self-tuned via a feedback loop and list some examples from nature where we believe this self-tuning takes place.
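
    The simulation described above can be sketched in a few lines of Python. The sketch below is an assumed, minimal reading of the setup, not the authors' code: agents move on a square world, an agent that comes close to a reward broadcasts its position to all others blurred by Gaussian noise, and the standard deviation of that noise (sigma here) is the cooperation parameter being tuned. All names and numeric choices (detection radius, world size, step limit) are illustrative.

    import math
    import random

    def simulate(n_agents=10, n_rewards=20, sigma=1.0, world=50.0,
                 max_steps=100_000, seed=0):
        rng = random.Random(seed)
        agents = [[rng.uniform(0, world), rng.uniform(0, world)] for _ in range(n_agents)]
        rewards = [(rng.uniform(0, world), rng.uniform(0, world)) for _ in range(n_rewards)]
        targets = [None] * n_agents  # believed reward position or random waypoint

        for step in range(1, max_steps + 1):
            for i, (ax, ay) in enumerate(agents):
                # an agent near a reward broadcasts its position to everyone,
                # blurred by Gaussian noise with standard deviation sigma
                for rx, ry in rewards:
                    if math.hypot(rx - ax, ry - ay) < 2.0:
                        for j in range(n_agents):
                            targets[j] = (rx + rng.gauss(0, sigma),
                                          ry + rng.gauss(0, sigma))
                        break
                if targets[i] is None:  # nothing known: wander to a random waypoint
                    targets[i] = (rng.uniform(0, world), rng.uniform(0, world))
                tx, ty = targets[i]
                d = math.hypot(tx - ax, ty - ay)
                if d < 1.0:
                    targets[i] = None  # arrived; forget the (possibly stale) target
                else:
                    agents[i] = [ax + (tx - ax) / d, ay + (ty - ay) / d]  # unit step
            # remove rewards that some agent has reached
            rewards = [r for r in rewards
                       if all(math.hypot(r[0] - x, r[1] - y) >= 1.0 for x, y in agents)]
            if not rewards:
                return step  # overall collection time in simulation steps
        return max_steps

    # Sweeping sigma exposes the optimum between full cooperation (sigma = 0)
    # and effectively none (large sigma):
    # for s in (0.0, 1.0, 5.0, 20.0):
    #     print(s, simulate(sigma=s))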

    Teleonomic Creativity: First Insights

    We extend the scope of creativity from the traditional realm of the human mind, a goal-seeking (teleological) system, to all end-directed (teleonomic) systems. Using the simple metaphor of an agent exploiting local hills and exploring global hills on a fitness landscape, we describe commonalities between (pseudo-)serendipity, humour, mistakes, (bordering) madness and analogy making, phenomena often associated with creativity. We suggest that these are observed characteristics of a single process that we call teleonomic creativity.
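
    The hill metaphor can be made concrete with a short, assumed sketch (not taken from the paper): an agent on a toy one-dimensional fitness landscape either exploits by climbing the hill it currently occupies or explores by jumping to a distant random point. The landscape and the explore_prob parameter below are purely illustrative.

    import math
    import random

    def fitness(x):
        # a toy multi-peaked landscape: several local hills of differing heights
        return math.sin(x) + 0.6 * math.sin(3.1 * x) + 0.3 * math.sin(7.7 * x)

    def search(steps=1000, explore_prob=0.1, step_size=0.05, seed=0):
        rng = random.Random(seed)
        x = rng.uniform(-10, 10)
        best_x, best_f = x, fitness(x)
        for _ in range(steps):
            if rng.random() < explore_prob:
                x = rng.uniform(-10, 10)                 # explore a distant hill
            else:
                candidate = x + rng.uniform(-step_size, step_size)
                if fitness(candidate) > fitness(x):      # exploit the local hill
                    x = candidate
            if fitness(x) > best_f:
                best_x, best_f = x, fitness(x)
        return best_x, best_f

    # print(search())  # returns the best point found and its fitness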
