2,474 research outputs found

    Cognitive Architecture to Analyze the Effect of Intrinsic Motivation with Metacognition over Extrinsic Motivation on Swarm Agents

    This research work describes the setup of a framework for testing the performance of intrinsically motivated swarm agents against extrinsically motivated ones. The performance is tested through simulation. The results demonstrate that agents with intrinsic motivation for a specific goal have high metacognitive ability, and that the group performance of agents with metacognitive ability is better than that of a group of extrinsically motivated agents exhibiting only cognitive ability. Goal-setting theory of motivation is applied to the group of agents in order to analyse their intelligent behaviour. This research focuses mainly on why and how the group performance of swarm agents exceeds that of individuals. The approach requires the design of an ambient testbed in which swarm agents demonstrate actions ranging from cognitive to metacognitive. The research aims to show that the higher group performance of swarm agents is due to the choice of intrinsically motivated agents, and thereby that intrinsic motivation outperforms extrinsic motivation. Agent behaviour in a group can be analysed using metrics such as resource collection, life expectancy, level of motivation and assigned task.
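    The comparison described above could be prototyped along the following lines. This is a minimal sketch under assumed rules: the Agent class, the intermittent external reward and the exploration rule are all illustrative assumptions, not the paper's actual testbed.

```python
import random

# Hypothetical sketch: compare groups of intrinsically vs. extrinsically
# motivated agents on a simple resource-collection task. All names and the
# motivation rules below are illustrative assumptions.

class Agent:
    def __init__(self, intrinsic):
        self.intrinsic = intrinsic
        self.resources = 0
        self.energy = 100          # crude proxy for "life expectancy"

    def step(self, reward_available):
        # Extrinsic agents act only when an external reward is offered;
        # intrinsic agents also explore on their own.
        if self.intrinsic or reward_available:
            self.resources += random.random()   # stochastic collection
            self.energy -= 1

def run_group(intrinsic, steps=200, n_agents=20):
    agents = [Agent(intrinsic) for _ in range(n_agents)]
    for t in range(steps):
        reward_available = (t % 3 == 0)         # external reward is intermittent
        for a in agents:
            a.step(reward_available)
    return sum(a.resources for a in agents) / n_agents

if __name__ == "__main__":
    print("intrinsic group, mean resources:", run_group(True))
    print("extrinsic group, mean resources:", run_group(False))
```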

    Inverse Reinforcement Learning in Swarm Systems

    Inverse reinforcement learning (IRL) has become a useful tool for learning behavioral models from demonstration data. However, IRL remains mostly unexplored for multi-agent systems. In this paper, we show how the principle of IRL can be extended to homogeneous large-scale problems, inspired by the collective swarming behavior of natural systems. In particular, we make the following contributions to the field: 1) We introduce the swarMDP framework, a sub-class of decentralized partially observable Markov decision processes endowed with a swarm characterization. 2) Exploiting the inherent homogeneity of this framework, we reduce the resulting multi-agent IRL problem to a single-agent one by proving that the agent-specific value functions in this model coincide. 3) To solve the corresponding control problem, we propose a novel heterogeneous learning scheme that is particularly tailored to the swarm setting. Results on two example systems demonstrate that our framework is able to produce meaningful local reward models from which we can replicate the observed global system dynamics.
    Comment: 9 pages, 8 figures; version accepted at AAMAS 201
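    The homogeneity reduction in contribution 2) can be illustrated with a small sketch: because all agents share one local policy, their local trajectories can be pooled and treated as a single-agent IRL dataset, for example when estimating feature expectations for feature-matching IRL. The feature map and data layout below are assumptions, not the swarMDP implementation.

```python
import numpy as np

# Sketch of the homogeneity argument: local observation-action trajectories
# from every agent in the swarm are pooled into one single-agent dataset,
# and discounted empirical feature expectations are computed from the pool.

def feature_expectations(trajectories, feature_fn, gamma=0.95):
    """Discounted empirical feature expectations over pooled trajectories."""
    mu = None
    for traj in trajectories:                       # traj: list of (obs, action)
        for t, (obs, action) in enumerate(traj):
            phi = feature_fn(obs, action)
            mu = phi * gamma**t if mu is None else mu + phi * gamma**t
    return mu / len(trajectories)

def pool_swarm_demos(per_agent_trajs):
    # Pool per-agent local trajectories into a single dataset.
    return [traj for agent_trajs in per_agent_trajs for traj in agent_trajs]

if __name__ == "__main__":
    feature_fn = lambda obs, a: np.array([obs, a, obs * a], dtype=float)
    demos = [[[(0.1, 1), (0.3, 0)]], [[(0.2, 1), (0.4, 1)]]]   # two agents
    pooled = pool_swarm_demos(demos)
    print(feature_expectations(pooled, feature_fn))
```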

    Future state maximisation as an intrinsic motivation for decision making

    The concept of an “intrinsic motivation” is used in the psychology literature to distinguish between behaviour which is motivated by the expectation of an immediate, quantifiable reward (“extrinsic motivation”) and behaviour which arises because it is inherently useful, interesting or enjoyable. Examples of the latter can include curiosity-driven behaviour such as exploration and the accumulation of knowledge, as well as developing skills that might not be immediately useful but that have the potential to be re-used in a variety of different future situations. In this thesis, we examine a candidate for an intrinsic motivation with wide-ranging applicability, which we refer to as “future state maximisation”. Loosely speaking, this is the idea that, taking everything else to be equal, decisions should be made so as to maximally keep one's options open, or to give the maximal amount of control over what one can potentially do in the future. Our goal is to study how this principle can be applied in a quantitative manner, as well as identifying examples of systems where doing so could be useful in either explaining or generating behaviour. We consider a number of examples; however, our primary application is to a model of collective motion in which we consider a group of agents equipped with simple visual sensors, moving around in two dimensions. In this model, agents aim to make decisions about how to move so as to maximise the amount of control they have over the potential visual states that they can access in the future. We find that with each agent following this simple, low-level motivational principle a swarm spontaneously emerges in which the agents exhibit rich collective behaviour, remaining cohesive and highly aligned. Remarkably, the emergent swarm also shares a number of features which are observed in real flocks of starlings, including scale-free correlations and marginal opacity. We go on to explore how the model can be developed to allow us to manipulate and control the swarm, as well as looking at heuristics which are able to mimic future state maximisation whilst requiring significantly less computation, and so could plausibly operate under animal cognition.
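    A toy illustration of the future-state-maximisation principle (not the thesis's visual-swarm model): on a bounded one-dimensional grid, each candidate action is scored by how many distinct states remain reachable within a fixed horizon afterwards, and the agent picks the action that keeps the most options open. Grid, actions and horizon are illustrative assumptions.

```python
# Future state maximisation on a bounded 1-D grid: choose the action that
# maximises the number of distinct states reachable within a fixed horizon.

ACTIONS = (-1, 0, +1)
LOW, HIGH = 0, 10

def step(state, action):
    return min(HIGH, max(LOW, state + action))

def reachable_states(state, horizon):
    """All states reachable within `horizon` further steps."""
    frontier, seen = {state}, {state}
    for _ in range(horizon):
        frontier = {step(s, a) for s in frontier for a in ACTIONS}
        seen |= frontier
    return seen

def fsm_action(state, horizon=3):
    # Score each action by the size of the future state set it leaves open.
    return max(ACTIONS, key=lambda a: len(reachable_states(step(state, a), horizon)))

if __name__ == "__main__":
    # Near the boundary the agent prefers to move inward, where more futures remain.
    print(fsm_action(0))   # expected: +1
    print(fsm_action(5))
```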

    Horizontal and Vertical Multiple Implementations in a Model of Industrial Districts

    In this paper we discuss strategies concerning the implementation of an agent-based simulation of complex phenomena. The model we consider accounts for population decomposition and interaction in industrial districts. The approach we follow is twofold: on one hand, we implement progressively more complex models using different approaches (vertical multiple implementations); on the other hand, we replicate the agent-based simulation with different implementations using jESOF, JAS and plain C++ (horizontal multiple implementations). By using both different implementation approaches and a multiple implementation strategy, we highlight the benefits that arise when the same model is implemented in radically different simulation environments, comparing the advantages of multiple modeling implementations. Our findings provide some important suggestions in terms of model validation, showing how models of complex systems tend to be extremely sensitive to implementation details. Finally, we point out how statistical techniques may be necessary when comparing different platform implementations of a single model.
    Keywords: Replication of Models; Model Validation; Agent-Based Simulation
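    A hedged sketch of the kind of cross-platform comparison the paper argues for: run replicated simulations of the same model on two platforms (here stand-in functions), collect one summary statistic per run, and compare the two samples. The model functions and statistics below are illustrative assumptions, not the paper's districts model.

```python
import random
import statistics

# Illustrative "horizontal" validation: the same stochastic model is run on
# two hypothetical platforms whose numerics differ slightly, and the replicated
# summary statistics are compared. A formal two-sample test could replace the
# simple mean/stdev comparison shown here.

def run_model_platform_a(seed):
    random.seed(seed)
    return sum(random.gauss(1.0, 0.1) for _ in range(100))   # e.g. district output

def run_model_platform_b(seed):
    random.seed(seed)
    # Same model, but a different accumulation order on platform B.
    return sum(sorted(random.gauss(1.0, 0.1) for _ in range(100)))

def compare(n_runs=50):
    a = [run_model_platform_a(s) for s in range(n_runs)]
    b = [run_model_platform_b(s) for s in range(n_runs)]
    return (statistics.mean(a), statistics.stdev(a),
            statistics.mean(b), statistics.stdev(b))

if __name__ == "__main__":
    print(compare())
```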

    Toward Computational Motivation for Multi-Agent Systems and Swarms

    Motivation is a crucial part of animal and human mental development, fostering competence, autonomy, and open-ended development. Motivational constructs have proved to be an integral part of explaining human and animal behavior. Computer scientists have proposed various computational models of motivation for artificial agents, with the aim of building artificial agents capable of autonomous goal generation. Multi-agent systems and swarm intelligence are natural extensions of the individual-agent setting. However, only a few works focus on motivation theories in multi-agent or swarm settings. In this study, we review the settings, mechanisms, functions and evaluation methods of current computational models of motivation, and discuss how we can produce systems with new kinds of functions not possible using individual agents. We describe in detail this open area of research and the major research challenges it holds.
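    As one concrete, purely illustrative example of the kind of computational motivation mechanism such a review surveys (not taken from the paper), a count-based novelty signal rewards rarely visited states and can drive simple autonomous goal generation:

```python
from collections import defaultdict

# Illustrative count-based novelty model of intrinsic motivation: states that
# have been visited less often yield a larger intrinsic reward, and the least
# visited candidate state is proposed as the next goal.

class NoveltyMotivation:
    def __init__(self):
        self.visit_counts = defaultdict(int)

    def intrinsic_reward(self, state):
        self.visit_counts[state] += 1
        return 1.0 / self.visit_counts[state]      # decays with familiarity

    def propose_goal(self, candidate_states):
        # Autonomous goal generation: target the least-visited candidate state.
        return min(candidate_states, key=lambda s: self.visit_counts[s])

if __name__ == "__main__":
    m = NoveltyMotivation()
    for s in ["a", "a", "b"]:
        print(s, m.intrinsic_reward(s))
    print("next goal:", m.propose_goal(["a", "b", "c"]))
```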

    Stigmergic epistemology, stigmergic cognition

    To know is to cognize; to cognize is to be a culturally bounded, rationality-bounded and environmentally located agent. Knowledge and cognition are thus dual aspects of human sociality. If social epistemology has the formation, acquisition, mediation, transmission and dissemination of knowledge in complex communities of knowers as its subject matter, then its third-party character is essentially stigmergic. In its most generic formulation, stigmergy is the phenomenon of indirect communication mediated by modifications of the environment. Extending this notion, one might conceive of social stigmergy as the extra-cranial analog of an artificial neural network providing epistemic structure. This paper recommends a stigmergic framework for social epistemology to account for the supposed tension between individual action, wants and beliefs and the social corpora. We also propose that the so-called "extended mind" thesis offers the requisite stigmergic cognitive analog to stigmergic knowledge. Stigmergy, as a theory of interaction within complex systems theory, is illustrated through an example that runs on a particle swarm optimization algorithm.
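    The particle swarm optimization example lends itself to a short sketch: particles never exchange messages directly, but read and update a shared record (the swarm's global best), which then biases every particle's movement; this shared record plays the role of the stigmergic trace. The objective function and parameters below are illustrative, not taken from the paper.

```python
import random

# Minimal particle swarm optimisation: indirect interaction happens only
# through the shared global best, an "environmental" trace that all particles
# read and occasionally improve.

def pso(objective, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = min(pbest, key=objective)[:]            # the shared "stigmergic" trace

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):          # deposit a better trace
                    gbest = pos[i][:]
    return gbest

if __name__ == "__main__":
    sphere = lambda x: sum(v * v for v in x)
    print(pso(sphere))   # should approach the origin
```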