
    Capturing Social Embeddedness: a constructivist approach

    A constructivist approach is applied to characterising social embeddedness and to the design of a simulation of social agents that displays the social embedding of agents. Social embeddedness is defined as the extent to which modelling the behaviour of an agent requires the inclusion of the society of agents as a whole. Possible effects of social embedding, and ways to check for it, are discussed briefly. A model of co-developing agents is exhibited: an extension of Brian Arthur's `El Farol Bar' model that adds learning based on a GP algorithm and introduces communication between agents. Some indicators of social embedding are analysed and some possible causes of social embedding are discussed.
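    The base model the abstract extends can be sketched in a few lines. The following is a minimal version of Arthur's El Farol Bar problem only, not the paper's GP-learning or communication extensions; the predictor pool, threshold, and scoring rule are illustrative assumptions.

```python
import random

N, THRESHOLD, WEEKS = 100, 60, 52   # assumed parameters, not the paper's

def make_predictor():
    # a predictor forecasts next attendance as the mean of the last k weeks
    k = random.randint(1, 5)
    return lambda hist, k=k: sum(hist[-k:]) / min(k, len(hist))

class Agent:
    def __init__(self):
        self.predictors = [make_predictor() for _ in range(3)]
        self.scores = [0.0] * 3          # cumulative error; lower is better
        self.forecast = []

    def decide(self, history):
        self.forecast = [p(history) for p in self.predictors]
        best = min(range(3), key=lambda i: self.scores[i])
        return self.forecast[best] < THRESHOLD   # attend if not crowded

    def update(self, attendance):
        for i, f in enumerate(self.forecast):
            self.scores[i] += abs(f - attendance)

agents = [Agent() for _ in range(N)]
history = [random.randint(0, N) for _ in range(5)]  # seed history
for _ in range(WEEKS):
    attendance = sum(a.decide(history) for a in agents)
    for a in agents:
        a.update(attendance)
    history.append(attendance)
```

    The inductive-reasoning structure (agents scoring and switching among simple predictors) is what the paper replaces with GP-evolved models.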

    Efficient Bayesian Social Learning on Trees

    We consider a set of agents who attempt to iteratively learn the 'state of the world' from their neighbors in a social network. Each agent initially receives a noisy observation of the true state of the world. The agents then repeatedly 'vote' and observe the votes of some of their peers, from which they gain more information. The agents' calculations are Bayesian and aim to myopically maximize the expected utility at each iteration. This model, introduced by Gale and Kariv (2003), is a natural approach to learning on networks. However, it has been criticized, chiefly because the agents' decision rule appears to become computationally intractable as the number of iterations advances. Let d be the maximum degree and t be the iteration number. For instance, a dynamic programming approach (part of this work) has running time that is exponentially large in \min(n, (d-1)^t), where n is the number of agents. We provide a new algorithm to perform the agents' computations on locally tree-like graphs. Our algorithm uses the dynamic cavity method to drastically reduce computational effort: the effort needed per agent is exponential only in O(td) (note that the number of possible information sets of a neighbor at time t is itself exponential in td). Under appropriate assumptions on the rate of convergence, we deduce that each agent need spend only polylogarithmic (in 1/\epsilon) computational effort to approximately learn the true state of the world with error probability \epsilon, on regular trees of degree at least five. We provide numerical and other evidence to justify our assumption on the convergence rate. We extend our results in various directions, including loopy graphs. Our results indicate efficiency of iterative Bayesian social learning in a wide range of situations, contrary to widely held beliefs.
    Comment: 11 pages, 1 figure, submitted
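    The iterative voting setup can be illustrated on a small tree. This sketch substitutes a simple majority heuristic for the paper's exact Bayesian computation (the dynamic cavity method is far more involved); the graph, noise level, and update rule are assumptions for demonstration only.

```python
import random

random.seed(0)
TRUE_STATE = +1
NOISE = 0.3                         # probability a private signal is flipped

# a small binary tree on 7 nodes: undirected adjacency lists
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
nbrs = {v: [] for v in range(7)}
for u, v in edges:
    nbrs[u].append(v)
    nbrs[v].append(u)

# each agent starts with a noisy +/-1 observation of the true state
signals = [TRUE_STATE if random.random() > NOISE else -TRUE_STATE
           for _ in range(7)]
votes = signals[:]                  # round 0: vote equals own signal

for t in range(10):                 # iterate voting rounds
    new = []
    for v in range(7):
        # tally own signal plus neighbours' most recent votes
        tally = signals[v] + sum(votes[u] for u in nbrs[v])
        new.append(1 if tally > 0 else -1 if tally < 0 else signals[v])
    votes = new

accuracy = sum(v == TRUE_STATE for v in votes) / 7
```

    The point of the paper is that even the fully Bayesian version of this update can be computed efficiently on locally tree-like graphs.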

    Sustainability Learning in Natural Resource Use and Management

    Award for Research Excellence, Social Sciences, 2008.
    We contribute to the normative discussion on sustainability learning and provide a theoretical integrative framework intended to capture the main components and interrelations of the learning required for social learning to become sustainability learning. We demonstrate how this framework has been operationalized in a participatory modeling interface to support processes of integrated natural resource assessment and management. The key modeling components of our view are: structure (S), energy and resources (E), information and knowledge (I), social-ecological change (C), and the size, thresholds, and connections of different social-ecological systems. Our approach attempts to overcome many of the cultural dualisms that exist in the way social and ecological systems are perceived and that affect many of the most common definitions of sustainability. It also emphasizes the issue of limits within a total social-ecological system and takes a multiscale, agent-based perspective. Sustainability learning differs from social learning insofar as not all outcomes of social learning processes necessarily improve what we consider essential for the long-term sustainability of social-ecological systems, namely, the co-adaptive systemic capacity of agents to anticipate and deal with the unintended, undesired, and irreversible negative effects of development. Hence, the main difference between sustainability learning and social learning lies in the content of what is learned and the criteria used to assess that content; these are necessarily related to increasing the capacity of agents to manage, in an integrative and organic way, the total social-ecological system of which they form a part.
    The concept of sustainability learning and the SEIC social-ecological framework can be useful to assess and communicate the effectiveness of multiple agents in halting or reversing the destructive trends affecting the life-support systems upon which all humans depend.
    Synthesis, part of a Special Feature on Social Learning in Water Resources Management
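    The SEIC components listed in the abstract could be encoded as a data structure for an agent-based model. The field names below follow the abstract (structure, energy/resources, information/knowledge, change, thresholds, nested subsystems), but the concrete types and the limits check are hypothetical, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SocialEcologicalSystem:
    # S: network and institutional structure
    structure: dict = field(default_factory=dict)
    # E: stocks and flows of energy and resources
    energy_resources: float = 0.0
    # I: shared information and knowledge
    information: dict = field(default_factory=dict)
    # C: rate of social-ecological change
    change_rate: float = 0.0
    # limit of the total system (the abstract's emphasis on thresholds)
    threshold: float = 1.0
    # multiscale nesting of smaller social-ecological systems
    subsystems: list = field(default_factory=list)

    def within_limits(self) -> bool:
        # hypothetical sustainability check: resource use below the threshold
        return self.energy_resources <= self.threshold
```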

    Foresighted policy gradient reinforcement learning: solving large-scale social dilemmas with rational altruistic punishment

    Many important and difficult problems can be modeled as “social dilemmas”, like Hardin's Tragedy of the Commons or the classic iterated Prisoner's Dilemma. It is well known that in these problems it can be rational for self-interested agents to promote and sustain cooperation by altruistically dispensing costly punishment to other agents, thus maximizing their own long-term reward. However, self-interested agents using most current multi-agent reinforcement learning algorithms will not sustain cooperation in social dilemmas: the algorithms do not sufficiently capture the consequences for the agent's reward of its interactions with other agents. Recent, more foresighted algorithms specifically account for such expected consequences and have been shown to work well for the small-scale Prisoner's Dilemma. However, this approach quickly becomes intractable for larger social dilemmas. Here, we advance on this work and develop a “teach/learn” stateless foresighted policy gradient reinforcement learning algorithm that applies to social dilemmas with negative, unilateral side-payments in the form of costly punishment. In this setting, the algorithm allows agents to learn the most rewarding actions to take with respect to both the dilemma (Cooperate/Defect) and the “teaching” of other agents' behavior through the dispensing of punishment. Unlike other algorithms, this approach scales well to large settings like the Tragedy of the Commons. We show for a variety of settings that large groups of self-interested agents using this algorithm will robustly find and sustain cooperation in social dilemmas where adaptive agents can punish the behavior of other, similarly adaptive agents.
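    The combination of a stateless policy gradient with costly punishment can be sketched in a toy public-goods game. This is a generic REINFORCE-style update in the spirit of the setup the abstract describes, not the paper's foresighted algorithm; payoffs, learning rate, and parameters are illustrative assumptions.

```python
import math
import random

random.seed(1)
N, MULT, FINE, COST, LR, ROUNDS = 8, 1.6, 3.0, 1.0, 0.05, 2000

def sigmoid(x):
    x = max(-30.0, min(30.0, x))    # clip to avoid overflow
    return 1.0 / (1.0 + math.exp(-x))

theta_c = [0.0] * N                 # logit of P(cooperate)
theta_p = [0.0] * N                 # logit of P(punish a defector)

for _ in range(ROUNDS):
    # each agent stochastically cooperates (contributing 1) or defects
    pc = [sigmoid(t) for t in theta_c]
    coop = [random.random() < p for p in pc]
    pot = sum(coop) * MULT / N      # multiplied public good, shared equally
    reward = [pot - (1.0 if c else 0.0) for c in coop]

    # punishers pay COST per defector punished; punished defectors lose FINE
    pp = [sigmoid(t) for t in theta_p]
    pun = [random.random() < p for p in pp]
    for i in range(N):
        if pun[i]:
            for j in range(N):
                if j != i and not coop[j]:
                    reward[i] -= COST
                    reward[j] -= FINE

    # REINFORCE update for Bernoulli policies: grad log-prob = action - p
    for i in range(N):
        theta_c[i] += LR * reward[i] * (coop[i] - pc[i])
        theta_p[i] += LR * reward[i] * (pun[i] - pp[i])

coop_rate = sum(sigmoid(t) for t in theta_c) / N
```

    Without the punishment block, defection dominates (MULT/N < 1); the fine is what can make cooperation individually rewarding, which is the dynamic the paper's algorithm is built to exploit at scale.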

    Genetic Action Trees: A New Concept for Social and Economic Simulation

    Multi-Agent Based Simulation is a branch of Distributed Artificial Intelligence that builds the base for computer simulations connecting the micro and macro levels of social and economic scenarios. This paper presents a new method of modelling the formation and change of patterns of action in social systems with the help of Multi-Agent Simulations. The approach is based on two scientific concepts: Genetic Algorithms [Goldberg 1989, Holland 1975] and the theory of Action Trees [Goldman 1971]. Genetic Algorithms were developed following the biological mechanisms of evolution. Action Trees are used in analytic philosophy for the structural description of actions. The theory of Action Trees makes use of the observation from linguistic analysis that the preposition “by” induces a semi-order on a set of actions. By applying Genetic Algorithms to the attributes of the actions of an Action Tree, an intuitively simple algorithm can be developed with which one can describe the learning behaviour of agents and the changes in action spaces. Using an extremely simplified economic action space, here called “SMALLWORLD”, it is shown with the aid of this method how simulated agents react to the qualities and changes of their environment. Thus, one manages to endogenously evoke intuitively comprehensible changes in the agents' actions. In these simulations one can observe that the agents move from a barter to a monetary economy because of its higher effectiveness, or that they change their behaviour towards actions of fraud.
    Keywords: multi-agent system, genetic algorithms, action trees, learning, decision making, economic and social behaviour, distributed artificial intelligence
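    The two ingredients the abstract names can be combined in a minimal sketch: an action tree whose child-to-parent links encode the “by”-induced semi-order, and a genetic algorithm over binary action attributes. The tree, payoffs, and GA settings are illustrative assumptions, not the paper's SMALLWORLD model.

```python
import random

random.seed(2)

# "paying" is done BY "handing over money", which is done BY "moving an arm";
# the child -> parent map encodes the by-relation (a semi-order on actions)
action_tree = {"pay": "hand_over_money", "hand_over_money": "move_arm",
               "barter": "hand_over_goods", "hand_over_goods": "move_arm"}

def ancestors(action):
    chain = []
    while action in action_tree:
        action = action_tree[action]
        chain.append(action)
    return chain

# GA over binary attribute vectors attached to the top-level actions
ACTIONS = ["pay", "barter"]
POP, GENS = 20, 30
payoff = {"pay": 3, "barter": 1}    # assumed: a monetary economy pays better

def fitness(genome):
    # genome[i] = 1 means the agent has adopted action ACTIONS[i]
    return sum(payoff[a] for a, g in zip(ACTIONS, genome) if g)

pop = [[random.randint(0, 1) for _ in ACTIONS] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                    # truncation selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(ACTIONS) + 1)
        child = a[:cut] + b[cut:]               # one-point crossover
        if random.random() < 0.1:               # mutation
            i = random.randrange(len(ACTIONS))
            child[i] ^= 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
```

    Under these assumed payoffs, selection drives the population toward the monetary action, mirroring the barter-to-money transition the abstract reports.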