2,676 research outputs found

    The emergence of social inequality: a co-evolutionary analysis.

    Social inequality is an important issue at the core of economic thought. Based on a stylized pie-sharing game, this article proposes a co-evolutionary computational model to study the interaction between social classes, aiming to answer the following questions: Why do social classes exist, and how do they affect social efficiency and individual effectiveness? Why are wealth inequality and social exclusion persistent? Methodologically, the article extends the pie-sharing game to include a network of interactions and classes, social mobility, evolving wealth, and learning agents. The results show that social inequality is self-emergent. Surprisingly, in the simulations, the existence of social classes increases individual effectiveness, mainly benefiting the poor. Nonetheless, flatter societies (where fewer social classes exist) have a higher average individual effectiveness. As expected, wealth inequality is persistent in hierarchical societies, and the upper classes keep a higher proportion of wealth. Furthermore, the analysis extends the bargaining game to include social mobility, showing that, surprisingly, mobility increases the robustness of the class system and of wealth inequality. Finally, the simulation of dual societies shows that these are an evolutionary equilibrium: social exclusion is persistent and accepted by both wealthy and poor individuals, and resources are used efficiently.
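    The pie-sharing game underlying this model can be sketched as a Nash demand game over a unit pie. The class labels and demand values below are our own hypothetical illustration (not taken from the paper) of how a class convention can coordinate otherwise incompatible claims:

```python
def pie_sharing_round(demand_a, demand_b):
    """Nash demand game: each player claims a share of a unit pie.
    If the claims are compatible (sum <= 1) each gets what it asked
    for; otherwise bargaining fails and both get nothing."""
    if demand_a + demand_b <= 1.0:
        return demand_a, demand_b
    return 0.0, 0.0

# Hypothetical class convention: high-status players habitually demand more.
DEMAND = {"upper": 0.7, "lower": 0.3}

def play(class_a, class_b):
    return pie_sharing_round(DEMAND[class_a], DEMAND[class_b])

# Mixed-class pairings split the pie (unequally but without waste) ...
assert play("upper", "lower") == (0.7, 0.3)
# ... while two upper-class players over-claim and destroy the pie.
assert play("upper", "upper") == (0.0, 0.0)
```

The sketch shows why a class convention can be efficient in mixed pairings even while it locks in unequal shares, which is the tension the article's simulations explore.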

    Coordination in Networks Formation: Experimental Evidence on Learning and Salience

    We present experiments on repeated non-cooperative network formation games, based on Bala and Goyal (2000). We treat the one-way and two-way flow models, each for high and low link costs. Both models exhibit multiple equilibria and coordination problems. We conduct experiments under various conditions which control for salient labeling and learning dynamics. Contrary to previous experiments, we find that coordination on non-empty Strict Nash equilibria is not an easy task for subjects to achieve, even in the mono-directional model, where the Strict Nash equilibrium is a wheel. We find that salience significantly helps coordination, but only when subjects are pre-instructed to think of the wheel network as a reasonable way to play the networking game. Evidence on learning behavior supports subjects choosing strategies consistent with various learning rules, chiefly Reinforcement and Fictitious Play.
    Keywords: Experiments, Networks, Behavioral game theory, Salience, Learning dynamics
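    The one-way flow payoff from Bala and Goyal (2000) can be sketched as follows. The normalization (unit benefit per observed agent, counting oneself) is an assumption for illustration; the experiments may use a different scaling:

```python
def reachable(adj, i):
    """Agents whose information i can access via directed links
    (one-way flow), including i itself."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def payoff(adj, i, cost):
    """One-way flow payoff: observed agents minus linking costs."""
    return len(reachable(adj, i)) - cost * len(adj.get(i, []))

# A wheel on 4 agents: each sponsors a single link to the next agent,
# yet every agent observes the whole society.
n, cost = 4, 0.5
wheel = {i: [(i + 1) % n] for i in range(n)}
for i in range(n):
    assert payoff(wheel, i, cost) == n - cost  # everyone earns 3.5
```

The wheel's appeal as a Strict Nash equilibrium is visible here: full observation is achieved with the minimum of one sponsored link per agent.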

    Human-Agent Decision-making: Combining Theory and Practice

    Extensive work has been conducted in both game theory and logic to model strategic interaction. An important question is whether we can use these theories to design agents that interact with people. On the one hand, the theories provide a formal design specification for agent strategies. On the other hand, people do not necessarily play in accordance with these strategies, and their behavior is affected by a multitude of social and psychological factors. In this paper we will consider whether strategies implied by theories of strategic behavior can be used by automated agents that interact proficiently with people. We will focus on automated agents that we built to interact with people in two negotiation settings: bargaining and deliberation. For bargaining we will study game-theory-based equilibrium agents, and for deliberation we will discuss logic-based argumentation theory. We will also consider security games and persuasion games and will discuss the benefits of using equilibrium-based agents.
    Comment: In Proceedings TARK 2015, arXiv:1606.0729

    Theory of mind and decision science: Towards a typology of tasks and computational models

    The ability to form a Theory of Mind (ToM), i.e., to theorize about others’ mental states to explain and predict behavior in relation to attributed intentional states, constitutes a hallmark of human cognition. These abilities are multi-faceted and include a variety of different cognitive sub-functions. Here, we focus on decision processes in social contexts and review a number of experimental and computational modeling approaches in this field. We provide an overview of experimental accounts and formal computational models with respect to two dimensions: interactivity and uncertainty. Thereby, we aim at capturing the nuances of ToM functions in the context of social decision processes. We suggest that ToM engagement and multiplexing increase as social cognitive decision-making tasks become more interactive and uncertain. We propose that representing others as intentional and goal-directed agents who perform consequential actions is elicited only at the edges of these two dimensions. Further, we argue that computational models of valuation and beliefs follow these dimensions to best allow researchers to effectively model sophisticated ToM processes. Finally, we relate this typology to neuroimaging findings in neurotypical (NT) humans, studies of persons with autism spectrum (AS), and studies of nonhuman primates.

    Lab Labor: What Can Labor Economists Learn from the Lab?

    This paper surveys the contributions of laboratory experiments to labor economics. We begin with a discussion of methodological issues: why (and when) is a lab experiment the best approach; how do laboratory experiments compare to field experiments; and what are the main design issues? We then summarize the substantive contributions of laboratory experiments to our understanding of principal-agent interactions, social preferences, union-firm bargaining, arbitration, gender differentials, discrimination, job search, and labor markets more generally.
    Keywords: personnel economics, principal-agent theory, laboratory experiments, labor economics

    A Note on the Equivalence of Rationalizability Concepts in Generalized Nice Games

    Moulin (1984) describes the class of nice games, for which the solution concept of point-rationalizability coincides with iterated elimination of strongly dominated strategies. As a consequence, nice games have the desirable property that all rationalizability concepts determine the same strategic solution. However, nice games are characterized by rather strong assumptions. For example, only single-valued best responses are admitted, and the individual strategy sets have to be convex and compact subsets of the real line R^1. This note shows that the equivalence of all rationalizability concepts extends to multi-valued best-response correspondences. The surprising finding is that equivalence does not hold for individual strategy sets that are compact and convex subsets of R^n with n > 1.

    Simple Reinforcement Learning Agents: Pareto Beats Nash in an Algorithmic Game Theory Study

    Repeated play in games by simple adaptive agents is investigated. The agents use Q-learning, a special form of reinforcement learning, to direct the learning of behavioral strategies in a number of 2×2 games. The agents effectively maximize the total wealth extracted, which often leads to Pareto-optimal outcomes. When the reward signals are sufficiently clear, Pareto-optimal outcomes are largely achieved. The effect can select Pareto outcomes that are not Nash equilibria, and it can select Pareto-optimal outcomes among Nash equilibria.
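    A minimal sketch of the setup, assuming stateless ε-greedy Q-learners with zero-initialized values (the study's exact parameterization and reward transformations may differ). With these plain settings, independent learners in the Prisoner's Dilemma typically drift to mutual defection, the Nash baseline against which clearer reward signals can select Pareto outcomes:

```python
import random

# Stylized Prisoner's Dilemma payoffs: (row, column) for actions C/D.
PAYOFF = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 4),
    ("D", "C"): (4, 0), ("D", "D"): (1, 1),
}
ACTIONS = ["C", "D"]

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Two independent stateless Q-learners repeatedly playing a 2x2 game."""
    rng = random.Random(seed)
    q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]
    for _ in range(episodes):
        # Epsilon-greedy action selection for each agent.
        acts = [
            rng.choice(ACTIONS) if rng.random() < eps
            else max(q[i], key=q[i].get)
            for i in range(2)
        ]
        rewards = PAYOFF[tuple(acts)]
        for i in range(2):
            # Stateless Q-update: move the estimate toward the reward.
            q[i][acts[i]] += alpha * (rewards[i] - q[i][acts[i]])
    return q
```

With these standard parameters the learned values end up favoring defection for both agents; the paper's point is that suitably clear reward signals can instead steer such simple learners toward Pareto-optimal play.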

    Cooperative artificial intelligence

    In the future, artificial learning agents are likely to become increasingly widespread in our society. They will interact with both other learning agents and humans in a variety of complex settings, including social dilemmas. We argue that there is a need for research on the intersection between game theory and artificial intelligence, with the goal of achieving cooperative artificial intelligence that can navigate social dilemmas well. We consider the problem of how an external agent can promote cooperation between artificial learners by distributing additional rewards and punishments based on observing the learners' actions. We propose a rule for automatically learning how to create the right incentives by considering the players' anticipated parameter updates. Using this learning rule leads to cooperation with high social welfare in matrix games in which the agents would otherwise learn to defect with high probability. We show that the resulting cooperative outcome is stable in certain games even if the planning agent is turned off after a given number of episodes, while other games require ongoing intervention to maintain mutual cooperation. Finally, we reflect on what the goals of multi-agent reinforcement learning should be in the first place, and discuss the necessary building blocks towards the goal of building cooperative AI.
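    The planner's intervention can be illustrated with a fixed cooperation bonus in the Prisoner's Dilemma. This is a deliberate simplification: the proposed rule learns the incentives from the players' anticipated parameter updates, whereas here the bonus is hand-set just to show why extra rewards can flip the game's incentive structure:

```python
# Prisoner's Dilemma payoffs from the row player's view (symmetric game).
BASE = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}

def shaped_reward(own, other, bonus=2.0):
    """A planning agent pays an extra `bonus` to any agent that cooperated.
    Once the bonus exceeds the temptation gap (1 here), cooperating
    becomes the dominant action and learners no longer drift to defection."""
    return BASE[(own, other)] + (bonus if own == "C" else 0.0)

# Without shaping, defection dominates: 4 > 3 and 1 > 0.
assert BASE[("D", "C")] > BASE[("C", "C")]
assert BASE[("D", "D")] > BASE[("C", "D")]
# With the bonus, cooperating is the better reply to either action:
assert shaped_reward("C", "C") > shaped_reward("D", "C")  # 5 > 4
assert shaped_reward("C", "D") > shaped_reward("D", "D")  # 2 > 1
```

Whether the shaped cooperative outcome survives after the planner is switched off is exactly the stability question the abstract raises: it depends on the game, not on the bonus alone.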