    Conjectural Equilibrium in Multiagent Learning

    Learning in a multiagent environment is complicated by the fact that as other agents learn, the environment effectively changes. Moreover, other agents' actions are often not directly observable, and the actions taken by the learning agent can strongly bias which range of behaviors it encounters. We define the concept of a conjectural equilibrium, in which all agents' expectations are realized and each agent responds optimally to its expectations. We present a generic multiagent exchange situation in which competitive behavior constitutes a conjectural equilibrium. We then introduce an agent that pursues a more sophisticated strategic learning approach, building a model of the response of other agents. We find that the system reliably converges to a conjectural equilibrium, but that the final result is highly sensitive to the agent's initial beliefs. In essence, the strategic learner's actions tend to fulfill its expectations. Depending on the starting point, the agent may be better or worse off than had it not attempted to learn a model of the other agents at all.
    Peer Reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/46952/1/10994_2004_Article_186718.pd
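    The self-fulfilling-expectations effect can be made concrete with a minimal toy sketch in Python (this is not the paper's exchange model; every name and parameter value below is an illustrative assumption). A learner conjectures a linear price response p ≈ a_hat - b_hat*q, corrects the conjecture only at the actions it actually plays, and best-responds to it:

        import numpy as np

        rng = np.random.default_rng(0)

        def market_price(q):
            # True (unknown) aggregate response of the other agents.
            return 10.0 - 1.5 * q + rng.normal(scale=0.05)

        class ConjecturalLearner:
            """Holds a linear conjecture p ~ a_hat - b_hat * q and best-responds to it."""

            def __init__(self, a_hat, b_hat, cost=1.0, lr=0.05):
                self.a_hat, self.b_hat, self.cost, self.lr = a_hat, b_hat, cost, lr

            def act(self):
                # Best response to the conjecture: maximize q * (a_hat - b_hat*q) - cost*q.
                return max(0.0, (self.a_hat - self.cost) / (2.0 * self.b_hat))

            def update(self, q, p):
                # Gradient step on the squared prediction error at the action actually
                # played; no other part of the response curve is ever observed.
                err = (self.a_hat - self.b_hat * q) - p
                self.a_hat -= self.lr * err
                self.b_hat += self.lr * err * q
                self.b_hat = max(self.b_hat, 0.1)  # keep the conjectured slope positive

        for a0, b0 in [(8.0, 1.0), (12.0, 3.0)]:  # two different initial beliefs
            agent = ConjecturalLearner(a0, b0)
            for _ in range(2000):
                q = agent.act()
                agent.update(q, market_price(q))
            q = agent.act()
            print(f"init ({a0}, {b0}) -> q = {q:.2f}, "
                  f"conjectured price = {agent.a_hat - agent.b_hat * q:.2f}")

    Because the conjecture is corrected only where the learner plays, the prediction error vanishes at the chosen action (expectations are realized), while the fitted pair (a_hat, b_hat), and hence the action itself, stays pinned to where the beliefs started: the two initializations settle on different equilibria.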

    Learning in Multi-Agent Information Systems - A Survey from IS Perspective

    Multiagent systems (MAS), long studied in artificial intelligence, have recently become popular in mainstream IS research. This resurgence in MAS research can be attributed to two phenomena: the spread of concurrent and distributed computing with the advent of the web; and a deeper integration of computing into organizations and the lives of people, which has led to increasing collaboration among large collections of interacting people and large groups of interacting machines. However, it is next to impossible to specify these systems correctly and completely a priori, especially in complex environments. The only feasible way of coping with this problem is to endow the agents with learning, i.e., an ability to improve their individual and/or system performance over time. Learning in MAS has therefore become one of the important areas of research within MAS. In this paper we survey important contributions made by IS researchers to the field of learning in MAS and outline directions for future research in this area.

    Learning from failure

    We study decentralized learning in organizations. Decentralization is captured through a symmetry constraint on agents' strategies. Among such attainable strategies, we solve for optimal and equilibrium strategies. We model the organization as a repeated game with imperfectly observable actions. A fixed but unknown subset of action profiles are successes, and all other action profiles are failures. The game is played until either there is a success or the time horizon is reached. For any time horizon, including infinity, we demonstrate the existence of optimal attainable strategies and show that they are Nash equilibria. For some time horizons, we can solve explicitly for the optimal attainable strategies and show uniqueness. The solution connects the learning behavior of agents to the fundamentals that characterize the organization: agents respond more slowly to failure as the future becomes more important, as the size of the organization increases, and as the probability of success decreases.
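    The mechanics of the model are easy to simulate (a toy sketch, not the paper's equilibrium analysis; simulate, switch_prob, and all parameter values below are illustrative assumptions). A random subset of joint action profiles is marked as successes, agents observe only success or failure, and every agent follows the same symmetric rule: after a failure, keep your action with probability 1 - switch_prob, otherwise resample uniformly:

        import itertools, random

        def simulate(n_agents=4, n_actions=3, p_success=0.1, switch_prob=0.5,
                     horizon=200, trials=2000, seed=1):
            """Average number of rounds until the first success."""
            rng = random.Random(seed)
            profiles = list(itertools.product(range(n_actions), repeat=n_agents))
            total = 0
            for _ in range(trials):
                # A fixed but unknown subset of joint action profiles are successes.
                successes = {p for p in profiles if rng.random() < p_success}
                actions = [rng.randrange(n_actions) for _ in range(n_agents)]
                for t in range(1, horizon + 1):
                    if tuple(actions) in successes:
                        break
                    # Symmetric response to failure: each agent independently keeps
                    # its action with prob. 1 - switch_prob, else resamples.
                    actions = [a if rng.random() >= switch_prob else rng.randrange(n_actions)
                               for a in actions]
                total += t
            return total / trials

        for s in (0.1, 0.5, 1.0):
            print(f"switch_prob={s}: average rounds to success ~ {simulate(switch_prob=s):.1f}")

    Sweeping switch_prob gives a feel for the comparative statics in the abstract: a slower response to failure sacrifices exploration but keeps the joint profile from churning, which is one intuition for why larger organizations and more patient agents respond more slowly.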
