9 research outputs found

    Arguing Using Opponent Models

    Get PDF
    Peer reviewed, Postprint

    Learning in Multi-Agent Information Systems - A Survey from IS Perspective

    Get PDF
    Multiagent systems (MAS), long studied in artificial intelligence, have recently become popular in mainstream IS research. This resurgence in MAS research can be attributed to two phenomena: the spread of concurrent and distributed computing with the advent of the web, and a deeper integration of computing into organizations and the lives of people, which has led to increasing collaborations among large collections of interacting people and large groups of interacting machines. However, it is next to impossible to specify these systems correctly and completely a priori, especially in complex environments. The only feasible way of coping with this problem is to endow the agents with learning, i.e., the ability to improve their individual and/or system performance over time. Learning in MAS has therefore become one of the important areas of research within MAS. In this paper we survey important contributions made by IS researchers to the field of learning in MAS and outline directions for future research in this area.
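
    As a concrete picture of what "endowing agents with learning" means in practice, the sketch below shows two independent Q-learning agents improving their joint performance over repeated plays of a small coordination game. The game, parameter values, and class names are illustrative assumptions of ours, not drawn from the survey.

```python
import random
from collections import defaultdict

# Hypothetical 2-agent coordination game: each agent picks "a" or "b"
# and both are rewarded only when their choices match.
ACTIONS = ["a", "b"]

def reward(a0: str, a1: str) -> float:
    return 1.0 if a0 == a1 else 0.0

class IndependentQLearner:
    """Learns from its own reward signal alone, treating the other
    agent as part of the environment (no joint-action model)."""

    def __init__(self, alpha: float = 0.1, epsilon: float = 0.1):
        self.q = defaultdict(float)  # action -> estimated value
        self.alpha = alpha           # learning rate
        self.epsilon = epsilon       # exploration probability

    def act(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])

    def update(self, action: str, r: float) -> None:
        # Stateless (single-state) Q-learning update.
        self.q[action] += self.alpha * (r - self.q[action])

agents = [IndependentQLearner(), IndependentQLearner()]
for _ in range(5000):
    a0, a1 = agents[0].act(), agents[1].act()
    r = reward(a0, a1)
    agents[0].update(a0, r)
    agents[1].update(a1, r)

# Both agents typically converge on the same action, i.e. system
# performance improves with experience.
print([dict(agent.q) for agent in agents])
```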

    Existence of Multiagent Equilibria with Limited Agents

    Full text link
    Multiagent learning is a necessary yet challenging problem as multiagent systems become more prevalent and environments become more dynamic. Much of the groundbreaking work in this area draws on notable results from game theory, in particular the concept of Nash equilibria. Learners that directly learn an equilibrium obviously rely on the existence of equilibria. Learners that instead seek to play optimally with respect to the other players also depend upon equilibria, since equilibria are fixed points for learning. From another perspective, agents with limitations are real and common. These may be undesired physical limitations as well as self-imposed rational limitations, such as abstraction and approximation techniques, used to make learning tractable. This article explores the interactions of these two important concepts: equilibria and limitations in learning. We introduce the question of whether equilibria continue to exist when agents have limitations. We look at the general effects limitations can have on agent behavior, and define a natural extension of equilibria that accounts for these limitations. Using this formalization, we make three major contributions: (i) a counterexample to the general existence of equilibria with limitations, (ii) sufficient conditions on limitations that preserve the existence of equilibria, and (iii) three general classes of games and limitations that satisfy these conditions. We then present empirical results from a specific multiagent learning algorithm applied to a specific instance of limited agents. These results demonstrate that learning with limitations is feasible when the conditions outlined by our theoretical analysis hold.
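
    For intuition, the extension of equilibria to limited agents can be sketched in standard game-theoretic notation. This is a paraphrase under symbols of our choosing; the article's formal definitions are more general.

```latex
% Agent i normally chooses from its full policy space \Pi_i^{full};
% a limitation restricts it to some subset \Pi_i \subseteq \Pi_i^{full}.
\[
  \pi^\ast = (\pi_1^\ast, \dots, \pi_n^\ast)
  \text{ is a restricted equilibrium iff}\quad
  \forall i,\ \forall \pi_i \in \Pi_i:\;
  V_i\!\left(\pi_i^\ast, \pi_{-i}^\ast\right)
  \;\ge\;
  V_i\!\left(\pi_i, \pi_{-i}^\ast\right),
\]
% where V_i is agent i's expected value and \pi_{-i}^\ast denotes the
% other agents' policies. Taking \Pi_i = \Pi_i^{full} recovers the usual
% Nash condition; the counterexample in the article shows that for
% arbitrary restrictions such fixed points need not exist.
```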

    Autonomous Agents Modelling Other Agents: A Comprehensive Survey and Open Problems

    Get PDF
    Much research in artificial intelligence is concerned with the development of autonomous agents that can interact effectively with other agents. An important aspect of such agents is the ability to reason about the behaviours of other agents, by constructing models which make predictions about various properties of interest (such as actions, goals, beliefs) of the modelled agents. A variety of modelling approaches now exist which vary widely in their methodology and underlying assumptions, catering to the needs of the different sub-communities within which they were developed and reflecting the different practical uses for which they are intended. The purpose of the present article is to provide a comprehensive survey of the salient modelling methods which can be found in the literature. The article concludes with a discussion of open problems which may form the basis for fruitful future research.
    Comment: Final manuscript (46 pages), published in Artificial Intelligence Journal. The arXiv version also contains a table of contents after the abstract, but is otherwise identical to the AIJ version. Keywords: autonomous agents, multiagent systems, modelling other agents, opponent modelling
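
    One of the simplest families of modelling methods in this literature is policy reconstruction from observed action frequencies, as in classical fictitious play. The sketch below is a minimal illustrative version; the class and method names are our own, not from the survey.

```python
from collections import Counter

class FrequencyOpponentModel:
    """Predicts an opponent's next action as the empirical distribution
    of its past actions (the model behind classical fictitious play)."""

    def __init__(self, actions):
        # Unit pseudo-counts (a Laplace prior) so no action gets
        # probability zero before it has been observed.
        self.counts = Counter({a: 1 for a in actions})

    def observe(self, action) -> None:
        self.counts[action] += 1

    def predict(self) -> dict:
        total = sum(self.counts.values())
        return {a: n / total for a, n in self.counts.items()}

    def best_response(self, payoff) -> str:
        # payoff(mine, theirs) -> float; maximise expected payoff
        # against the predicted action distribution.
        dist = self.predict()
        return max(
            self.counts,
            key=lambda mine: sum(p * payoff(mine, theirs)
                                 for theirs, p in dist.items()),
        )

# Usage: model a rock-paper-scissors opponent who overplays "rock".
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def payoff(mine, theirs):
    return 0.0 if mine == theirs else (1.0 if BEATS[theirs] == mine else -1.0)

model = FrequencyOpponentModel(["rock", "paper", "scissors"])
for seen in ["rock", "rock", "scissors", "rock"]:
    model.observe(seen)
print(model.best_response(payoff))  # -> "paper"
```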

    Self-organised multi-agent system for search and rescue operations

    Get PDF
    Autonomous multi-agent systems perform inadequately in time-critical missions when they exhaustively explore every location of the field in a single phase without selecting a pertinent strategy. This research addresses that problem by introducing a hierarchy of exploration strategies: agents explore an unknown search terrain with complex topology in multiple predefined stages, choosing a pertinent strategy at each stage based on their previous observations. Exploration inside unknown, cluttered, and confined environments such as collapsed buildings is one of the main challenges for search and rescue robots. To this end we introduce a novel exploration algorithm for multi-agent systems that performs a fast, fair, and thorough search while also resolving multi-agent traffic congestion. Our simulations were performed on test environments whose complexity is characterised by the fractal dimension of Brownian movements. The exploration stages are modelled on the arenas defined by the National Institute of Standards and Technology (NIST), which introduced three scenarios of progressive difficulty: yellow, orange, and red. This research concentrates on the red arena, which has the least structure and poses the greatest challenge to robot agility.
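
    The staged idea can be pictured as a strategy hierarchy in which each agent picks its next exploration behaviour from what it has observed so far. The following is a purely illustrative sketch; the strategy names, thresholds, and observation fields are our assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Observations:
    """What an agent has seen so far (illustrative summary fields)."""
    coverage: float          # fraction of the assigned area visited
    obstacle_density: float  # how cluttered the local terrain appears
    victims_detected: int

def choose_strategy(stage: int, obs: Observations) -> str:
    """Hierarchical selection: coarse, fast sweeps early; slower,
    more thorough behaviours later or where observations demand them."""
    if stage == 0:
        return "fast_frontier_sweep"      # cover ground quickly first
    if obs.victims_detected > 0:
        return "local_thorough_search"    # focus where evidence was found
    if obs.obstacle_density > 0.5:
        return "cautious_wall_following"  # cluttered, confined areas
    if obs.coverage < 0.8:
        return "fast_frontier_sweep"
    return "revisit_unexplored_pockets"

# Re-evaluating between stages lets each agent apply a pertinent
# strategy instead of exhaustively searching in a single phase.
print(choose_strategy(1, Observations(coverage=0.6,
                                      obstacle_density=0.7,
                                      victims_detected=0)))
```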

    Model-based Learning of Interaction Strategies in Multi-agent Systems

    No full text
    Agents that operate in a multi-agent system need an efficient strategy to handle their encounters with other agents involved. Searching for an optimal interaction strategy is a hard problem because it depends mostly on the behavior of the others. One way to deal with this problem is to endow the agents with the ability to adapt their strategies based on their interaction experience. This work views interaction as a repeated game and presents a general architecture for a model-based agent that learns models of the rival agents for exploitation in future encounters. First, we describe a method for inferring an optimal strategy against a given model of another agent. Second, we present an unsupervised algorithm that infers a model of the opponent's strategy from its interaction behavior in the past. We then present a method for incorporating exploration strategies into model-based learning. We report experimental results demonstrating the superiority of the model-based learning agent over…
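
    A minimal picture of the first step, exploiting a given opponent model: a depth-limited lookahead computes play that is optimal against the model over a short horizon. The opponent model here is a stand-in tit-for-tat predictor, and all names and payoff values are illustrative; the article's own model-inference algorithm is not reproduced.

```python
# Iterated prisoner's dilemma payoffs for "us" (illustrative values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Stand-in for a learned opponent model: a deterministic strategy
    mapping the interaction history to the opponent's next move."""
    return "C" if not history else history[-1][0]  # copies our last move

def best_response(model, history, depth):
    """Depth-limited lookahead: evaluate each of our moves by simulating
    how it steers the modelled opponent over the next `depth` rounds."""
    if depth == 0:
        return 0, None
    theirs = model(history)  # opponent moves from past history only
    best_value, best_move = float("-inf"), None
    for mine in ("C", "D"):
        future, _ = best_response(model, history + [(mine, theirs)], depth - 1)
        if PAYOFF[(mine, theirs)] + future > best_value:
            best_value, best_move = PAYOFF[(mine, theirs)] + future, mine
    return best_value, best_move

# Against a modelled tit-for-tat opponent, enough lookahead discovers
# that sustained cooperation beats short-sighted defection.
history = []
for _ in range(6):
    _, mine = best_response(tit_for_tat, history, depth=4)
    history.append((mine, tit_for_tat(history)))
print(history)  # six rounds of mutual cooperation
```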