6 research outputs found

    Learning in Multi-Agent Information Systems - A Survey from IS Perspective

    Get PDF
    Multiagent systems (MAS), long studied in artificial intelligence, have recently become popular in mainstream IS research. This resurgence in MAS research can be attributed to two phenomena: the spread of concurrent and distributed computing with the advent of the web; and a deeper integration of computing into organizations and the lives of people, which has led to increasing collaborations among large collections of interacting people and large groups of interacting machines. However, it is next to impossible to correctly and completely specify these systems a priori, especially in complex environments. The only feasible way of coping with this problem is to endow the agents with learning, i.e., an ability to improve their individual and/or system performance with time. Learning in MAS has therefore become one of the important areas of research within MAS. In this paper we present a survey of important contributions made by IS researchers to the field of learning in MAS, and present directions for future research in this area.

    Centralized Versus Decentralized Team Coordination Using Dynamic Scripting

    Get PDF
    Computer generated forces (CGFs) must display realistic behavior for tactical training simulations to yield an effective training experience. Traditionally, the behavior of CGFs is scripted. However, there are three drawbacks, viz. (1) scripting limits the adaptive behavior of CGFs, (2) creating scripts is difficult, and (3) it requires scarce domain expertise. A promising machine learning technique is the dynamic scripting of CGF behavior. In simulating air combat scenarios, team behavior is important, both with and without communication. While dynamic scripting has been reported to be effective in creating behavior for single fighters, it has not often been used for team coordination. The dynamic scripting technique is sufficiently flexible to be used for different team coordination methods. In this paper, we report the first results on centralized coordination of dynamically scripted air combat teams, and compare these results to a decentralized approach from earlier work. We find that using the centralized approach leads to higher performance and more efficient learning, although the creativity of the solutions seems bounded by the reduced complexity.
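    Dynamic scripting, as described in the literature the abstract draws on, selects a small script of behavior rules from a weighted rulebase and adjusts the weights after each encounter. The following is a minimal sketch of that idea, not the paper's implementation; the class name, the weight bounds, and the zero-sum weight redistribution are illustrative assumptions.

    ```python
    import random

    class DynamicScript:
        """Sketch of dynamic scripting: pick a script of rules from a
        rulebase by weight-proportional selection, then reward or punish
        the selected rules based on the encounter's outcome."""

        def __init__(self, rulebase, script_size, min_w=1.0, max_w=50.0):
            self.weights = {rule: 10.0 for rule in rulebase}
            self.script_size = script_size
            self.min_w, self.max_w = min_w, max_w

        def generate_script(self):
            # Roulette-wheel selection of distinct rules.
            pool = dict(self.weights)
            script = []
            for _ in range(self.script_size):
                r = random.uniform(0, sum(pool.values()))
                acc = 0.0
                for rule, w in pool.items():
                    acc += w
                    if acc >= r:
                        script.append(rule)
                        del pool[rule]
                        break
            return script

        def update(self, script, fitness):
            # fitness in [-1, 1]: adjust weights of the rules that were in
            # the script, and redistribute the opposite change over the
            # remaining rules so the total weight stays roughly constant.
            delta = 5.0 * fitness
            others = [r for r in self.weights if r not in script]
            for rule in script:
                self.weights[rule] = min(self.max_w,
                                         max(self.min_w, self.weights[rule] + delta))
            if others:
                comp = -delta * len(script) / len(others)
                for rule in others:
                    self.weights[rule] = min(self.max_w,
                                             max(self.min_w, self.weights[rule] + comp))
    ```

    In a centralized team variant, one rulebase could generate coordinated scripts for the whole team, whereas a decentralized variant would keep one such learner per fighter.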

    A Comprehensive Survey of Multiagent Reinforcement Learning

    Full text link

    Distributed task allocation optimisation techniques in multi-agent systems

    Get PDF
    A multi-agent system consists of a number of agents, which may include software agents, robots, or even humans, in some application environment. Multi-robot systems are increasingly being employed to complete jobs and missions in various fields including search and rescue, space and underwater exploration, support in healthcare facilities, surveillance and target tracking, product manufacturing, pick-up and delivery, and logistics. Multi-agent task allocation is a complex problem compounded by various constraints such as deadlines, agent capabilities, and communication delays. In high-stake real-time environments, such as rescue missions, it is difficult to predict in advance what the requirements of the mission will be, what resources will be available, and how to optimally employ such resources. Yet, a fast response and speedy execution are critical to the outcome. This thesis proposes distributed optimisation techniques to tackle the following questions: how to maximise the number of assigned tasks in time-restricted environments with limited resources; how to reach consensus on an execution plan across many agents, within a reasonable time-frame; and how to maintain robustness and optimality when factors change, e.g. the number of agents changes. Three novel approaches are proposed to address each of these questions. A novel algorithm is proposed to reassign tasks and free resources that allow the completion of more tasks. The introduction of a rank-based system for conflict resolution is shown to reduce the time for the agents to reach consensus while maintaining an equal number of allocations. Finally, this thesis proposes an adaptive data-driven algorithm to learn optimal strategies from experience in different scenarios, and to enable individual agents to adapt their strategy during execution. A simulated rescue scenario is used to demonstrate the performance of the proposed methods compared with existing baseline methods.
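    The rank-based conflict resolution the abstract mentions can be illustrated with a minimal sketch: when several agents bid on the same task, the agent with the higher rank wins immediately, so consensus does not require repeated re-bidding rounds. This is an illustrative reading of the idea, not the thesis's algorithm; the function name, the bid tuple layout, and the lower-number-wins convention are assumptions.

    ```python
    def resolve_conflicts(bids):
        """Rank-based conflict resolution sketch.

        bids: iterable of (agent_id, rank, task_id) tuples, where a lower
        rank number denotes a higher-ranked agent.
        Returns a {task_id: agent_id} assignment in which each contested
        task goes to the highest-ranked bidder.
        """
        winners = {}
        for agent_id, rank, task_id in bids:
            current = winners.get(task_id)
            # Keep the bid with the best (lowest) rank seen so far.
            if current is None or rank < current[0]:
                winners[task_id] = (rank, agent_id)
        return {task: agent for task, (rank, agent) in winners.items()}
    ```

    Because every agent applying the same deterministic rule reaches the same assignment, conflicts are settled in a single pass over the bids rather than through iterative negotiation.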

    Time constraint agents' coordination and learning in cooperative multi-agent system

    Get PDF
    Ph.D. (Doctor of Philosophy)