A Methodology to Evolve Cooperation in Pursuit Domain using Genetic Network Programming

Abstract

The design of strategies that produce teamwork and cooperation among agents is a central research issue in the field of multi-agent systems (MAS). The complexity of cooperative strategy design can rise rapidly with an increasing number of agents and their behavioral sophistication. The field of cooperative multi-agent learning promises solutions to such problems by attempting to discover agent behaviors automatically and by suggesting new approaches through the application of machine learning techniques. Because it is difficult to specify an effective algorithm a priori for multiple interacting agents, and because artificially evolved agents are inherently adaptable, the use of evolutionary computation as both a machine learning technique and a design process has recently received much attention. In this thesis, we design a methodology that uses an evolutionary computation technique called Genetic Network Programming (GNP) to automatically evolve teamwork and cooperation among agents in the pursuit domain. Simulation results show that the proposed methodology is effective in evolving such teamwork and cooperation. Compared with Genetic Programming approaches, its performance is significantly better, its computation cost is lower, and its learning speed is faster. We also provide some analytical results for the proposed approach.
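For intuition, GNP individuals are typically directed graphs in which judgment nodes branch on sensed conditions and processing nodes execute actions. The following Python sketch illustrates that general structure for a pursuit-style setting; it is a minimal illustration under assumed node labels, sensor name (sense_prey_direction), and action set, not the exact design evaluated in the thesis.

```python
import random

# Illustrative GNP individual for a grid-based pursuit domain (assumed setup).
ACTIONS = ["move_north", "move_south", "move_east", "move_west", "stay"]
DIRECTIONS = ["north", "south", "east", "west"]

class Node:
    """A GNP node: a judgment node branches on a sensed condition;
    a processing node executes an action and follows a single link."""
    def __init__(self, kind, label, links):
        self.kind = kind      # "judgment" or "processing"
        self.label = label    # condition name or action name
        self.links = links    # indices of successor nodes

def random_individual(num_judgment=4, num_processing=5):
    """Build a random GNP graph: judgment nodes get one branch per
    sensor outcome; processing nodes get exactly one successor."""
    total = num_judgment + num_processing
    nodes = []
    for _ in range(num_judgment):
        links = [random.randrange(total) for _ in DIRECTIONS]
        nodes.append(Node("judgment", "sense_prey_direction", links))
    for _ in range(num_processing):
        nodes.append(Node("processing", random.choice(ACTIONS),
                          [random.randrange(total)]))
    return nodes

def run_step(nodes, current, sense_prey_direction, max_transitions=10):
    """Traverse the graph from the current node until a processing node
    fires an action (or the transition budget is exhausted)."""
    for _ in range(max_transitions):
        node = nodes[current]
        if node.kind == "judgment":
            outcome = sense_prey_direction()          # e.g. "north"
            current = node.links[DIRECTIONS.index(outcome)]
        else:
            return node.label, node.links[0]          # action, next start node
    return "stay", current

if __name__ == "__main__":
    individual = random_individual()
    sensor = lambda: random.choice(DIRECTIONS)        # stand-in for perception
    node_index = 0
    for t in range(5):
        action, node_index = run_step(individual, node_index, sensor)
        print(f"step {t}: {action}")
```

In an evolutionary run, a population of such graphs would be evaluated on pursuit episodes and varied by crossover and mutation of node links; the fixed number of nodes is one reason GNP's genome size, and hence its search cost, does not grow the way Genetic Programming trees can.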