
    Towards Believable Resource Gathering Behaviours in Real-time Strategy Games with a Memetic Ant Colony System

    Abstract: In this paper, the resource gathering problem in real-time strategy (RTS) games is modeled as a path-finding problem in which the game agents responsible for gathering resources, known as harvesters, are equipped only with knowledge of their immediate surroundings and must learn the dynamics of the navigation graph they reside on by sharing information and cooperating with other agents in the game environment. The paper proposes a conceptual model of a memetic ant colony system (MACS) for believable resource gathering in RTS games. In the proposed MACS, the harvesters' captured path-finding and resource-gathering knowledge is extracted and represented as memes, which are internally encoded as state transition rules (memotype) and externally expressed as ant pheromone on graph edges (sociotype). Through the interplay between memetic evolution and the ant colony, harvesters, as memetic automatons spawned from an ant colony, acquire an increasing level of capability in exploring complex, dynamic game environments and gathering resources in an adaptive manner, producing consistent and impressive resource gathering behaviors.
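    The state transition rules the abstract mentions are characteristic of Ant Colony System path-finding. As a minimal sketch (not the authors' MACS; the graph layout, heuristic, and parameter values here are illustrative assumptions), an ACS harvester could pick its next node with the pseudo-random proportional rule, exploiting the strongest pheromone trail with probability `q0` and otherwise exploring probabilistically:

    ```python
    import random

    def acs_next_node(graph, pheromone, current, visited, q0=0.9, beta=2.0):
        """Pseudo-random proportional rule from Ant Colony System.

        graph: {node: {neighbor: edge_cost}}; pheromone: {(u, v): tau}.
        With probability q0, exploit the best-scoring edge; otherwise,
        explore in proportion to pheromone * heuristic**beta.
        Hypothetical names/structure, for illustration only.
        """
        candidates = [n for n in graph[current] if n not in visited]
        if not candidates:
            return None

        def score(n):
            tau = pheromone.get((current, n), 1.0)   # trail strength
            eta = 1.0 / graph[current][n]            # heuristic: inverse cost
            return tau * eta ** beta

        if random.random() < q0:
            return max(candidates, key=score)        # exploitation
        total = sum(score(n) for n in candidates)    # biased exploration
        r, acc = random.uniform(0, total), 0.0
        for n in candidates:
            acc += score(n)
            if acc >= r:
                return n
        return candidates[-1]
    ```

    In the paper's framing, the pheromone table would be the externally expressed sociotype shared among harvesters, while each agent's transition rule parameters form its internal memotype.
    
    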

    Two-Phase Multi-Swarm PSO and the Dynamic Vehicle Routing Problem

    Abstract: In this paper, a new two-phase multi-swarm Particle Swarm Optimization (PSO) approach to solving the Dynamic Vehicle Routing Problem is proposed and compared with our previous single-swarm approach and with a PSO-based method proposed by other authors. Furthermore, several evaluation functions and problem encodings are proposed and experimentally verified on a set of standard benchmarks. For a cut-off time set in the middle of the day, our method found new best-in-literature results for 17 of the 21 tested problem instances.
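    The core PSO update underlying such an approach is standard; the sketch below minimizes a toy objective with a single swarm, which the multi-swarm variant would run in parallel per phase. All names, parameter values, and the sphere objective are illustrative assumptions, not the paper's DVRP encoding:

    ```python
    import random

    def pso_swarm(f, dim, n=20, iters=50, bounds=(-5.0, 5.0), seed=0):
        """Minimal single-swarm PSO minimizing f over a real-valued box.

        A sketch only: the paper applies multi-swarm PSO to routing
        encodings, not to continuous test functions.
        """
        rng = random.Random(seed)
        lo, hi = bounds
        pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]                  # personal bests
        pbest_val = [f(p) for p in pos]
        g = min(range(n), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm best
        w, c1, c2 = 0.7, 1.4, 1.4                     # inertia, cognitive, social
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                v = f(pos[i])
                if v < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], v
                    if v < gbest_val:
                        gbest, gbest_val = pos[i][:], v
        return gbest, gbest_val

    sphere = lambda x: sum(xi * xi for xi in x)  # toy objective
    ```

    A two-phase, multi-swarm extension would run several such swarms independently in the first phase and, at the phase boundary, reseed or merge them around the best solutions found so far.
    
    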

    Simulated Experience Evaluation in Developing Multi-agent Coordination Graphs

    Cognitive science has proposed that one way people learn is through self-critique, generating 'what-if' strategies for events (simulation). It is theorized that people use this method both to learn something new and to learn more quickly. This research adds this concept to a graph-based genetic program. Memories are recorded during fitness assessment and retained in a global memory bank based on the magnitude of change in the agent's energy and the age of the memory. Between generations, candidate agents perform in simulations of the stored memories. Candidates that perform similarly to good memories and differently from bad memories are more likely to be included in the next generation. The simulation-informed genetic program is evaluated in two domains: sequence matching and Robocode. Results indicate the algorithm does not perform equally in all environments. In sequence matching, experiential evaluation fails to perform better than the control. In Robocode, however, the experiential evaluation method initially outperforms the control, then stagnates and often regresses. This likely indicates that the algorithm is over-learning a single solution rather than adapting to the environment, and that learning through simulation includes a satisficing component.
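    The between-generation selection step described above, scoring candidates by how they replay stored memories, can be sketched as follows. The memory encoding, policy interface, and scoring weights are assumptions for illustration, not the paper's implementation:

    ```python
    def experiential_score(policy, memories):
        """Score a candidate policy against a memory bank.

        memories: list of (state, action_taken, energy_delta) tuples.
        A candidate earns +1 for reproducing the action of a 'good' memory
        (positive energy change) and -1 for reproducing a 'bad' one.
        Hypothetical encoding; the original stores richer simulation traces.
        """
        score = 0
        for state, action, energy_delta in memories:
            if policy(state) == action:
                score += 1 if energy_delta > 0 else -1
        return score
    ```

    Candidates with higher scores would then receive a selection bonus when the next generation of the genetic program is formed.
    
    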