
    Rolling horizon methods for games with continuous states and actions

    It is often the case that games have continuous dynamics and allow for continuous actions, possibly with some added noise. For larger games with complicated dynamics, having agents learn offline behaviours in such a setting is a daunting task. On the other hand, provided a generative model is available, one might try to spread the cost of search/learning in a rolling horizon fashion (e.g. as in Monte Carlo Tree Search, MCTS). In this paper we compare T-HOLOP (Truncated Hierarchical Open Loop Planning), an open-loop planning algorithm at least partially inspired by MCTS, with a version of evolutionary planning that uses CMA-ES (which we call EVO-P) on two planning benchmark problems (Inverted Pendulum and the Double Integrator) and on Lunar Lander, a classic arcade game. We show that EVO-P outperforms T-HOLOP in the classic benchmarks, while T-HOLOP is unable to find a solution using the same heuristics. We conclude that off-the-shelf evolutionary algorithms can be used successfully in a rolling horizon setting, and that different types of heuristics may be needed under different optimisation algorithms.
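
    As a rough illustration of the rolling horizon idea described in this abstract, the sketch below optimises an open-loop action sequence from the current state and then executes only the first action before replanning. The paper's EVO-P uses CMA-ES; here a plain Gaussian evolution strategy stands in for it, and the generative-model interface (a hypothetical model.step(state, action) returning state, reward and a done flag), the horizon, population size and other parameters are illustrative assumptions rather than the paper's settings.

    import numpy as np

    def rollout_return(model, state, plan, gamma=1.0):
        # Score an open-loop action sequence by rolling it out in the generative model.
        total, discount = 0.0, 1.0
        for action in plan:
            state, reward, done = model.step(state, action)  # assumed model interface
            total += discount * reward
            discount *= gamma
            if done:
                break
        return total

    def evo_plan(model, state, horizon=20, action_dim=1, pop=32, elites=8,
                 generations=30, sigma=0.5):
        # Optimise the plan with a simple Gaussian evolution strategy
        # (a simplified stand-in for the CMA-ES used by EVO-P).
        mean = np.zeros((horizon, action_dim))
        for _ in range(generations):
            samples = mean + sigma * np.random.randn(pop, horizon, action_dim)
            scores = np.array([rollout_return(model, state, s) for s in samples])
            elite = samples[np.argsort(scores)[-elites:]]  # keep the best plans
            mean = elite.mean(axis=0)                      # recentre the search
        return mean

    def rolling_horizon_control(model, env, state, steps=200):
        # Replan at every step and execute only the first action of the plan.
        total = 0.0
        for _ in range(steps):
            plan = evo_plan(model, state)
            state, reward, done = env.step(state, plan[0])  # assumed environment interface
            total += reward
            if done:
                break
        return total

    Executing only the first action and then replanning spreads the optimisation cost across the episode, which is the rolling horizon property shared by both planners compared in the paper.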

    Evolutionary Design of Game Vehicles and Their Controllers

    Procedural content generation (PCG) is a growing field of interest in the domain of computational intelligence as it relates to games, with an ever-increasing number of examples and applications studied in academic contexts. Player expectations of the amount of content in games rise as computers and video game consoles become capable of handling more content, making the automation of content creation increasingly desirable. While many approaches to procedural content generation based on some form of search algorithm have been tried and tested, we examine evolutionary algorithms as a means to generate content in an area where they have not frequently been applied before. We examine the generation of vehicles, specifically spaceships, within two-dimensional game simulations. These simulations are based on a simple Newtonian physics system with differing physical rules, representing games such as Lunar Lander or Asteroids. We evolve linear vectors of real numbers that act as vehicle genotypes, encoding the placement of components relative to a vehicle point mass, so that a vehicle's form is defined by where each component is placed. We use simple 1-ply lookahead controllers, simple rule-based controllers, and MCTS-based controllers to test, and therefore indirectly guide, the evolution of vehicle designs. We demonstrate that evolutionary algorithms can generate effective vehicle designs for simple tasks without much issue, designs suitable for use by the same controller that was used for testing. We also show that some factors of the problem environment, such as velocity loss factors and the topology of the game world, affect the demands and conditions of vehicle design evolution more than others. It is also evident that using different controllers to test vehicles causes different designs to emerge, based on the strengths of those controllers.
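
    To make the genotype encoding described in this abstract concrete, the sketch below shows one way a flat vector of real numbers could be decoded into component placements around a vehicle point mass and evolved against a test controller. The component count, the (x, y, angle) layout per component, and the simulate_task and controller objects are hypothetical placeholders for illustration, not the paper's actual representation or simulation.

    import numpy as np

    N_COMPONENTS = 4         # e.g. thrusters attached to the hull (illustrative)
    GENES_PER_COMPONENT = 3  # assumed layout: (x offset, y offset, orientation)

    def decode(genotype):
        # Map the flat real-valued genotype to per-component placements
        # relative to the vehicle's point mass.
        return genotype.reshape(N_COMPONENTS, GENES_PER_COMPONENT)

    def fitness(genotype, simulate_task, controller):
        # Build the vehicle and score it by letting the test controller
        # fly it in the physics simulation (assumed callable interface).
        vehicle = decode(genotype)
        return simulate_task(vehicle, controller)  # higher = better task performance

    def evolve_vehicles(simulate_task, controller, pop_size=50, generations=100,
                        mutation_sigma=0.1):
        # Simple mutation-only evolutionary loop over vehicle genotypes.
        genome_len = N_COMPONENTS * GENES_PER_COMPONENT
        population = np.random.uniform(-1.0, 1.0, (pop_size, genome_len))
        for _ in range(generations):
            scores = np.array([fitness(g, simulate_task, controller) for g in population])
            parents = population[np.argsort(scores)[-pop_size // 2:]]  # keep best half
            children = parents + mutation_sigma * np.random.randn(*parents.shape)
            population = np.concatenate([parents, children])
        scores = np.array([fitness(g, simulate_task, controller) for g in population])
        return decode(population[np.argmax(scores)])

    Because fitness is measured by whichever controller is plugged into simulate_task, evolution under this scheme would be pulled towards designs that suit that controller's particular strengths, which mirrors the effect reported in the abstract.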