
    Rolling Horizon Coevolutionary planning for two-player video games

    This paper describes a new algorithm for decision making in two-player real-time video games. As with Monte Carlo Tree Search, the algorithm can be used without heuristics and has been developed for use in general video game AI. The approach extends recent work on rolling horizon evolutionary planning, which has been shown to work well for single-player games, to two-player (or in principle many-player) games. To select an action, the algorithm co-evolves two (or in the general case N) populations, one for each player, where each individual is a sequence of actions for the respective player. The fitness of each individual is evaluated by playing it against a selection of action sequences from the opposing population. When choosing an action to take in the game, the first action is taken from the fittest member of the population for that player. The new algorithm is compared with a number of general video game AI algorithms on a two-player space battle game, with promising results.
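
    The decision loop described above can be summarised in a short sketch. This is a minimal, illustrative reading of rolling horizon coevolution, not the paper's implementation: the forward_model interface (copy/step/score), the zero-sum fitness used for the second player, and all budget parameters are assumptions made for the example.

```python
import random

HORIZON = 10        # length of each action sequence
POP_SIZE = 20       # individuals per player
EVAL_OPPONENTS = 5  # opposing sequences sampled per fitness evaluation
GENERATIONS = 30    # evolution budget per decision

def random_individual(n_actions):
    return [random.randrange(n_actions) for _ in range(HORIZON)]

def mutate(seq, n_actions):
    child = list(seq)
    child[random.randrange(HORIZON)] = random.randrange(n_actions)
    return child

def rollout(forward_model, seq_p1, seq_p2):
    """Play both sequences forward and return player 1's score (hypothetical interface)."""
    state = forward_model.copy()
    for a1, a2 in zip(seq_p1, seq_p2):
        state.step(a1, a2)
    return state.score(player=0)

def coevolve_action(forward_model, n_actions):
    pops = [[random_individual(n_actions) for _ in range(POP_SIZE)] for _ in range(2)]
    for _ in range(GENERATIONS):
        for p in (0, 1):
            opponents = random.sample(pops[1 - p], EVAL_OPPONENTS)
            def fitness(ind):
                # Average outcome against sampled opponents; player 2 assumed
                # to maximise the negation of player 1's score (zero-sum).
                scores = [rollout(forward_model, ind, opp) if p == 0
                          else -rollout(forward_model, opp, ind)
                          for opp in opponents]
                return sum(scores) / len(scores)
            # keep the fitter half and refill with mutated copies
            pops[p].sort(key=fitness, reverse=True)
            survivors = pops[p][:POP_SIZE // 2]
            pops[p] = survivors + [mutate(random.choice(survivors), n_actions)
                                   for _ in range(POP_SIZE - len(survivors))]
    # first action of the fittest member of player 1's population
    return pops[0][0][0]
```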

    Evolutionary Design of Game Vehicles and Their Controllers

    Procedural content generation (PCG) is a growing field of interest in the domain of computational intelligence as it relates to games. There are ever-increasing examples and applications of PCG that have been studied in academic contexts. Player expectations of the amount of content in games grow as computers and video game consoles become capable of handling more content, and automation of content creation becomes more desirable. While many means of procedural content generation using some form of search algorithm have been tried and tested, we examine evolutionary algorithms as a means to generate content in a setting where they have not frequently been used before: the generation of vehicles, specifically spaceships, within two-dimensional game simulations. These simulations are based on a simple Newtonian physics system with varying physical rules, representing games such as Lunar Lander or Asteroids. We evolve linear vectors of real numbers that act as vehicle genotypes by encoding the placement of components relative to a vehicle point mass, so that the placement of the components defines the vehicle's form. We use simple 1-ply lookahead controllers, simple rule-based controllers, and MCTS-based controllers to test, and therefore indirectly guide, the evolution of vehicle designs. We demonstrate that evolutionary algorithms can be used to generate effective vehicle designs, suitable for use by the same controller as used for testing, for simple tasks without much issue. We also show that some factors of a problem environment, such as velocity loss factors and the topology of the game world, affect the demands placed on vehicle design evolution more than others. It is also evident that using different controllers to test vehicles causes different designs to emerge based on the strengths of those controllers.
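
    As a rough sketch of how such a pipeline fits together, the fragment below decodes a flat real-valued genotype into component placements around a point mass and evolves it with a simple (mu + lambda)-style loop. The three-values-per-component encoding, the simulate/controller interface, and all parameter settings are assumptions for illustration; they are not taken from the paper.

```python
import random

GENES_PER_COMPONENT = 3   # assumed encoding: angle, distance, component type
N_COMPONENTS = 4

def decode(genotype):
    """Turn a flat real vector into (angle, distance, component_type) triples."""
    comps = []
    for i in range(0, len(genotype), GENES_PER_COMPONENT):
        angle, dist, ctype = genotype[i:i + GENES_PER_COMPONENT]
        comps.append((angle % 360.0, abs(dist), round(ctype) % 3))
    return comps

def evaluate(genotype, simulate, controller):
    """Fitness = task score achieved by the controller flying the decoded vehicle."""
    return simulate(decode(genotype), controller)   # `simulate` is a stand-in for the physics testbed

def evolve(simulate, controller, pop_size=20, generations=50, sigma=0.1):
    dim = GENES_PER_COMPONENT * N_COMPONENTS
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: evaluate(g, simulate, controller), reverse=True)
        parents = pop[:pop_size // 2]
        # refill the population with Gaussian-mutated copies of surviving parents
        children = [[x + random.gauss(0, sigma) for x in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda g: evaluate(g, simulate, controller))
```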

    Automatic Game Parameter Tuning using General Video Game Agents

    Automatic Game Design is a subfield of Game Artificial Intelligence that studies the use of AI algorithms to assist in game design tasks. This dissertation presents research in this field, focusing on applying an evolutionary algorithm to video game parameterization. The design goal we are interested in is player experience. The N-Tuple Bandit Evolutionary Algorithm (NTBEA) is an evolutionary algorithm that was recently proposed and successfully applied to game parameterization in a simple domain, which forms the first experiment in this project. To further investigate its ability to evolve game parameters, we applied NTBEA to evolve parameter sets for three General Video Game AI (GVGAI) games, because GVGAI supplies a variety of video games of different types and the framework has already been prepared for parameterization. Nine positive, increasing functions were picked as target functions representing the expected player score trends. Our assumption was that an evolved game should provide an environment in which players gain score following the same trend as one of these functions. The experimental results confirm this for some functions and show that NTBEA is very much capable of evolving GVGAI games for this task.
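
    A sketch of the kind of fitness such a parameter search might optimise is given below: play the parameterised game with a general agent, record the score at each tick, and reward parameter sets whose normalised score trajectory tracks a chosen positive, increasing target function. The play_game call, its return format, and the quadratic target are assumptions for illustration, not part of the GVGAI or NTBEA APIs.

```python
import math

def target_quadratic(t):
    # one possible positive, increasing target curve on (0, 1]
    return t ** 2

def trajectory_fitness(params, play_game, target=target_quadratic):
    """Higher fitness means the agent's score trend follows the target more closely."""
    scores = play_game(params)             # hypothetical: score recorded at each game tick
    if not scores or scores[-1] == 0:
        return -math.inf                   # no usable trajectory for this parameter set
    n = len(scores)
    error = 0.0
    for i, s in enumerate(scores):
        t = (i + 1) / n                    # normalised time in (0, 1]
        observed = s / scores[-1]          # normalised score in [0, 1]
        error += (observed - target(t)) ** 2
    return -error / n                      # negated mean squared deviation from the target
```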

    Searching by learning: Exploring artificial general intelligence on small board games by deep reinforcement learning

    In deep reinforcement learning, searching and learning techniques are two important components. They can be used independently and in combination to deal with different problems in AI. These results have inspired research into artificial general intelligence (AGI). We study table-based classic Q-learning on the General Game Playing (GGP) system, showing that classic Q-learning works on GGP, although convergence is slow and it is computationally expensive to learn complex games. This dissertation uses an AlphaZero-like self-play framework to explore AGI on small games. By tuning different hyper-parameters, the role, effects, and contributions of searching and learning are studied. A further experiment shows that search techniques can act as experts that generate better training examples to speed up the start phase of training. In order to extend the AlphaZero-like self-play approach to complex single-player games, the Morpion Solitaire game is implemented in combination with the Ranked Reward method. Our first AlphaZero-based approach achieves a result close to the human best record. Overall, in this thesis, both searching and learning techniques are studied (by themselves and in combination) in GGP and AlphaZero-like self-play systems. We do so for the purpose of making steps towards artificial general intelligence, towards systems that exhibit intelligent behavior in more than one domain.
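
    For the table-based part of the study, the classic one-step Q-learning update is the key ingredient; a minimal sketch is shown below. The env interface (reset/step), the action list, and the learning-rate, discount, and exploration values are illustrative assumptions, not the thesis's GGP setup.

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=10_000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning with epsilon-greedy exploration over a hypothetical episodic env."""
    Q = defaultdict(float)                      # Q[(state, action)] -> value, default 0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:       # explore
                action = random.choice(actions)
            else:                               # exploit current estimates
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in actions)
            # classic one-step Q-learning update
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```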

    Evolutionary Computation 2020

    Intelligent optimization builds on the mechanisms of computational intelligence to refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency, and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, whale optimization algorithm, differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severe nonlinear problem in one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvement and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.
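
    As a concrete illustration of one of the algorithm families named above, the following is a compact sketch of classic DE/rand/1/bin differential evolution on a toy sphere objective; the objective and all parameter values are illustrative only and are not taken from the book.

```python
import random

def sphere(x):
    # toy continuous objective: minimise the sum of squares
    return sum(v * v for v in x)

def differential_evolution(obj=sphere, dim=10, pop_size=30, F=0.5, CR=0.9, gens=200):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i, target in enumerate(pop):
            # mutation: v = r1 + F * (r2 - r3) with three distinct other members
            r1, r2, r3 = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = [a + F * (b - c) for a, b, c in zip(r1, r2, r3)]
            # binomial crossover, guaranteeing at least one mutant gene
            j_rand = random.randrange(dim)
            trial = [mutant[j] if (random.random() < CR or j == j_rand) else target[j]
                     for j in range(dim)]
            if obj(trial) <= obj(target):       # greedy selection
                pop[i] = trial
    return min(pop, key=obj)
```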

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway's Life according to a general MR streaming pattern. We chose Life because it is simple enough as a testbed for MR's applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms' performance on Amazon's Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
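
    The MR streaming pattern referred to above can be sketched as a pair of plain stdin/stdout scripts in the Hadoop Streaming style: the mapper sends each grid row to the keys (rows) whose next state depends on it, and the reducer assembles the three-row neighbourhood and applies the Life rule. The per-row record format is an assumption here, and real strip partitioning would group several rows per key rather than one; this is not the paper's implementation.

```python
import sys

def mapper(lines):
    """Send each row ("row_index<TAB>cell_string") to the three rows that need it."""
    for line in lines:
        row, cells = line.rstrip("\n").split("\t")
        r = int(row)
        for target in (r - 1, r, r + 1):
            yield f"{target}\t{r}\t{cells}"

def reducer(lines):
    """Group the (up to) three neighbouring rows per key and step the middle row."""
    groups = {}
    for line in lines:
        target, row, cells = line.rstrip("\n").split("\t")
        groups.setdefault(int(target), {})[int(row)] = cells
    for target, rows in sorted(groups.items()):
        if target not in rows:
            continue                       # boundary keys with no row of their own
        width = len(rows[target])
        blank = "0" * width
        above = rows.get(target - 1, blank)
        here = rows[target]
        below = rows.get(target + 1, blank)
        new = []
        for c in range(width):
            # count live neighbours in the 3x3 block, excluding the cell itself
            live = sum(int(r[j]) for r in (above, here, below)
                       for j in (c - 1, c, c + 1) if 0 <= j < width) - int(here[c])
            new.append("1" if live == 3 or (live == 2 and here[c] == "1") else "0")
        yield f"{target}\t{''.join(new)}"

if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    step = mapper if stage == "map" else reducer
    for out in step(sys.stdin):
        print(out)
```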