
    Ms Pac-Man versus Ghost Team CEC 2011 competition

    Games provide an ideal test bed for computational intelligence, and significant progress has been made in recent years, most notably in games such as Go, where the level of play is now competitive with expert human play on smaller boards. Recently, a significantly more complex class of games has received increasing attention: real-time video games. These games pose many new challenges, including strict time constraints, simultaneous moves and open-endedness. Unlike in traditional board games, computational play is generally unable to compete with human players. One driving force in improving the overall performance of artificial intelligence players is game competitions, where practitioners may evaluate and compare their methods against those submitted by others, and possibly against human players as well. In this paper we introduce a new competition based on the popular arcade video game Ms Pac-Man: Ms Pac-Man versus Ghost Team. The competition, to be held for the first time at the Congress on Evolutionary Computation 2011, allows participants to develop controllers for either the Ms Pac-Man agent or the Ghost Team; unlike previous Ms Pac-Man competitions, which relied on screen capture, controllers now interface directly with the game engine. We introduce the competition, review previous work, and discuss several aspects of setting up the competition itself. © 2011 IEEE
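    Since the shift from screen capture to a direct engine interface is the key practical change for entrants, here is a minimal Python sketch of what such a controller interface can look like. The actual competition kit is Java-based; the names below (GameState, get_move, NearestPillController) and the toy distance/adjacency functions are illustrative assumptions, not the competition's API.

```python
# Hypothetical sketch of a direct game-engine controller interface.
# All class and method names are assumptions for illustration only.
from dataclasses import dataclass
from typing import List

MOVES = ("UP", "DOWN", "LEFT", "RIGHT")

@dataclass
class GameState:
    """Minimal stand-in for the engine's game state."""
    pacman_node: int
    pill_nodes: List[int]

    def shortest_distance(self, a: int, b: int) -> int:
        # Placeholder metric; a real engine would expose maze path distances.
        return abs(a - b)

    def neighbour(self, node: int, move: str) -> int:
        # Placeholder adjacency; a real engine knows the maze graph.
        offsets = {"UP": -10, "DOWN": 10, "LEFT": -1, "RIGHT": 1}
        return node + offsets[move]

class PacManController:
    def get_move(self, state: GameState, time_due_ms: int) -> str:
        """Return a move before the per-tick deadline (real-time constraint)."""
        raise NotImplementedError

class NearestPillController(PacManController):
    """Toy controller: step toward the closest remaining pill."""
    def get_move(self, state: GameState, time_due_ms: int) -> str:
        target = min(state.pill_nodes,
                     key=lambda p: state.shortest_distance(state.pacman_node, p))
        return min(MOVES,
                   key=lambda m: state.shortest_distance(
                       state.neighbour(state.pacman_node, m), target))
```

    Querying the game state directly like this removes the image-processing layer that screen-capture entries previously had to build, leaving only the decision-making problem under the per-tick time budget.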

    Fast Approximate Max-n Monte Carlo Tree Search for Ms Pac-Man

    We present an application of Monte Carlo tree search (MCTS) to the game of Ms Pac-Man. Contrary to most applications of MCTS to date, Ms Pac-Man requires almost real-time decision making and does not have a natural end state. We approached the problem by performing Monte Carlo tree searches on a five-player max^n tree representation of the game with limited tree-search depth. We performed a number of experiments using both the MCTS game agents (for Ms Pac-Man and the ghosts) and agents used in previous work (for the ghosts). Performance-wise, our approach achieves excellent scores, outperforming previous non-MCTS approaches to the game by up to two orders of magnitude. © 2011 IEEE
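    As a rough illustration of the technique named here, the sketch below shows depth-limited MCTS over a max^n tree, i.e. each node backs up a vector of per-player rewards and selects from the perspective of the player to move. The game interface (player_to_move, legal_actions, apply, evaluate), the constants, and the rollout policy are assumptions for illustration; they do not reproduce the paper's game model or tree-policy details.

```python
# Minimal depth-limited max^n MCTS sketch (illustrative only).
import math
import random

N_PLAYERS = 5          # Ms Pac-Man plus four ghosts (assumption for this sketch)
MAX_DEPTH = 20         # limited depth, since the game has no natural end state
UCT_C = 1.4

class Node:
    def __init__(self, state, player, parent=None):
        self.state, self.player, self.parent = state, player, parent
        self.children = {}                  # action -> Node
        self.visits = 0
        self.totals = [0.0] * N_PLAYERS     # summed reward vector (max^n backup)

    def uct_child(self):
        # Selection is done from the perspective of the player to move here.
        def score(child):
            if child.visits == 0:
                return float("inf")
            exploit = child.totals[self.player] / child.visits
            explore = UCT_C * math.sqrt(math.log(self.visits) / child.visits)
            return exploit + explore
        return max(self.children.values(), key=score)

def mcts(root_state, game, iterations=1000):
    root = Node(root_state, game.player_to_move(root_state))
    for _ in range(iterations):
        node, depth = root, 0
        # 1. Selection
        while node.children and depth < MAX_DEPTH:
            node = node.uct_child()
            depth += 1
        # 2. Expansion
        if depth < MAX_DEPTH:
            for a in game.legal_actions(node.state):
                s2 = game.apply(node.state, a)
                node.children[a] = Node(s2, game.player_to_move(s2), node)
            if node.children:
                node = random.choice(list(node.children.values()))
                depth += 1
        # 3. Depth-limited random rollout, scored by a heuristic reward vector
        state = node.state
        while depth < MAX_DEPTH:
            actions = game.legal_actions(state)
            if not actions:
                break
            state = game.apply(state, random.choice(actions))
            depth += 1
        rewards = game.evaluate(state)      # one heuristic value per player
        # 4. Backpropagate the full reward vector (max^n style)
        while node is not None:
            node.visits += 1
            for i in range(N_PLAYERS):
                node.totals[i] += rewards[i]
            node = node.parent
    # Recommend the most-visited root action for the root player
    return max(root.children, key=lambda a: root.children[a].visits)
```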

    Single- versus Multiobjective Optimization for Evolution of Neural Controllers in Ms. Pac-Man

    The objective of this study is the automatic generation of game artificial intelligence (AI) controllers for the Ms. Pac-Man agent using an artificial neural network (ANN) and multiobjective artificial evolution. The Pareto Archived Evolution Strategy (PAES) is used to generate a Pareto-optimal set of ANNs that trade off the conflicting objectives of maximizing Ms. Pac-Man scores (screen-capture mode) and minimizing neural network complexity. The proposed algorithm is called Pareto Archived Evolution Strategy Neural Network, or PAESNet. Three architectures of PAESNet are investigated: PAESNet with a fixed number of hidden neurons (PAESNet_F), PAESNet with a varied number of hidden neurons (PAESNet_V), and PAESNet with multiobjective techniques (PAESNet_M). Single- and multiobjective optimization are compared in both the training and testing processes. In general, PAESNet_F yielded better results in the training phase, but PAESNet_M reduces runtime and ANN complexity by minimizing the number of hidden neurons in the hidden layer, and it provides better generalization for controlling the game agent in a nondeterministic and dynamic environment.
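    To make the optimization framework concrete, the sketch below shows a (1+1)-PAES-style loop over the two stated objectives, maximizing game score and minimizing hidden-neuron count. The genome layout, evaluate_score and mutate are assumed placeholders for the paper's game rollout and ANN mutation operator, and the acceptance rule is simplified (full PAES also uses crowding-based tie-breaking when neither parent nor offspring dominates).

```python
# Simplified (1+1)-PAES-style sketch for two objectives: maximise score,
# minimise hidden neurons. Genome encoding, evaluation and mutation are
# assumed placeholders, not the paper's operators.

def dominates(a, b):
    """a dominates b: at least as good on both objectives, strictly better on one.
    Objectives are stored as (score, -hidden_neurons), so both are maximised."""
    return (all(x >= y for x, y in zip(a["obj"], b["obj"])) and
            any(x > y for x, y in zip(a["obj"], b["obj"])))

def paes(initial_genome, evaluate_score, mutate, generations=500, archive_size=50):
    """evaluate_score(genome) -> game score (assumed controller rollout);
       mutate(genome) -> offspring genome (assumed ANN mutation operator);
       genomes are assumed to carry a 'hidden_neurons' count."""
    def evaluate(genome):
        return {"genome": genome,
                "obj": (evaluate_score(genome), -genome["hidden_neurons"])}

    current = evaluate(initial_genome)
    archive = [current]
    for _ in range(generations):
        child = evaluate(mutate(current["genome"]))
        if dominates(current, child):
            continue                          # offspring rejected
        if not any(dominates(a, child) for a in archive):
            # Drop archive members the child dominates, then archive the child.
            archive = [a for a in archive if not dominates(child, a)]
            if len(archive) < archive_size:
                archive.append(child)
            current = child                   # offspring becomes the new parent
    return archive                            # approximation of the Pareto front
```

    The returned archive is the kind of Pareto set described above: each member is an ANN controller that cannot be improved on one objective without worsening the other, letting the designer pick a trade-off between score and network size after the run.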

    Neuroevolution in Games: State of the Art and Open Challenges

    This paper surveys research on applying neuroevolution (NE) to games. In neuroevolution, artificial neural networks are trained through evolutionary algorithms, taking inspiration from the way biological brains evolved. We analyse the application of NE in games along five different axes: the role NE is chosen to play in a game, the types of neural networks used, the way these networks are evolved, how fitness is determined, and what type of input the network receives. The article also highlights important open research challenges in the field.
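    For readers new to the idea, the sketch below shows the simplest point in this design space: direct evolution of the weights of a fixed-topology network, with game score as fitness. The play_game fitness function, truncation selection and all hyperparameters are assumptions for illustration; topology-evolving methods such as NEAT are not shown.

```python
# Bare-bones neuroevolution sketch: fixed topology, direct weight encoding,
# fitness = game score returned by an assumed play_game(weights) function.
import random

def evolve_weights(n_weights, play_game, pop_size=50, generations=100, elite=5):
    population = [[random.uniform(-1.0, 1.0) for _ in range(n_weights)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=play_game, reverse=True)  # best first
        parents = ranked[:elite]                                  # truncation selection
        population = parents + [
            [w + random.gauss(0.0, 0.1) for w in random.choice(parents)]
            for _ in range(pop_size - elite)]                     # Gaussian mutation
    return max(population, key=play_game)
```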

    Ms. Pac-Man Versus Ghost Team CIG 2016 competition

    This paper introduces the revival of the popular Ms. Pac-Man Versus Ghost Team competition. We present an updated game engine with partial-observability constraints, a new multi-agent systems approach to developing Ghost agents, and several sample controllers to ease the development of entries. A restricted communication protocol is provided for the Ghosts, making for a more challenging environment than before. The competition will debut at the IEEE Computational Intelligence and Games Conference 2016. Preliminary results showing the effects of partial observability and the benefits of simple communication are also presented.
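    To make the communication constraint concrete, here is a hypothetical sketch of a budget-limited message board that ghosts could use to share sightings under partial observability. The class names, message fields and one-message-per-tick budget are assumptions for illustration and do not reproduce the competition's actual protocol.

```python
# Hypothetical restricted ghost-to-ghost messaging sketch under partial
# observability. All names and limits are assumptions, not the competition API.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass(frozen=True)
class Message:
    sender: str        # e.g. "BLINKY"
    kind: str          # restricted vocabulary, e.g. "PACMAN_SEEN"
    node: int          # maze node index the message refers to
    tick: int          # game tick at which the observation was made

class MessageBoard:
    """Shared board with a small per-tick posting budget per ghost."""
    def __init__(self, max_per_tick: int = 1):
        self.max_per_tick = max_per_tick
        self._messages: List[Message] = []
        self._sent_this_tick: Dict[Tuple[str, int], int] = {}

    def post(self, msg: Message) -> bool:
        sent = self._sent_this_tick.get((msg.sender, msg.tick), 0)
        if sent >= self.max_per_tick:
            return False                    # budget exhausted: message dropped
        self._sent_this_tick[(msg.sender, msg.tick)] = sent + 1
        self._messages.append(msg)
        return True

    def latest(self, kind: str) -> Optional[Message]:
        matches = [m for m in self._messages if m.kind == kind]
        return max(matches, key=lambda m: m.tick) if matches else None
```

    Under such a scheme, a ghost that currently sees Ms Pac-Man would post her node and the current tick, while a ghost with no line of sight could chase the most recent reported sighting instead of wandering blindly, which is the kind of benefit from simple communication the preliminary results refer to.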

    Using NEAT for Continuous Adaptation and Teamwork Formation in Pacman

    Despite games often being used as a testbed for new computational intelligence techniques, the majority of artificial intelligence in commercial games is scripted. This means that the computer agents are non-adaptive and, as a result, often inherently exploitable. In this paper, we describe a learning system designed for team strategy development in a real-time multi-agent domain. We test our system in the game of Pacman, evolving adaptive strategies for the ghosts in simulated real time against a competent Pacman player. Our agents (the ghosts) are controlled by neural networks whose weights and structure are incrementally evolved via an implementation of the NEAT (NeuroEvolution of Augmenting Topologies) algorithm. We demonstrate the design and successful implementation of this system by evolving a number of interesting and complex team strategies that outperform the ghosts' strategies in the original arcade version of the game.
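    The sketch below illustrates how an evolved network can drive a ghost's moves: sensor inputs in, one activation per direction out, pick the strongest. The single dense layer stands in for a NEAT phenotype (real NEAT networks have evolved topologies), and the sensor set is an assumption rather than the paper's input design.

```python
# Illustrative wiring of an evolved network to a ghost controller.
# The network and sensor set are assumptions, not the paper's.
import math

DIRECTIONS = ("UP", "DOWN", "LEFT", "RIGHT")

class FixedNetwork:
    """Stand-in for a NEAT-evolved phenotype: one dense tanh layer,
    one output row per direction."""
    def __init__(self, weights):                 # weights: one row per direction
        self.weights = weights

    def activate(self, inputs):
        return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
                for row in self.weights]

def ghost_move(network, dist_to_pacman, pacman_energised, dists_to_other_ghosts):
    """Example sensor vector: distance to Pacman, whether Pacman is energised,
    and distances to teammates (teammate distances allow team behaviour)."""
    inputs = [dist_to_pacman, 1.0 if pacman_energised else 0.0,
              *dists_to_other_ghosts]
    outputs = network.activate(inputs)
    return max(zip(DIRECTIONS, outputs), key=lambda pair: pair[1])[0]
```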

    Artificial intelligence in co-operative games with partial observability

    This thesis investigates artificial intelligence in co-operative games that feature partial observability. Most video games feature a combination of co-operation and partial observability. Co-operative games are games in which a team of at least two agents must achieve a shared goal of some kind. Partial observability is the restriction of how much of an environment an agent can observe. The research performed in this thesis examines the challenge of creating artificial intelligence for co-operative games that feature partial observability. The main contributions are: a demonstration that Monte-Carlo Tree Search outperforms Genetic Algorithm based agents in solving co-operative problems without communication; the creation of a co-operative partial-observability competition promoting artificial intelligence research, together with an investigation of the effect of varying partial observability on artificial intelligence; and the creation of a high-performing Monte-Carlo Tree Search agent for the game Hanabi that uses agent modelling to reason about other players.
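    One standard way to apply tree search under hidden information in settings like these is determinized search: sample complete states consistent with what the agent has observed, search each sample, and vote over the recommended actions. The sketch below shows only that general pattern and does not reproduce the thesis's Hanabi agent or its agent-modelling component; sample_hidden_state and search are assumed callables.

```python
# Hedged sketch of determinized decision-making under partial observability.
from collections import Counter

def determinized_decision(observation, sample_hidden_state, search, n_samples=20):
    """observation         : what the agent can currently see
       sample_hidden_state : draws a full state consistent with the observation
       search              : runs a search (e.g. MCTS) on a full state and
                             returns its recommended action"""
    votes = Counter()
    for _ in range(n_samples):
        full_state = sample_hidden_state(observation)
        votes[search(full_state)] += 1
    return votes.most_common(1)[0][0]        # majority-voted action
```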