
    Using machine learning techniques to create AI controlled players for video games

    This study aims to achieve higher replay and entertainment value in a game through human-like AI behaviour in computer-controlled characters called bots. To achieve this, an artificial intelligence system capable of learning from observation of human play was developed. The system uses machine learning to control the bot's state-change mechanism. The implemented system was tested by an audience of gamers and compared against bots controlled by static scripts. The data collected focused on qualitative aspects of the game's replay and entertainment value and was subjected to quantitative analysis.
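
    As a rough illustration of a bot whose state-change mechanism is driven by a model learned from observed human play, the sketch below trains a small decision-tree classifier (scikit-learn) on hypothetical (game features → next state) pairs and uses it to select the bot's next state. The state names, features, and training data are invented for the example and are not taken from the study.

```python
# Hypothetical sketch: an FSM-driven bot whose state transitions are chosen by
# a classifier trained on observations of human play (game features -> state).
from sklearn.tree import DecisionTreeClassifier

# Assumed observation format: per-tick features paired with the state the
# observed human behaviour most resembled at the next tick.
observations = [
    # [own_health, enemy_visible, enemy_distance, ammo]  ->  next state
    ([0.9, 1, 12.0, 30], "attack"),
    ([0.2, 1, 8.0, 5], "flee"),
    ([0.3, 0, 50.0, 5], "collect_health"),
    ([0.8, 0, 60.0, 25], "patrol"),
]
X = [features for features, _ in observations]
y = [next_state for _, next_state in observations]
transition_model = DecisionTreeClassifier().fit(X, y)

class LearnedFsmBot:
    """Bot that delegates its state-change decision to the learned model."""
    def __init__(self, model):
        self.model = model
        self.state = "patrol"

    def update(self, own_health, enemy_visible, enemy_distance, ammo):
        features = [[own_health, enemy_visible, enemy_distance, ammo]]
        self.state = self.model.predict(features)[0]
        return self.state

bot = LearnedFsmBot(transition_model)
print(bot.update(own_health=0.25, enemy_visible=1, enemy_distance=7.5, ammo=3))
```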

    Playing Smart - Another Look at Artificial Intelligence in Computer Games


    ASPIRE Adaptive strategy prediction in a RTS environment

    When playing a real-time strategy (RTS) game against a non-human player (bot), it is important that the bot can employ different strategies to create a challenging experience over time. In this thesis we aim to improve how the bot predicts which strategies the player is using by analysing replays of that player's games. This way the bot can change its strategy based on its knowledge of the game state and the strategies the player has used before. We constructed a Bayesian network to handle predictions of the opponent's strategy and inserted it into a pre-existing bot. Based on the results from our experiments, we can state that the Bayesian network adapted to the strategies our bot was exposed to. In addition, we can see that the Bayesian network only predicted the strategies that were possible given the obtained information about the game state.
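
    To illustrate the general shape of predicting an opponent's strategy from scouted evidence, the sketch below computes a posterior over strategies in a tiny naive-Bayes-structured network written in plain Python. The strategy labels, evidence variables, and all probabilities are invented for the example; the thesis' actual network and the parameters it derives from replay analysis are not reproduced here.

```python
# Hypothetical sketch: posterior over the opponent's strategy given boolean
# scouting observations, assuming the observations are conditionally
# independent given the strategy (a naive-Bayes-structured Bayesian network).

# Prior over strategies (invented).
prior = {"rush": 0.5, "economic_boom": 0.5}

# P(observation is True | strategy) for each scouting observation (invented).
likelihood = {
    "early_barracks": {"rush": 0.9, "economic_boom": 0.2},
    "fast_expansion": {"rush": 0.1, "economic_boom": 0.7},
}

def posterior(evidence):
    """Posterior over strategies given a dict of observed booleans."""
    scores = {}
    for strategy, p in prior.items():
        for variable, observed in evidence.items():
            p_true = likelihood[variable][strategy]
            p *= p_true if observed else (1.0 - p_true)
        scores[strategy] = p
    total = sum(scores.values())
    return {s: p / total for s, p in scores.items()}

# Scouting report: an early barracks was seen, no expansion yet.
print(posterior({"early_barracks": True, "fast_expansion": False}))
```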

    Virtual player design using self-learning via competitive coevolutionary algorithms

    The Google Artificial Intelligence (AI) Challenge is an international contest whose objective is to program the AI in a two-player real-time strategy (RTS) game. This AI is an autonomous computer program that governs the actions one of the two players executes during the game according to the state of play. Entries are evaluated via a competition mechanism consisting of two-player rounds in which each entry is tested against the others. This paper describes the use of competitive coevolutionary (CC) algorithms for the automatic generation of winning game strategies in Planet Wars, the RTS game associated with the 2010 contest. Three different versions of a primary algorithm have been tested. Their common nexus is not only the use of a Hall of Fame (HoF) to keep track of the winners of past coevolutions but also the employment of an archive of experienced players, termed the Hall of Celebrities (HoC), which puts pressure on the optimization process and guides the search to increase the strength of the solutions; their differences come from the periodic updating of the HoF on the basis of quality and diversity metrics. The goal is to optimize the AI by means of a self-learning process guided by coevolutionary search and competitive evaluation. An empirical study on the performance of a number of variants of the proposed algorithms is described and a statistical analysis of the results is conducted. In addition to the attainment of competitive bots, we also conclude that the incorporation of the HoC inside the primary algorithm helps to reduce the effects of cycling caused by the use of the HoF in CC algorithms. This work is partially supported by Spanish MICINN under Project ANYSELF (TIN2011-28627-C04-01), by Junta de Andalucía under Project P10-TIC-6083 (DNEMESIS), and by Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.
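
    A minimal sketch of competitive coevolution with a Hall-of-Fame archive is given below: candidate strategies are scored by playing against both the current population and an archive of past champions. The strategy encoding, the play_match() outcome rule, and all parameters are stand-ins for illustration and do not reproduce the paper's Planet Wars bots or its Hall-of-Celebrities variants.

```python
# Hypothetical sketch of competitive coevolution with a Hall of Fame (HoF).
import random

GENES = 5          # e.g. weights of a hand-crafted game-state evaluation
POP_SIZE = 20
GENERATIONS = 30
HOF_SIZE = 10

def random_strategy():
    return [random.uniform(-1.0, 1.0) for _ in range(GENES)]

def play_match(a, b):
    """Stand-in for a real RTS match: the higher summed 'strength' wins."""
    return sum(a) > sum(b)

def fitness(candidate, population, hall_of_fame):
    opponents = population + hall_of_fame
    return sum(play_match(candidate, opp) for opp in opponents if opp is not candidate)

def mutate(strategy, sigma=0.2):
    return [g + random.gauss(0.0, sigma) for g in strategy]

population = [random_strategy() for _ in range(POP_SIZE)]
hall_of_fame = []

for generation in range(GENERATIONS):
    scored = sorted(population, key=lambda s: fitness(s, population, hall_of_fame), reverse=True)
    champion = scored[0]
    hall_of_fame = (hall_of_fame + [champion])[-HOF_SIZE:]   # keep recent champions
    # Elitist reproduction: keep the top half, refill with mutated copies.
    survivors = scored[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]

print("best strategy weights:", max(population, key=lambda s: fitness(s, population, hall_of_fame)))
```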

    A Real-time Strategy Agent Framework and Strategy Classifier for Computer Generated Forces

    This research effort is concerned with the advancement of computer-generated forces AI for Department of Defense (DoD) military training and education. The vision of this work is agents capable of perceiving and intelligently responding to opponent strategies in real time. Our research goal is to lay the foundations for such an agent. Six research objectives are defined: 1) Formulate a strategy definition schema effective in defining a range of RTS strategies. 2) Create eight strategy definitions via the schema. 3) Design a real-time agent framework that plays the game according to a given strategy definition. 4) Generate an RTS data set. 5) Create an accurate, fast-executing strategy classifier. 6) Find the best counter-strategies for each strategy definition. The agent framework is used to play the eight strategies against each other and generate a data set of game observations. To classify the data, we first perform feature reduction using principal component analysis or linear discriminant analysis. Two classifier techniques are employed: k-means clustering with k-nearest neighbor, and a support vector machine. The resulting classifier is 94.1% accurate with an average classification execution time of 7.14 µs. Our research effort has successfully laid the foundations for a dynamic strategy agent.
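
    The classification pipeline described above (feature reduction followed by k-NN and SVM classifiers) can be sketched with scikit-learn on synthetic data, as below. The feature dimensions, class counts, and resulting accuracies are illustrative only and are unrelated to the reported 94.1% result.

```python
# Hypothetical sketch: PCA feature reduction followed by k-NN and SVM
# classifiers over per-observation feature vectors, one class per strategy.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_STRATEGIES, SAMPLES_PER_STRATEGY, N_FEATURES = 8, 200, 30

# Synthetic observations: each strategy produces features around its own centre.
centres = rng.normal(size=(N_STRATEGIES, N_FEATURES))
X = np.vstack([centres[s] + 0.5 * rng.normal(size=(SAMPLES_PER_STRATEGY, N_FEATURES))
               for s in range(N_STRATEGIES)])
y = np.repeat(np.arange(N_STRATEGIES), SAMPLES_PER_STRATEGY)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

knn = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5)).fit(X_train, y_train)
svm = make_pipeline(PCA(n_components=10), SVC(kernel="rbf")).fit(X_train, y_train)

print("PCA + k-NN accuracy:", knn.score(X_test, y_test))
print("PCA + SVM  accuracy:", svm.score(X_test, y_test))
```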

    Adapting In-Game Agent Behavior by Observation of Players Using Learning Behavior Trees

    In this paper we describe Learning Behavior Trees, an extension of the popular game AI scripting technique. Behavior Trees provide an effective way for expert designers to describe complex, in-game agent behaviors. Scripted AI captures human intuition about the structure of behavioral decisions, but suffers from brittleness and a lack of the natural variation seen in human players. Learning Behavior Trees are designed by a human designer, but are then trained by observing players performing the same role, in order to introduce human-like variation into the decision structure. We show that, using this model, a single hand-designed Behavior Tree can cover a wide variety of player behavior variations in a simplified Massively Multiplayer Online Role-Playing Game.
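
    As a toy illustration of a hand-designed tree node whose behaviour is adapted by observing players, the sketch below weights a selector's children by frequencies counted in an observed action trace. The node names and trace format are invented; the paper's Learning Behavior Trees are more elaborate than this.

```python
# Hypothetical sketch: a behaviour-tree selector that samples among its
# hand-designed children using frequencies observed in player traces,
# introducing human-like variation into the decision structure.
import random
from collections import Counter

class Action:
    def __init__(self, name):
        self.name = name
    def tick(self, blackboard):
        print("executing", self.name)
        return True

class LearnedSelector:
    """Selector whose child choice is sampled from observed player behaviour."""
    def __init__(self, children):
        self.children = children
        self.counts = Counter({c.name: 1 for c in children})  # Laplace prior

    def observe(self, trace):
        """Update child frequencies from a list of observed player actions."""
        self.counts.update(a for a in trace if a in self.counts)

    def tick(self, blackboard):
        names, weights = zip(*self.counts.items())
        chosen = random.choices(names, weights)[0]
        child = next(c for c in self.children if c.name == chosen)
        return child.tick(blackboard)

combat = LearnedSelector([Action("melee_attack"), Action("ranged_attack"), Action("retreat")])
combat.observe(["ranged_attack", "ranged_attack", "melee_attack", "retreat", "ranged_attack"])
combat.tick(blackboard={})
```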

    Rapid adaptation of video game AI


    Tactical AI in Real Time Strategy Games

    Tactical decision making in real-time strategy (RTS) games is a difficult problem, made more complex by its high degree of time sensitivity. This research effort presents a novel approach to this problem within an educational, teaching context. The particular decision focus is target selection for an artificial intelligence (AI) RTS game model. The use of multi-objective evolutionary algorithms (MOEAs) for this tactical decision-making problem allows an AI agent to produce fast, effective solutions that do not require modification of the current environment. This approach allows for the creation of a generic solution-building tool that is capable of performing well against scripted opponents without requiring expert training or deep tree searches. The experimental results validate that MOEAs can control an on-line agent capable of outperforming a variety of AI RTS opponent test scripts.
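
    A minimal sketch of multi-objective target selection is shown below: candidate unit-to-target assignments are evolved against two invented objectives (maximise expected damage, minimise expected risk) and non-dominated solutions are retained. This is an illustrative toy, not the thesis' actual MOEA or game model.

```python
# Hypothetical sketch: evolving unit-to-target assignments under two
# objectives and keeping the Pareto-non-dominated set each generation.
import random

N_UNITS, N_TARGETS = 6, 4
DAMAGE = [[random.uniform(5, 20) for _ in range(N_TARGETS)] for _ in range(N_UNITS)]
RISK   = [[random.uniform(1, 10) for _ in range(N_TARGETS)] for _ in range(N_UNITS)]

def evaluate(assignment):
    damage = sum(DAMAGE[u][t] for u, t in enumerate(assignment))
    risk   = sum(RISK[u][t]   for u, t in enumerate(assignment))
    return damage, risk                      # maximise damage, minimise risk

def dominates(a, b):
    """True if fitness a is at least as good as b in both objectives and differs."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def mutate(assignment):
    child = list(assignment)
    child[random.randrange(N_UNITS)] = random.randrange(N_TARGETS)
    return child

population = [[random.randrange(N_TARGETS) for _ in range(N_UNITS)] for _ in range(40)]
for _ in range(100):                         # simple (mu + lambda) Pareto loop
    population += [mutate(random.choice(population)) for _ in range(40)]
    scored = [(evaluate(a), a) for a in population]
    front = [a for s, a in scored if not any(dominates(s2, s) for s2, _ in scored)]
    population = (front + random.sample(population, max(0, 40 - len(front))))[:80]

best = max(population, key=lambda a: evaluate(a)[0] - evaluate(a)[1])
print("chosen target per unit:", best, "objectives:", evaluate(best))
```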