302 research outputs found

    Adaptive Shooting for Bots in First Person Shooter Games Using Reinforcement Learning

    Full text link
    In current state-of-the-art commercial first person shooter games, computer-controlled bots, also known as non-player characters, can often be easily distinguished from those controlled by humans. Tell-tale signs such as failed navigation, "sixth sense" knowledge of human players' whereabouts, and deterministic, scripted behaviors are some of the causes of this. We propose, however, that one of the biggest indicators of non-humanlike behavior in these games can be found in the weapon-shooting capability of the bot. Consistently perfect accuracy and "locking on" to opponents in their visual field from any distance are capabilities of bots that are not found in human players. Traditionally, the bot is handicapped in some way, with either a timed reaction delay or a random perturbation of its aim, which does not adapt or improve its technique over time. We hypothesize that enabling the bot to learn the skill of shooting through trial and error, in the same way a human player learns, will lead to greater variation in game-play and produce less predictable non-player characters. This paper describes a reinforcement learning shooting mechanism for adapting shooting over time based on a dynamic reward signal from the amount of damage caused to opponents. Comment: IEEE Transactions on Computational Intelligence and AI in Games (2015)
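The abstract's core idea — adapting aim from a damage-based reward — can be sketched as tabular Q-learning over discretized aim adjustments. This is an illustrative toy, not the paper's formulation: the state encoding, action set, and the `simulated_damage` stand-in for the game engine are all assumptions.

```python
import random

ACTIONS = [-2, -1, 0, 1, 2]          # assumed: discrete aim-offset adjustments
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def simulated_damage(aim_error):
    """Toy stand-in for the engine's feedback: smaller error -> more damage."""
    return max(0, 10 - 3 * abs(aim_error))

def choose(q, state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

def train(episodes=2000, seed=0):
    random.seed(seed)
    q = {}
    for _ in range(episodes):
        state = random.randint(-3, 3)        # current aim error vs. the target
        for _ in range(10):                  # one short engagement
            action = choose(q, state)
            next_state = state + action
            reward = simulated_damage(next_state)   # dynamic reward signal
            best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
            key = (state, action)
            q[key] = q.get(key, 0.0) + ALPHA * (reward + GAMMA * best_next - q.get(key, 0.0))
            state = next_state
    return q
```

After training, the greedy policy corrects positive aim errors with negative adjustments and vice versa, i.e. the bot gradually "learns to shoot" rather than being handicapped by a fixed perturbation.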

    Learning Human Behavior From Observation For Gaming Applications

    Get PDF
    The gaming industry has reached a point where improving graphics has only a small effect on how much a player will enjoy a game. One focus has turned to adding more humanlike characteristics to computer game agents. Machine learning techniques are rarely used in games, although they offer powerful means for creating humanlike behaviors in agents. The first person shooter (FPS) Quake 2 is an open-source game that offers a multi-agent environment in which to create game agents (bots). This work combines neural networks with a modeling paradigm known as context-based reasoning (CxBR) to create a contextual game observation (CONGO) system that produces Quake 2 agents that behave as a human player trains them to. A default level of intelligence is instilled into the bots through contextual scripts, to prevent a bot from being trained to be completely useless. The results show that humanness and entertainment value improved compared to a traditional scripted bot, although CONGO bots usually ranked only slightly above a novice skill level. Overall, CONGO offers the gaming community a mode of game play with promising entertainment value.
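Context-based reasoning, as used above, keeps one active context at a time, each bundling a behaviour with a rule saying when it applies. A minimal hedged sketch (not the CONGO implementation — in CONGO, a trained neural network plays the role of the hand-written applicability rules shown here):

```python
class Context:
    """One CxBR context: a name, an applicability test, and a behaviour."""
    def __init__(self, name, applies, act):
        self.name, self.applies, self.act = name, applies, act

# Illustrative contexts; the state keys are assumptions for the sketch.
CONTEXTS = [
    Context("attack",  lambda s: s["enemy_visible"] and s["ammo"] > 0, lambda s: "fire"),
    Context("reload",  lambda s: s["ammo"] == 0,                       lambda s: "reload"),
    Context("explore", lambda s: True,                                 lambda s: "wander"),
]

def step(state):
    """First applicable context wins; 'explore' is the default fallback,
    mirroring the default scripted intelligence described in the abstract."""
    for ctx in CONTEXTS:
        if ctx.applies(state):
            return ctx.name, ctx.act(state)
```

For example, `step({"enemy_visible": True, "ammo": 0})` selects the reload context even though an enemy is in view.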

    ViZDoom Competitions: Playing Doom from Pixels

    Full text link
    This paper presents the first two editions of the Visual Doom AI Competition, held in 2016 and 2017. The challenge was to create bots that compete in a multi-player deathmatch in a first-person shooter (FPS) game, Doom. The bots had to make their decisions based solely on visual information, i.e., a raw screen buffer. To play well, the bots needed to understand their surroundings, navigate, explore, and handle opponents at the same time. These aspects, together with the competitive multi-agent nature of the game, make the competition a unique platform for evaluating state-of-the-art reinforcement learning algorithms. The paper discusses the rules, solutions, results, and statistics that give insight into the agents' behaviors, and describes the best-performing agents in more detail. The results of the competition lead to the conclusion that, although reinforcement learning can produce capable Doom bots, they are not yet able to compete successfully against humans in this game. The paper also revisits the ViZDoom environment, a flexible, easy-to-use, and efficient 3D platform for research on vision-based reinforcement learning, based on the well-known first-person perspective game Doom.
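ViZDoom scenarios are typically set up through a plain-text `.cfg` file that fixes the screen buffer the agent sees and the actions it may take. A minimal illustrative fragment of the kind used for a pixels-only agent follows — the scenario file name and exact values are assumptions, not the competition's settings:

```ini
# Hypothetical minimal ViZDoom scenario configuration (illustrative values)
doom_scenario_path = deathmatch.wad   # assumed scenario file name
screen_resolution = RES_320X240       # raw screen buffer given to the agent
screen_format = CRCGCB
render_hud = true
mode = PLAYER
episode_timeout = 4200
available_buttons = { ATTACK MOVE_FORWARD TURN_LEFT TURN_RIGHT }
```

The agent then receives only the rendered frame each tick and must act through the listed buttons, which is what "playing from pixels" means here.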

    Using machine learning techniques to create AI controlled players for video games

    Get PDF
    This study aims to achieve higher replay and entertainment value in a game through human-like AI behaviour in computer-controlled characters called bots. To achieve that, an artificial intelligence system capable of learning from observing human play was developed. The system uses machine learning to control the state-change mechanism of the bot. The implemented system was tested by an audience of gamers and compared against bots controlled by static scripts. The data collected focused on qualitative aspects of the replay and entertainment value of the game and was subjected to quantitative analysis.
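One simple way to realize a learned state-change mechanism of the kind described above is to count, from observed human traces, which state the player entered in each situation, then replay the majority choice. This is a hedged sketch of the idea, not the study's actual system; the situation labels and the "patrol" fallback are illustrative.

```python
from collections import Counter, defaultdict

def learn_transitions(traces):
    """traces: list of (situation, chosen_state) pairs logged from human play."""
    counts = defaultdict(Counter)
    for situation, chosen in traces:
        counts[situation][chosen] += 1
    return counts

def next_state(counts, situation, default="patrol"):
    """Pick the state the human most often chose; fall back to a static
    script (here 'patrol') for unseen situations."""
    if situation not in counts:
        return default
    return counts[situation].most_common(1)[0][0]
```

The fallback mirrors the comparison in the abstract: where no observation data exists, the bot behaves exactly like a statically scripted one.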

    Study and analysis of behaviour decision methods of non-player characters in first-person shooters

    Get PDF
    Non-player characters (NPCs) are important in video games: without them, games would feel monotonous and lifeless. As the complexity and realism of video game graphics increase, the behaviour of NPCs must keep pace so as not to break the player's experience. For that reason, new decision methods for NPCs are studied to handle complex behaviours. First-person shooters (FPSs) have pioneered behaviour decision methods for NPCs such as behaviour trees and goal-oriented action planning, which other game genres have since adopted as standards. Some decision methods are better than others depending on what kind of behaviour we want the NPCs to possess; we therefore propose to analyse, discuss, and compare different behaviour decision methods for NPCs and implement examples to showcase these algorithms.
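Of the decision methods named above, behaviour trees are the easiest to sketch compactly. The toy below shows the two standard composite nodes (selector and sequence) driving an assumed blackboard; node names and keys are illustrative, not from the project.

```python
SUCCESS, FAILURE = "success", "failure"

def selector(*children):
    """Try children in order; succeed on the first child that succeeds."""
    def tick(bb):
        for child in children:
            if child(bb) == SUCCESS:
                return SUCCESS
        return FAILURE
    return tick

def sequence(*children):
    """Run children in order; fail on the first child that fails."""
    def tick(bb):
        for child in children:
            if child(bb) == FAILURE:
                return FAILURE
        return SUCCESS
    return tick

def condition(key):
    return lambda bb: SUCCESS if bb.get(key) else FAILURE

def action(name):
    def tick(bb):
        bb["last_action"] = name      # record what the NPC decided to do
        return SUCCESS
    return tick

# Root: attack if an enemy is visible and we have ammo, otherwise patrol.
root = selector(
    sequence(condition("enemy_visible"), condition("has_ammo"), action("attack")),
    action("patrol"),
)
```

Ticking `root` each frame re-evaluates the tree top-down, which is what makes behaviour trees reactive compared with a fixed script.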

    Modelling Human-like Behavior through Reward-based Approach in a First-Person Shooter Game

    Get PDF
    We present two examples of how human-like behavior can be implemented in a computer-player model to improve its characteristics and decision-making patterns in a video game. First, we describe a reinforcement learning model that helps choose the best weapon depending on reward values obtained from shooting combat situations. Second, we consider obstacle-avoiding path planning adapted to a tactical visibility measure. We describe an implementation of a smoothing path model that allows the use of penalties (negative rewards) for walking through "bad" tactical positions. We also study path-finding algorithms, such as an improved I-ARA* search algorithm for dynamic graphs, which mimics the human discrete decision-making model of reconsidering goals, similar to the PageRank algorithm. All the approaches demonstrate how human behavior can be modeled in applications with significant perception of intelligent agent actions.

    Clyde: A deep reinforcement learning DOOM playing agent

    Get PDF
    In this paper we present the use of deep reinforcement learning techniques in the context of playing partially observable multi-agent 3D games. These techniques have traditionally been applied to fully observable 2D environments, or navigation tasks in 3D environments. We show the performance of Clyde in comparison to other competitors within the context of the ViZDoom competition that saw 9 bots compete against each other in DOOM death matches. Clyde managed to achieve 3rd place in the ViZDoom competition held at the IEEE Conference on Computational Intelligence and Games 2016. Clyde performed very well considering its relative simplicity and the fact that we deliberately avoided a high level of customisation to keep the algorithm generic.

    Counter-Strike Deathmatch with Large-Scale Behavioural Cloning

    Full text link
    This paper describes an AI agent that plays the popular first-person-shooter (FPS) video game 'Counter-Strike: Global Offensive' (CSGO) from pixel input. The agent, a deep neural network, matches the performance of the medium-difficulty built-in AI on the deathmatch game mode, whilst adopting a humanlike play style. Unlike much prior work in games, no API is available for CSGO, so algorithms must train and run in real time. This limits the quantity of on-policy data that can be generated, precluding many reinforcement learning algorithms. Our solution uses behavioural cloning: training on a large, noisy dataset scraped from human play on online servers (4 million frames, comparable in size to ImageNet), and a smaller dataset of high-quality expert demonstrations. This scale is an order of magnitude larger than prior work on imitation learning in FPS games. Comment: Offline Reinforcement Learning Workshop at Neural Information Processing Systems, 2021
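Behavioural cloning, at its core, is supervised learning on logged (observation, action) pairs. A deliberately tiny illustration follows — a logistic model over two hand-made features predicting whether the human pressed "fire". The features, data, and model are assumptions for the sketch; the paper trains a deep network on raw frames.

```python
import math

def predict(w, b, x):
    """Probability the cloned policy presses 'fire' given features x."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def clone(demos, lr=0.5, epochs=200):
    """Fit by per-sample gradient descent on the cross-entropy loss.
    demos: list of (features, action) pairs; action 1 = human fired."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, a in demos:
            g = predict(w, b, x) - a          # dLoss/dz for cross-entropy
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Hypothetical demonstrations: features = [enemy_in_crosshair, ammo_fraction]
DEMOS = [([1.0, 1.0], 1), ([1.0, 0.5], 1), ([0.0, 1.0], 0), ([0.0, 0.2], 0)]
```

Note there is no reward anywhere: the policy only imitates the logged humans, which is exactly why the approach sidesteps the paper's no-API, real-time constraint.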

    Evolving Agents using NEAT to Achieve Human-Like Play in FPS Games

    Get PDF
    Artificial agents are commonly used in games to simulate human opponents. This allows players to enjoy games without requiring them to play online or with other players locally. Basic approaches tend to be unable to adapt strategies and often perform tasks in ways very few human players could ever achieve, which detracts from the immersion or realism of the gameplay. To achieve more human-like play, more advanced approaches are employed to either adapt to the player's ability level or make the agent play more like a human player can or would. Utilizing artificial neural networks evolved using the NEAT methodology, we attempt to produce agents to play an FPS-style game. The goal is to see if the approach produces well-playing agents with potentially human-like behaviors. We provide a large number of sensors and motors to the neural networks of a small population learning through co-evolution. Ultimately we find that the approach has limitations and is generally too slow for practical application, but it holds promise for future developments. Many extensions are presented which could improve the results and reduce training times. The agents learned to perform some basic tasks at a very rough level of skill, but were not competitive at even a beginner level.
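The evolutionary loop behind approaches like the one above can be sketched with a much-simplified neuroevolution toy: evolve the weights of a fixed, tiny network on a stand-in aiming task. Real NEAT additionally evolves network topology and uses speciation; this sketch keeps only mutate-and-select, and the task and network shape are assumptions.

```python
import math
import random

def forward(weights, inputs):
    """Fixed topology: 2 inputs -> 1 hidden tanh neuron -> 1 tanh output."""
    h = math.tanh(weights[0] * inputs[0] + weights[1] * inputs[1] + weights[2])
    return math.tanh(weights[3] * h + weights[4])

# Stand-in sensor/motor task: the output should follow the sign of input 0
# (e.g. turn toward a target); input 1 is irrelevant noise.
CASES = [((1.0, 0.5), 1.0), ((1.0, -0.5), 1.0),
         ((-1.0, 0.5), -1.0), ((-1.0, -0.5), -1.0)]

def fitness(weights):
    return -sum((forward(weights, x) - y) ** 2 for x, y in CASES)

def evolve(generations=150, pop_size=20, seed=0):
    random.seed(seed)
    pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]            # keep the fittest quarter
        pop = parents + [
            [w + random.gauss(0, 0.2) for w in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)
```

Even this stripped-down loop hints at the thesis's conclusion: many fitness evaluations per generation make the approach slow when each evaluation is a full game episode rather than four arithmetic cases.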