
    Improving Computer Game Bots' Behavior using Q-Learning

    In modern computer video games, the quality of artificial characters plays a prominent role in the success of the game in the market. The aim of the intelligent techniques used in these games, termed game AI, is to provide interesting and challenging gameplay to the player. Being highly sophisticated, these games present game developers with the same kinds of requirements and challenges faced by the academic AI community. Game companies claim to use sophisticated game AI to model artificial characters such as computer game bots, intelligent realistic AI agents. However, these bots work via simple routines pre-programmed to suit the game map, game rules, game type, and other parameters unique to each game. Mostly, illusory intelligent behaviors are programmed using simple conditional statements and are hard-coded in the bots' logic. Moreover, a game programmer has to spend considerable time configuring crisp inputs for these conditional statements. Therefore, we see a need for machine learning techniques to dynamically improve bots' behavior and save programmers' precious man-hours. We selected Q-learning, a reinforcement learning technique, to evolve dynamic intelligent bots, as it is a simple, efficient, online learning algorithm. Machine learning techniques such as reinforcement learning are known to be intractable if they use a detailed model of the world, and also require tuning of various parameters to give satisfactory performance. Therefore, for this research we examine Q-learning for evolving a few basic behaviors, viz. learning to fight and planting the bomb, for computer game bots. Furthermore, we experimented on how bots can use knowledge learned from abstract models to evolve their behavior in a more detailed model of the world. Bots evolved using these techniques become more pragmatic, believable, and capable of showing human-like behavior. This gives the game a more realistic feel and provides game programmers with an efficient learning technique for programming these bots
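    The tabular Q-learning rule the abstract relies on can be sketched as follows; the states, actions, and reward used here (e.g. `enemy_visible`, `plant_bomb`) are illustrative assumptions, not taken from the thesis:

```python
from collections import defaultdict
import random

# Hypothetical bot actions; names are illustrative only.
ACTIONS = ["fight", "flee", "plant_bomb"]

def make_q_table():
    # Every unseen state starts with zero value for each action.
    return defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(Q, state, epsilon=0.1):
    # Epsilon-greedy: explore occasionally, otherwise exploit the best-known action.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

Q = make_q_table()
# One learning step: the bot fights while the enemy is visible and is rewarded.
update(Q, "enemy_visible", "fight", reward=1.0, next_state="enemy_down")
```

    Because the update is online and per-step, the bot can keep learning during play without a world model, which is the property the abstract cites for choosing Q-learning.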

    Procedural Content Generation for Real-Time Strategy Games

    Videogames are one of the most important and profitable sectors in the entertainment industry. Nowadays, the creation of a videogame is often a large-scale endeavor and bears many similarities with, e.g., movie production. One of the central tasks in the development of a videogame is content generation, namely the definition of maps, terrains, non-player characters (NPCs) and other graphical, musical and AI-related components of the game. Such generation is costly due to its complexity, the great amount of work required and the need for specialized manpower. Hence the relevance of optimizing the process and alleviating costs. In this sense, procedural content generation (PCG) comes in handy as a means of reducing costs by using algorithmic techniques to automatically generate some game contents. PCG also provides advantages in terms of player experience since the contents generated are typically not fixed but can vary in different playing sessions, and can even adapt to the player herself. For this purpose, the underlying algorithmic technique used for PCG must also be flexible and adaptable. This is the case for computational intelligence in general and evolutionary algorithms in particular. In this work we shall provide an overview of the use of evolutionary intelligence for PCG, with special emphasis on its use within the context of real-time strategy games. We shall show how these techniques can address both playability and aesthetics, as well as improving the game AI
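    The evolutionary PCG loop described above can be sketched as a toy example; the 1-D "resource layout" genome and the symmetry-based fitness (a stand-in for player balance) are assumptions for illustration, not the paper's method:

```python
import random

def fitness(layout):
    # Higher (closer to zero) when the map is mirror-symmetric,
    # i.e. fair for both players in an RTS match-up.
    return -sum(abs(a - b) for a, b in zip(layout, reversed(layout)))

def mutate(layout, rate=0.2):
    # Randomly resample some genes (tile resource values 0..9).
    return [g if random.random() > rate else random.randint(0, 9) for g in layout]

def evolve(pop_size=20, length=8, generations=100):
    pop = [[random.randint(0, 9) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
```

    In a real system the fitness function would encode playability and aesthetic metrics, and the genome would describe full maps or terrains, but the select-mutate-evaluate loop is the same.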

    Strategic negotiation and trust in diplomacy - the DipBlue approach

    The study of games in Artificial Intelligence has a long tradition. Game playing has been a fertile environment for the development of novel approaches to build intelligent programs. Multi-agent systems (MAS), in particular, are a very useful paradigm in this regard, not only because multi-player games can be addressed using this technology, but most importantly because social aspects of agenthood that have been studied for years by MAS researchers can be applied in the attractive and controlled scenarios that games convey. Diplomacy is a multi-player strategic zero-sum board game whose main research challenges include an enormous search tree, the difficulty of determining the real strength of a position, and the accommodation of negotiation among players. Negotiation abilities bring along other social aspects, such as the need to perform trust reasoning in order to win the game. The majority of existing artificial players (bots) for Diplomacy do not exploit the strategic opportunities enabled by negotiation, focusing instead on search and heuristic approaches. This paper describes the development of DipBlue, an artificial player that uses negotiation in order to gain advantage over its opponents, through the use of peace treaties, formation of alliances and suggestion of actions to allies. A simple trust assessment approach is used as a means to detect and react to potential betrayals by allied players. DipBlue was built to work with DipGame, a MAS testbed for Diplomacy, and has been tested with other players of the same platform and variations of itself. Experimental results show that the use of negotiation increases the performance of bots involved in alliances, when full trust is assumed. In the presence of betrayals, being able to perform trust reasoning is an effective approach to reduce their impact. © Springer-Verlag Berlin Heidelberg 2015
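    A simple trust-assessment mechanism of the kind the abstract mentions can be sketched as follows; the update rule, weights, and ally threshold are assumptions for illustration, not DipBlue's actual formula:

```python
def update_trust(trust, kept_promise, delta=0.2):
    """Raise trust after a kept agreement, lower it after a betrayal.

    Betrayals are weighted more heavily than kept promises, so a single
    betrayal undoes several cooperative moves; the result is clipped to [0, 1].
    """
    trust += delta if kept_promise else -2 * delta
    return max(0.0, min(1.0, trust))

def is_ally(trust, threshold=0.5):
    # Only sufficiently trusted players are treated as allies when
    # proposing peace treaties or suggesting joint actions.
    return trust >= threshold

trust = 0.6
trust = update_trust(trust, kept_promise=False)  # an observed betrayal
```

    The asymmetric penalty is one way to make a bot react quickly to betrayals while still forgiving isolated mistakes; other decay schemes would serve the same role.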

    ASPIRE Adaptive strategy prediction in a RTS environment

    When playing a Real-Time Strategy (RTS) game against a non-human player (bot), it is important that the bot can pursue different strategies to create a challenging experience over time. In this thesis we aim to improve the way the bot predicts which strategies the player is using by analyzing replays of that player's games. This way the bot can change its strategy based upon its knowledge of the game state and the strategies the player has used before. We constructed a Bayesian network to handle predictions of the opponent's strategy and inserted it into a preexisting bot. Based on the results of our experiments, we can state that the Bayesian network adapted to the strategies our bot was exposed to. In addition, we can see that the Bayesian network predicted only the strategies that were possible given the obtained information about the game state
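    The core inference step behind such a predictor is a Bayesian update of a belief over opponent strategies given observed evidence; the strategies, observations, and likelihoods below are invented for illustration, not taken from the thesis:

```python
def bayes_update(prior, likelihood, evidence):
    """posterior(s) is proportional to P(evidence | s) * prior(s), normalized."""
    unnorm = {s: prior[s] * likelihood[s][evidence] for s in prior}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

# Belief over two hypothetical opponent strategies before any observation.
prior = {"rush": 0.5, "tech": 0.5}

# P(observation | strategy), e.g. estimated from the player's replays.
likelihood = {
    "rush": {"early_barracks": 0.8, "fast_expand": 0.1},
    "tech": {"early_barracks": 0.2, "fast_expand": 0.6},
}

# Scouting reveals an early barracks: the belief shifts toward "rush".
posterior = bayes_update(prior, likelihood, "early_barracks")
```

    A full Bayesian network chains many such conditional tables over game-state variables, but each piece of evidence is absorbed by the same rule. Note how a strategy with zero likelihood for the observed evidence would get zero posterior probability, matching the thesis's observation that only strategies consistent with the game state are predicted.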

    Deep learning for video game playing

    In this article, we review recent deep learning advances in the context of how they have been applied to play different types of video games such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in the context of applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces, and sparse rewards

    Game Artificial Intelligence: Challenges for the Scientific Community

    This paper discusses some of the most interesting challenges that members of the games research community may face in the area of applying artificial or computational intelligence techniques to the design and creation of video games. The paper focuses on three lines that will certainly have a significant influence on the game development industry in the near future, specifically the automatic generation of content, affective computing applied to video games, and the generation of behaviors that manage the decisions of entities not controlled by the human player.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech

    Improving Behavior of Computer Game Bots Using Fictitious Play

    In modern computer games, "bots" (intelligent, realistic agents) play a prominent role in the popularity of a game in the market. Typically, bots are modeled using finite-state machines and then programmed via simple conditional statements which are hard-coded in the bots' logic. Since these bots have become quite predictable to an experienced game player, a player might lose interest in the game. We propose the use of a game-theoretic learning rule called fictitious play to improve the behavior of these computer game bots, which will make them less predictable and, hence, the game more enjoyable
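    Fictitious play has the bot track the empirical frequencies of the player's past actions and best-respond to that mixed strategy. The sketch below illustrates one such step; the action names and payoff matrix are a made-up example, not from the paper:

```python
from collections import Counter

# PAYOFF[bot_action][player_action] = bot's payoff; a cyclic
# counter-relationship (defend beats attack, attack beats flank, ...).
PAYOFF = {
    "attack": {"attack": 0, "defend": -1, "flank": 1},
    "defend": {"attack": 1, "defend": 0, "flank": -1},
    "flank":  {"attack": -1, "defend": 1, "flank": 0},
}

def best_response(observed: Counter):
    # Treat the observed action counts as the player's mixed strategy
    # and pick the bot action with the highest expected payoff against it.
    total = sum(observed.values())
    freq = {a: n / total for a, n in observed.items()}

    def expected(bot_action):
        return sum(freq.get(pa, 0.0) * PAYOFF[bot_action][pa] for pa in PAYOFF)

    return max(PAYOFF, key=expected)

# The player has mostly attacked so far; the bot learns to counter that habit.
history = Counter({"attack": 7, "defend": 2, "flank": 1})
move = best_response(history)
```

    Because the response shifts as the empirical frequencies shift, the bot's behavior changes with the player's habits rather than following a fixed hard-coded script, which is the source of the reduced predictability the abstract claims.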