
    Deep learning for video game playing

    In this article, we review recent deep learning advances in the context of how they have been applied to play different types of video games, such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in the context of applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces, and sparse rewards.
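
    As a concrete illustration of the kind of agent the survey covers, the sketch below shows epsilon-greedy action selection with a small convolutional Q-network in the style of DQN agents for arcade games. It is a minimal, hedged example, not code from the article; the stacked-frame input format and the environment are assumptions.

        # Minimal DQN-style sketch (illustrative, not from the survey).
        import random
        import torch
        import torch.nn as nn

        class QNetwork(nn.Module):
            # Maps a stack of 4 grayscale 84x84 frames to one Q-value per action.
            def __init__(self, num_actions):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
                    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                    nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                    nn.Flatten(),
                    nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
                    nn.Linear(512, num_actions),
                )

            def forward(self, frames):
                return self.net(frames)

        def select_action(q_net, state, num_actions, epsilon):
            # Explore with probability epsilon, otherwise act greedily on predicted Q-values.
            if random.random() < epsilon:
                return random.randrange(num_actions)
            with torch.no_grad():
                q_values = q_net(state.unsqueeze(0))  # state: tensor of shape (4, 84, 84)
            return int(q_values.argmax(dim=1).item())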

    Developing an Effective and Efficient Real Time Strategy Agent for Use as a Computer Generated Force

    Computer Generated Forces (CGF) are used to represent units or individuals in military training and constructive simulation. The use of CGF significantly reduces the time and money required for effective training. For CGF to be effective, they must behave as a human would in the same environment. Real Time Strategy (RTS) games place players in control of a large force whose goal is to defeat the opponent. The military setting of RTS games makes them an excellent platform for the development and testing of CGF. While there has been significant research in RTS agent development, most of the developed agents are only able to exhibit good tactical behavior, lacking the ability to develop and execute overall strategies. By analyzing prior games played by an opposing agent, an RTS agent can determine the opponent's strengths and weaknesses and develop a strategy which neutralizes the strengths and capitalizes on the weaknesses. It can then execute this strategy in an RTS game. This research develops such an RTS agent called the Killer Bee Artificial Intelligence (KBAI). KBAI builds a classifier for an opposing RTS agent which allows it to predict game outcomes. It then takes this classifier, uses it to generate an effective counter-strategy, and executes the tactics required for the strategy. KBAI is both effective and efficient against four high-quality scripted agents: it wins 100% of the time, and it wins quickly. When compared to native artificial intelligence, KBAI has superior performance. It exhibits strategic behavior, as well as the tactics required to execute a developed strategy.
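
    The classifier-then-counter-strategy loop described above can be pictured with a short, hedged sketch (not the KBAI implementation): fit a classifier on features of prior games against the opponent, then rank candidate counter-strategies by predicted win probability. The feature encoding, the candidate list, and the encode helper are assumptions for illustration.

        # Hedged sketch of outcome prediction and counter-strategy selection.
        from sklearn.ensemble import RandomForestClassifier

        def train_outcome_model(prior_games):
            # prior_games: list of (feature_vector, won) pairs from games versus the opponent.
            X = [features for features, _ in prior_games]
            y = [1 if won else 0 for _, won in prior_games]
            model = RandomForestClassifier(n_estimators=100, random_state=0)
            model.fit(X, y)
            return model

        def choose_counter_strategy(model, candidate_strategies, encode):
            # encode(strategy) -> feature vector describing a game played with that strategy.
            # Assumes both wins and losses appear in the training data.
            scored = [(model.predict_proba([encode(s)])[0][1], s) for s in candidate_strategies]
            return max(scored, key=lambda t: t[0])[1]  # highest predicted win probability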

    A Survey of Monte Carlo Tree Search Methods

    Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
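
    For readers new to the method, the following compact UCT sketch shows the four phases of the core algorithm the survey describes (selection, expansion, simulation, backpropagation). The game-state interface (legal_moves, play, is_terminal, result) is an assumption, and the single-perspective reward update is a simplification of the two-player bookkeeping a real implementation needs.

        # Compact UCT sketch of the core MCTS loop (illustrative only).
        import math
        import random

        class Node:
            def __init__(self, state, parent=None, move=None):
                self.state, self.parent, self.move = state, parent, move
                self.children, self.untried = [], list(state.legal_moves())
                self.visits, self.wins = 0, 0.0

            def uct_child(self, c=1.41):
                # Balance exploitation (win rate) against exploration (visit counts).
                return max(self.children,
                           key=lambda n: n.wins / n.visits
                                         + c * math.sqrt(math.log(self.visits) / n.visits))

        def mcts(root_state, iterations=1000):
            root = Node(root_state)
            for _ in range(iterations):
                node = root
                # 1. Selection: descend while fully expanded and non-terminal.
                while not node.untried and node.children:
                    node = node.uct_child()
                # 2. Expansion: add one child for an untried move.
                if node.untried:
                    move = node.untried.pop(random.randrange(len(node.untried)))
                    node.children.append(Node(node.state.play(move), parent=node, move=move))
                    node = node.children[-1]
                # 3. Simulation: random playout to a terminal state.
                state = node.state
                while not state.is_terminal():
                    state = state.play(random.choice(state.legal_moves()))
                reward = state.result()  # assumed to be from the root player's perspective
                # 4. Backpropagation: update statistics along the selected path.
                while node is not None:
                    node.visits += 1
                    node.wins += reward
                    node = node.parent
            return max(root.children, key=lambda n: n.visits).move  # most-visited move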

    Influence map-based pathfinding algorithms in video games

    Path search algorithms, i.e., pathfinding algorithms, are used by intelligent agents to solve shortest path problems, in domains ranging from computer games and applications to robotics. Pathfinding is a particular kind of search in which the objective is to find a path between two nodes, where a node is a point in space an intelligent agent can travel to. Moving agents in physical or virtual worlds is a key part of the simulation of intelligent behavior: if a game agent cannot navigate its surrounding environment while avoiding obstacles, it does not seem intelligent. Hence pathfinding is among the core tasks of AI in computer games. Pathfinding algorithms work well for single agents navigating an environment. In real-time strategy (RTS) games, potential fields (PF) are used for multi-agent navigation in large and dynamic game environments. Influence maps, by contrast, are not normally used in pathfinding. Influence maps are a spatial reasoning technique that helps bots and players make decisions about the course of the game. An influence map represents game information, e.g., events and faction power distribution, and is ultimately used to give game agents the knowledge needed for strategic or tactical decisions. Strategic decisions are based on achieving an overall goal, e.g., capturing an enemy location and winning the game; tactical decisions are based on small and precise actions, e.g., where to install a turret or where to hide from the enemy. This dissertation focuses on a novel path search method that combines state-of-the-art pathfinding algorithms with influence maps in order to achieve better search times and lower memory consumption, as well as smoother paths.
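
    A minimal sketch of the combination described above (an assumption about the general idea, not the dissertation's actual algorithm) is ordinary A* on a grid with the influence map folded into the edge cost, so paths drift away from high-influence cells. The grid layout, influence values, and weight parameter are placeholders.

        # A* where cell influence raises traversal cost (illustrative sketch).
        import heapq
        import itertools

        def a_star_with_influence(grid, influence, start, goal, weight=1.0):
            # grid[y][x] is True for walkable cells; influence[y][x] >= 0 penalizes a cell.
            def h(p):  # Manhattan-distance heuristic (admissible while step cost >= 1)
                return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

            tie = itertools.count()  # tie-breaker so the heap never compares positions
            open_set = [(h(start), 0.0, next(tie), start, None)]
            came_from, best_g = {}, {start: 0.0}
            while open_set:
                _, g, _, cur, parent = heapq.heappop(open_set)
                if cur in came_from:
                    continue
                came_from[cur] = parent
                if cur == goal:
                    path = []
                    while cur is not None:
                        path.append(cur)
                        cur = came_from[cur]
                    return path[::-1]
                x, y = cur
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                        ng = g + 1.0 + weight * influence[ny][nx]  # influence raises the cost
                        if ng < best_g.get((nx, ny), float("inf")):
                            best_g[(nx, ny)] = ng
                            heapq.heappush(open_set,
                                           (ng + h((nx, ny)), ng, next(tie), (nx, ny), cur))
            return None  # no path between start and goal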

    Study of artificial intelligence algorithms applied to the generation of non-playable characters in arcade games

    Recently, the growing use of Artificial Intelligence in different fields has led to an increase in the research being carried out on it. One of these fields is videogames. Since the beginning of videogames, the user experience in terms of gameplay and graphics has prevailed, with less attention paid to Artificial Intelligence for creating more realistic agents and behaviours. Nowadays, thanks to better machines that can perform computationally expensive actions with less difficulty, more complex Artificial Intelligence techniques that give games better performance and more realism can be implemented. This is the case, for example, of creating intelligent agents that mimic human behaviour in a more realistic way. Different competitions are held every year for research into AI techniques through videogames. Some of the techniques under study are level generation, as in the Angry Birds AI Competition; data mining of MMORPG (massively multiplayer online role-playing game) game logs to predict players' economic engagement, in the Game Data Mining Competition; the development of RTS (Real-Time Strategy) game AI for challenging issues such as uncertainty, real-time processing, and unit management, in the StarCraft AI Competition; and research into PO (Partial Observability) in the Ms. Pac-Man Vs Ghost Team Competition, by designing controllers for Ms. Pac-Man and the Ghost Team. This work focuses on this last competition, with the objective of developing a hybrid technique consisting of a genetic algorithm and case-based reasoning. The genetic algorithm is used to generate and optimize a set of rules that the Ghosts use to play against Ms. Pac-Man. We then analyze the parameters that govern the execution of the genetic algorithm to see how they affect the fitness values obtained by the generated agents.
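
    The genetic-algorithm half of that hybrid can be sketched as follows (a hedged illustration, not the thesis code): a population of rule sets for the Ghost team is evolved with crossover and mutation, scored by a simulated game. The rule vocabulary and the simulate_games fitness function are assumptions.

        # Evolving ghost rule sets with a simple generational GA (illustrative sketch).
        import random

        RULE_CHOICES = ["chase", "ambush", "patrol", "retreat"]  # hypothetical ghost behaviours

        def random_ruleset(length=8):
            return [random.choice(RULE_CHOICES) for _ in range(length)]

        def crossover(a, b):
            cut = random.randrange(1, len(a))  # one-point crossover
            return a[:cut] + b[cut:]

        def mutate(ruleset, rate=0.1):
            return [random.choice(RULE_CHOICES) if random.random() < rate else r for r in ruleset]

        def evolve(simulate_games, pop_size=50, generations=100):
            # simulate_games(ruleset) -> fitness, e.g. how poorly Ms. Pac-Man scores on average.
            population = [random_ruleset() for _ in range(pop_size)]
            for _ in range(generations):
                ranked = sorted(population, key=simulate_games, reverse=True)
                parents = ranked[: pop_size // 2]  # truncation selection on the better half
                children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(pop_size - len(parents))]
                population = parents + children
            return max(population, key=simulate_games)  # best evolved rule set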

    A Bayesian Model for RTS Units Control applied to StarCraft

    In real-time strategy games (RTS), the player must reason about high-level strategy and planning while also handling effective tactics and even individual unit micro-management. Enabling an artificial agent to deal with such a task entails breaking down the complexity of this environment. To that end, we propose to control units locally in the Bayesian sensory-motor robot fashion, with higher-level orders integrated as perceptions. As complete inference, encompassing everything from global strategy down to individual unit needs, is intractable, we embrace incompleteness through a hierarchical model able to deal with uncertainty. We developed and applied our approach to a StarCraft AI.
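
    In the same spirit (a rough sketch of sensory-motor fusion, not the authors' model), each unit can score a handful of candidate directions by multiplying independent perception terms, such as attraction to an objective, repulsion from danger, and a bias from a higher-level order, then normalize and pick a direction. The perception functions and their weights below are assumptions.

        # Naive fusion of perceptions into a distribution over unit moves (illustrative).
        import math
        import random

        DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]  # candidate moves, plus "hold"

        def direction_distribution(unit_pos, objective, threat_map, order_bias):
            scores = []
            for d in DIRECTIONS:
                nxt = (unit_pos[0] + d[0], unit_pos[1] + d[1])
                # Each factor is a probability-like term; treating them as independent
                # is the modelling assumption that keeps the computation tractable.
                p_objective = math.exp(-0.3 * (abs(nxt[0] - objective[0]) + abs(nxt[1] - objective[1])))
                p_safe = math.exp(-threat_map.get(nxt, 0.0))
                p_order = order_bias.get(d, 1.0)  # higher-level order injected as a perception
                scores.append(p_objective * p_safe * p_order)
            total = sum(scores)
            return [s / total for s in scores]  # normalized distribution over directions

        def pick_direction(distribution):
            # Sampling rather than taking the argmax keeps some variability in unit behaviour.
            return random.choices(DIRECTIONS, weights=distribution, k=1)[0]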

    Evolutionary Computation 2020

    Intelligent optimization is based on the mechanism of computational intelligence to refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency, and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, whale optimization algorithm, differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severe nonlinear problem in one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvement and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.
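
    As a small, self-contained example of one algorithm family named above, the sketch below implements plain particle swarm optimization for a continuous objective. The hyperparameters and the sphere test function are placeholders, not taken from the book.

        # Plain PSO minimizing a continuous objective (illustrative sketch).
        import random

        def pso(objective, dim, pop=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
            pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
            vel = [[0.0] * dim for _ in range(pop)]
            pbest = [p[:] for p in pos]                      # each particle's best position
            pbest_val = [objective(p) for p in pos]
            best_i = min(range(pop), key=lambda i: pbest_val[i])
            gbest, gbest_val = pbest[best_i][:], pbest_val[best_i]
            for _ in range(iters):
                for i in range(pop):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    val = objective(pos[i])
                    if val < pbest_val[i]:                   # minimization
                        pbest[i], pbest_val[i] = pos[i][:], val
                        if val < gbest_val:
                            gbest, gbest_val = pos[i][:], val
            return gbest

        # Example: minimize the sphere function sum(x_i^2).
        best = pso(lambda x: sum(v * v for v in x), dim=5)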

    Coevolutionary Approaches to Generating Robust Build-Orders for Real-Time Strategy Games

    We aim to find winning build-orders for Real-Time Strategy games. Real-Time Strategy games provide a variety of challenges, from short-term control to longer-term planning. We focus on a longer-term planning problem: which units to build, and in what order to produce them, so that a player successfully defeats the opponent. Plans which address unit construction scheduling problems in Real-Time Strategy games are called build-orders. A robust build-order defeats many opponents, while a strong build-order defeats opponents quickly. However, no single build-order defeats all other build-orders, and build-orders that defeat many opponents may still lose against a specific opponent. Other researchers have only investigated generating build-orders that defeat a specific opponent, rather than finding robust, strong build-orders. Additionally, previous research has not applied coevolutionary algorithms to generating build-orders. In contrast, our research makes three main contributions towards finding robust, strong build-orders. First, we apply a coevolutionary algorithm to finding robust build-orders. Compared to exhaustive search, a genetic algorithm finds the strongest build-orders while a coevolutionary algorithm finds more robust build-orders. Second, we show that case-injection enables coevolution to learn from specific opponents while maintaining robustness. Build-orders produced with coevolution and case-injection learn to defeat, or play like, the injected build-orders. Third, we show that coevolved build-orders benefit from a representation which includes branches and loops. Coevolution utilizes multiple branches and loops to create build-orders that are stronger than build-orders without them. We believe this work provides evidence that coevolutionary algorithms are a viable approach to creating robust, strong build-orders for Real-Time Strategy games.
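
    A hedged sketch of the competitive-coevolution idea (not the authors' system) is shown below: two populations of build-orders evolve against each other, each individual scored by how often it beats sampled opponents from the other population. The build-order encoding and the play_match simulator are assumptions for illustration.

        # Two-population competitive coevolution of build-orders (illustrative sketch).
        import random

        UNITS = ["worker", "barracks", "marine", "factory", "tank"]  # hypothetical build actions

        def random_build_order(length=12):
            return [random.choice(UNITS) for _ in range(length)]

        def fitness(build_order, opponents, play_match, samples=5):
            # play_match(a, b) -> True if build-order a defeats build-order b in simulation.
            rivals = random.sample(opponents, min(samples, len(opponents)))
            return sum(play_match(build_order, r) for r in rivals) / len(rivals)

        def coevolve(play_match, pop_size=30, generations=50, mutation_rate=0.1):
            pop_a = [random_build_order() for _ in range(pop_size)]
            pop_b = [random_build_order() for _ in range(pop_size)]
            for _ in range(generations):
                for pop, rivals in ((pop_a, pop_b), (pop_b, pop_a)):
                    ranked = sorted(pop, key=lambda bo: fitness(bo, rivals, play_match),
                                    reverse=True)
                    survivors = ranked[: pop_size // 2]
                    children = []
                    for _ in range(pop_size - len(survivors)):
                        a, b = random.choice(survivors), random.choice(survivors)
                        cut = random.randrange(1, len(a))            # one-point crossover
                        child = [random.choice(UNITS) if random.random() < mutation_rate else u
                                 for u in a[:cut] + b[cut:]]
                        children.append(child)
                    pop[:] = survivors + children                    # evolve this side in place
            return pop_a, pop_b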

    Algorithms for Adaptive Game-playing Agents
