
    Using a Cognitive Architecture for Opponent Target Prediction

    One of the most important aspects of a compelling game AI is that it anticipates the player’s actions and responds to them in a convincing manner. The first step towards doing this is to understand what the player is doing and predict their possible future actions. In this paper we present an approach in which the AI system focuses on testing hypotheses made about the player’s actions, using an implementation of a cognitive architecture inspired by the simulation theory of mind. The application considered here is predicting the target that the player is heading towards in an RTS-style game. We improve the prediction accuracy and reduce the number of hypotheses needed by using path planning and path clustering.
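    A minimal Python sketch of the hypothesis-testing idea described above (not the paper’s actual architecture): each candidate target is a hypothesis whose weight is updated by how well the player’s observed heading matches a path toward that target. `path_direction` stands in for a full path planner, and all names are illustrative.

```python
import math

def path_direction(start, goal):
    """Unit vector from start toward goal (stand-in for a planned path step)."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    d = math.hypot(dx, dy) or 1.0
    return dx / d, dy / d

def update_hypotheses(position, velocity, hypotheses):
    """Re-weight each candidate target by how well the observed velocity
    matches the direction a planned path toward that target would take."""
    speed = math.hypot(*velocity) or 1.0
    heading = (velocity[0] / speed, velocity[1] / speed)
    scores = {}
    for target, prior in hypotheses.items():
        px, py = path_direction(position, target)
        alignment = max(0.0, heading[0] * px + heading[1] * py)  # clipped cosine similarity
        scores[target] = prior * (0.1 + alignment)               # small floor keeps hypotheses alive
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}

# Example: three candidate targets, uniform prior, player moving mostly right.
hypotheses = {(10, 0): 1 / 3, (0, 10): 1 / 3, (10, 10): 1 / 3}
print(update_hypotheses((0, 0), (1.0, 0.1), hypotheses))
```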

    Influence map-based pathfinding algorithms in video games

    Path search algorithms, i.e., pathfinding algorithms, are used by intelligent agents to solve shortest-path problems in domains ranging from computer games and applications to robotics. Pathfinding is a particular kind of search in which the objective is to find a path between two nodes, where a node is a point in space to which an intelligent agent can travel. Moving agents through physical or virtual worlds is a key part of simulating intelligent behavior: if a game agent cannot navigate its surrounding environment without colliding with obstacles, it does not seem intelligent. This is why pathfinding is among the core tasks of AI in computer games. Pathfinding algorithms work well for single agents navigating an environment. In real-time strategy (RTS) games, potential fields (PF) are used for multi-agent navigation in large and dynamic game environments. Influence maps, by contrast, are not used in pathfinding. Influence maps are a spatial reasoning technique that helps bots and players make decisions about the course of the game. An influence map represents game information, e.g., events and faction power distribution, and is ultimately used to give game agents the knowledge to make strategic or tactical decisions. Strategic decisions are aimed at an overall goal, e.g., capturing an enemy location and winning the game. Tactical decisions concern small and precise actions, e.g., where to install a turret or where to hide from the enemy. This dissertation focuses on a novel path search method that combines state-of-the-art pathfinding algorithms with influence maps in order to achieve better time performance and lower memory consumption, as well as smoother paths.
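    As an illustration of the general idea of combining a pathfinder with an influence map (a sketch only, not the dissertation’s algorithm), the following Python fragment runs A* on a 4-connected grid while adding each cell’s influence value to the step cost, so high-influence regions are avoided unless the detour becomes too expensive.

```python
import heapq

def influence_aware_astar(grid_cost, influence, start, goal):
    """A* on a 4-connected grid; each step cost is the terrain cost of the
    entered cell plus its influence value (higher influence = less desirable)."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]
    g_best, came_from = {start: 0.0}, {start: None}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:                      # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g_best[node] + grid_cost[nr][nc] + influence[nr][nc]
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb], came_from[nb] = ng, node
                    heapq.heappush(open_set, (ng + h(nb), nb))
    return None  # no path exists

# Example: uniform terrain with a high-influence (dangerous) band in the middle.
terrain = [[1] * 5 for _ in range(5)]
danger = [[0, 0, 9, 0, 0] for _ in range(5)]
print(influence_aware_astar(terrain, danger, (0, 0), (4, 4)))
```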

    A scouting strategy for real-time strategy games

    © 2014 ACM. Real-time strategy (RTS) is a sub-genre of strategy video games. RTS games are more realistic, with dynamic and time-constrained gameplay, having abandoned the turn-based rules of their ancestors. Playing with and against computer-controlled players is a pervasive phenomenon in RTS games, due to convenience and the preference of groups of players. Hence, better game-playing agents can enhance the game-playing experience by acting as smart opponents or collaborators. One way of improving game-playing agents’ performance, in terms of economic expansion and tactical battlefield arrangement, is to understand the game environment. Traditional commercial RTS game-playing agents address this issue by directly accessing game maps and extracting strategic features. Since human players are unable to access the same information, this is a form of "cheating AI", which has been known to negatively affect player experience. Thus, we develop a scouting mechanism for RTS game-playing agents that enables game units to explore game environments automatically in a realistic fashion. Our research is grounded in prior work on robotic exploration, from which we derive a hierarchical multi-criterion decision-making (MCDM) strategy to address the incomplete-information problem in RTS settings.
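    The weighted-sum scoring below is a hypothetical illustration of one multi-criterion decision-making step for choosing where to send a scout; the criteria names, weights, and normalization are assumptions, not the paper’s hierarchical MCDM strategy.

```python
def score_scout_target(candidate, weights):
    """Weighted-sum multi-criterion score for a candidate scouting location.
    Each criterion is assumed to be normalized to [0, 1] beforehand."""
    return sum(weights[name] * value for name, value in candidate["criteria"].items())

def choose_scout_target(candidates, weights):
    """Pick the candidate location with the highest aggregate score."""
    return max(candidates, key=lambda c: score_scout_target(c, weights))

# Hypothetical criteria: how stale our information is, expected information
# gain, and how safe the trip is (all higher-is-better).
weights = {"staleness": 0.4, "info_gain": 0.4, "safety": 0.2}
candidates = [
    {"name": "enemy_base",     "criteria": {"staleness": 0.9, "info_gain": 0.8, "safety": 0.2}},
    {"name": "expansion_site", "criteria": {"staleness": 0.5, "info_gain": 0.6, "safety": 0.9}},
]
print(choose_scout_target(candidates, weights)["name"])
```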

    A cloud-based path-finding framework: Improving the performance of real-time navigation in games

    This paper reviews current research in Cloud utilisation within games and finds that there is little beyond Cloud gaming and Cloud MMOs. To this end, a proof-of-concept Cloud-based path-finding framework is introduced. It was developed to determine the practicality of relocating the computation for navigation problems from consumer-grade clients to powerful business-grade servers, with the aim of improving performance. The results gathered suggest that the solution might be impractical; however, because of the poor quality of the data, the results are largely inconclusive. Recommendations and questions for future research are therefore posed.
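    A rough sketch of how a game client might offload a path query to a remote service and degrade gracefully when the round trip exceeds the frame budget; the endpoint, payload format, and timeout are illustrative assumptions, not the framework described in the paper.

```python
import json
import urllib.request

def request_path(server_url, start, goal, timeout=0.05):
    """Ask a remote path-finding service for a path; fall back to a trivial
    straight-line placeholder if the request fails or takes too long."""
    payload = json.dumps({"start": start, "goal": goal}).encode("utf-8")
    req = urllib.request.Request(server_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read())["path"]
    except Exception:
        return [start, goal]  # degrade gracefully when the Cloud is unreachable or slow

# Hypothetical usage (no real service behind this URL):
# path = request_path("http://example.com/pathfind", [0, 0], [40, 25])
```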

    Symbolic Reasoning for Hearthstone

    Trading-card games are an interesting problem domain for game AI, as they feature challenges, such as highly variable game mechanics, that are not encountered with this intensity in many other genres. We present an expert system forming a player-level AI for the digital trading-card game Hearthstone. The bot uses a symbolic approach with a semantic structure, acting as an ontology, to represent both static descriptions of the game mechanics and dynamic game-state memories. Methods are introduced to reduce the amount of expert knowledge, such as popular moves or strategies, represented in the ontology, as the bot should derive such decisions symbolically from its knowledge base. We narrow down the problem domain, selecting the relevant aspects for a play-to-win bot and comparing an ontology-driven approach to other approaches such as machine learning and case-based reasoning. On this basis, we describe how the semantic structure is linked with the game state and how different aspects, such as memories, are encoded. An example illustrates how the bot, at runtime, uses rules and queries on the semantic structure, combined with a simple utility system, to perform reasoning and strategic planning. Finally, we present an evaluation conducted by fielding the bot against the stock “Expert” AI that ships with Hearthstone, as well as human opponents of various skill levels, in order to assess how well the bot plays. How believably the bot reasons is assessed through a pseudo-Turing test.
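    A toy Python sketch of pairing symbolic rules with a simple utility system, in the spirit described above; the legality rule, utility terms, and card fields are invented for illustration and do not reflect the paper’s ontology.

```python
def legal_plays(hand, mana):
    """Rule: a card can be played if its mana cost does not exceed the mana available."""
    return [card for card in hand if card["cost"] <= mana]

def utility(card, board):
    """Toy utility: reward removing enemy threats, then raw stats per mana spent."""
    threat_removal = 2.0 if card.get("removes_threat") and board["enemy_minions"] else 0.0
    stats_per_mana = (card.get("attack", 0) + card.get("health", 0)) / max(card["cost"], 1)
    return threat_removal + stats_per_mana

def choose_play(hand, mana, board):
    """Filter with the symbolic rule, then pick the highest-utility candidate."""
    candidates = legal_plays(hand, mana)
    return max(candidates, key=lambda c: utility(c, board)) if candidates else None

hand = [
    {"name": "River Crocolisk", "cost": 2, "attack": 2, "health": 3},
    {"name": "Fireball", "cost": 4, "removes_threat": True},
]
print(choose_play(hand, 4, {"enemy_minions": 1})["name"])
```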

    Influence-based motion planning algorithms for games

    In games, motion planning concerns the movement of non-player characters (NPCs) from one place to another in the game world. In today’s video games there are two major approaches to motion planning, namely path-finding and influence fields. Path-finding algorithms deal with the problem of finding a path in a weighted search graph whose nodes represent locations of a game world and whose edges have an associated cost/weight. In video games, the most widely employed pathfinders are A* and its variants, namely Dijkstra’s algorithm and best-first search. As will be addressed in detail, these pathfinders cannot simulate or mimic the natural movement of humans, which is usually without discontinuities, i.e., smooth, even when there are sudden changes in direction. Additionally, these pathfinders lack adaptivity when the environment changes: they cannot handle search graph modifications during path search caused by an event in the game (e.g., when a bridge connecting two graph nodes is destroyed by a missile). On the other hand, influence fields are a motion planning technique that does not suffer from these two problems, i.e., they provide smooth, human-like movement and are adaptive. As seen further ahead, we resort to a differentiable real function to represent the influence field associated with a game map as a summation of equally differentiable functions, each associated with a repeller or an attractor. The differentiability ensures that there are no abrupt changes in the influence field; consequently, the movement of any NPC will be smooth, regardless of whether the NPC moves through the game world in the direction of increasing or decreasing values of the function. Thus, it is enough to have a spline curve interpolating the path nodes to mimic smooth, human-like movement. Moreover, given the nature of the differentiable real functions that represent an influence field, the removal or addition of a repeller/attractor (as the result of the destruction or construction of a bridge) does not alter the differentiability of the global function associated with the map of a game. That is to say, an influence field is adaptive, in that it adapts to changes in the virtual world during gameplay. 
    In spite of solving the two problems of pathfinders, an influence field may still have local extrema which, if reached, will prevent an NPC from escaping that location. The local extremum problem never occurs with pathfinders, because the goal node is the sole global minimum of the cost function. Therefore, by conjugating the cost function with the influence function, the NPC will never be trapped at a local extremum of the influence function, because the minimization of the cost function ensures that it always moves in the direction of the goal node. That is, the conjugation of pathfinders and influence fields results in motion planning algorithms that simultaneously solve the problems of pathfinders and of influence fields. As will be demonstrated throughout this thesis, it is possible to combine influence fields with the A*, Dijkstra’s, and best-first search algorithms, so as to obtain hybrid algorithms that are adaptive. Moreover, these algorithms can generate smooth paths that resemble the ones traveled by human beings, though path smoothness is not the main focus of this thesis. Nevertheless, it is not always possible to perform this conjugation between influence fields and pathfinders; one example of such a pathfinder is the fringe search algorithm, as is the new pathfinder proposed in this thesis, designated best neighbor first search.
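    The fragment below illustrates, under simplifying assumptions, an influence field built as a sum of differentiable (Gaussian) terms for attractors and repellers, and a combined cost that conjugates distance-to-goal with the influence value so the agent keeps moving toward the goal even near local extrema of the field; the specific functions and weights are illustrative, not the thesis’s formulation.

```python
import math

def influence(point, attractors, repellers, sigma=3.0):
    """Differentiable influence field: a sum of Gaussian bumps, positive for
    attractors and negative for repellers, so adding or removing one never
    introduces a discontinuity."""
    def gaussian(p, q):
        d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        return math.exp(-d2 / (2.0 * sigma ** 2))
    return (sum(gaussian(point, a) for a in attractors)
            - sum(gaussian(point, r) for r in repellers))

def combined_cost(point, goal, attractors, repellers, alpha=1.0):
    """Conjugate the path cost (distance to goal) with the influence field:
    the distance term keeps the NPC moving toward the goal even when the
    influence term alone has a local extremum at this point."""
    distance = math.hypot(goal[0] - point[0], goal[1] - point[1])
    return distance - alpha * influence(point, attractors, repellers)

# A point near a repeller gets a higher combined cost than plain distance alone.
print(combined_cost((2, 2), (10, 10), attractors=[(10, 10)], repellers=[(5, 5)]))
```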

    Enhancing automated red teaming with Monte Carlo Tree Search

    This study investigated novel Automated Red Teaming methods that support re-planning. Traditional Automated Red Teaming (ART) approaches usually use evolutionary computing methods to evolve plans using simulations. A drawback of this approach is the inability to change a team’s strategy part way through a simulation. This study focused on a Monte-Carlo Tree Search (MCTS) method in an ART environment that supports re-planning, leading to better strategy decisions and a higher average score.
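    A minimal flat Monte-Carlo sketch (UCB1 over top-level actions rather than a full MCTS tree) to illustrate how simulation-based action selection supports re-planning by simply re-running the search from the current state; the action names and simulator are placeholders, not the study’s model.

```python
import math
import random

def mcts_choose(state, actions, simulate, budget=1000, c=1.4):
    """Flat Monte-Carlo with UCB1 action selection: repeatedly pick the action
    with the highest upper confidence bound, roll out a simulation, and keep
    running means. Re-planning mid-scenario = calling this again from the new state."""
    stats = {a: [0, 0.0] for a in actions}  # action -> [visits, total reward]
    for t in range(1, budget + 1):
        def ucb(a):
            n, total = stats[a]
            if n == 0:
                return float("inf")
            return total / n + c * math.sqrt(math.log(t) / n)
        action = max(actions, key=ucb)
        reward = simulate(state, action)
        stats[action][0] += 1
        stats[action][1] += reward
    return max(actions, key=lambda a: stats[a][1] / max(stats[a][0], 1))

# Toy simulator in which "flank" is better on average than the alternatives.
simulate = lambda s, a: random.random() + (0.2 if a == "flank" else 0.0)
print(mcts_choose({"tick": 0}, ["assault", "flank", "hold"], simulate))
```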

    Evolving Effective Micro Behaviors for Real-Time Strategy Games

    Real-time strategy games have become a new frontier of artificial intelligence research. Advances in real-time strategy game AI, as with chess and checkers before, will significantly advance the state of the art in AI research. This thesis investigates using heuristic search algorithms to generate effective micro behaviors in combat scenarios for real-time strategy games. Macro and micro management are two key aspects of real-time strategy games. While good macro helps a player collect more resources and build more units, good micro helps a player win skirmishes against equal numbers of opponent units, or win even when outnumbered. In this research, we use influence maps and potential fields as a basis representation to evolve micro behaviors. We first compare genetic algorithms against two types of hill climbers for generating competitive unit micro management. Second, we investigate the use of case-injected genetic algorithms to quickly and reliably generate high-quality micro behaviors. We then compactly encode micro behaviors, including influence maps, potential fields, and reactive control, into fourteen parameters and use genetic algorithms to search for a complete micro bot, ECSLBot. We compare the performance of our ECSLBot with two state-of-the-art bots, UAlbertaBot and Nova, on several skirmish scenarios in the popular real-time strategy game StarCraft. The results show that the ECSLBot tuned by genetic algorithms outperforms UAlbertaBot and Nova in kiting efficiency, target selection, and fleeing. In addition, the same approach works to create competitive micro behaviors in another game, SeaCraft. Using parallelized genetic algorithms to evolve parameters in SeaCraft, we are able to speed up the evolutionary process from twenty-one hours to nine minutes. We believe this work provides evidence that genetic algorithms and our representation may be a viable approach to creating effective micro behaviors for winning skirmishes in real-time strategy games.
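    A compact genetic-algorithm sketch over a fourteen-parameter vector, illustrating the search loop described above; the fitness function, operators, and hyperparameters are stand-ins, since the thesis evaluates parameter vectors through simulated skirmishes rather than a closed-form score.

```python
import random

PARAM_COUNT = 14  # the thesis encodes micro behavior in fourteen parameters

def evaluate(params):
    """Stand-in fitness: in the thesis this would be the outcome of simulated
    skirmishes (damage dealt, units lost, time); here it is a dummy score."""
    return -sum((p - 0.5) ** 2 for p in params)

def evolve(pop_size=50, generations=100, mutation_rate=0.1):
    """Elitist GA with one-point crossover and Gaussian mutation on [0, 1] genes."""
    pop = [[random.random() for _ in range(PARAM_COUNT)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, PARAM_COUNT)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
                     if random.random() < mutation_rate else p for p in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=evaluate)

print(evaluate(evolve()))
```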