
    APPLICATION OF THE A* PATHFINDING ALGORITHM IN THE ANDROID-BASED GAME DUAL LEGACY

    Games contain two types of characters: player characters and NPCs (Non-Player Characters), which cannot be controlled by the player. To overcome the static nature of NPCs, they are given artificial intelligence in the form of algorithms; here, the algorithm given to the NPC is A Star (A*), a pathfinding algorithm for finding the shortest path to the player while avoiding existing obstacles. The enemy NPC is tasked with chasing the player and reducing the player's health. The A* algorithm calculates the cost of one candidate path, stores it, and then calculates the costs of the other paths; once all paths have been evaluated, it selects the shortest one. The A* pathfinding algorithm is applied to the enemy NPCs of Dual Legacy, an Android-based 2D side-scrolling RPG, with the expectation that the NPCs can search for and chase the player along the closest path. The conclusion of this research is that the A Star algorithm was successfully implemented: the enemy NPC approaches the player whenever the player enters the enemy's calculation range, taking the shortest route and avoiding existing obstacles.
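    As a minimal sketch of the idea described above (not the paper's actual implementation), the following Python snippet runs A* on a small 2D tile grid with 4-way movement and a Manhattan-distance heuristic; the grid, function, and parameter names are illustrative assumptions, since the paper's Android game map representation is not given here.

    import heapq

    def astar(grid, start, goal):
        """A* on a 2D grid: 0 = walkable tile, 1 = obstacle.
        Returns the shortest path from start to goal as a list of (row, col),
        or None if the goal is unreachable. Illustrative sketch only."""
        rows, cols = len(grid), len(grid[0])

        def h(cell):                        # Manhattan-distance heuristic
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
        came_from = {}
        best_g = {start: 0}

        while open_heap:
            f, g, cell = heapq.heappop(open_heap)
            if cell == goal:                # reconstruct path back to start
                path = [cell]
                while cell in came_from:
                    cell = came_from[cell]
                    path.append(cell)
                return path[::-1]
            if g > best_g.get(cell, float("inf")):
                continue                    # stale heap entry
            r, c = cell
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                    ng = g + 1
                    if ng < best_g.get(nb, float("inf")):
                        best_g[nb] = ng
                        came_from[nb] = cell
                        heapq.heappush(open_heap, (ng + h(nb), ng, nb))
        return None

    # Example: the enemy NPC plans a path to the player's tile around an obstacle.
    level = [[0, 0, 0, 0],
             [0, 1, 1, 0],
             [0, 0, 0, 0]]
    print(astar(level, start=(0, 0), goal=(2, 3)))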

    Influence-based motion planning algorithms for games

    In games, motion planning concerns the movement of non-player characters (NPCs) from one place to another in the game world. In today's video games there are two major approaches to motion planning, namely path-finding and influence fields. Path-finding algorithms deal with the problem of finding a path in a weighted search graph, whose nodes represent locations of a game world and whose connections between nodes (edges) have an associated cost/weight. In video games, the most widely employed pathfinders are A* and its variants, namely Dijkstra's algorithm and best-first search. As will be addressed in detail further on, these pathfinders cannot simulate or mimic the natural movement of humans, which is usually without discontinuities, i.e., smooth, even when there are sudden changes in direction. Additionally, these pathfinders suffer from a lack of adaptivity when changes to the environment occur: they cannot handle search graph modifications during path search as a consequence of an event that happened in the game (e.g., when a bridge connecting two graph nodes is destroyed by a missile).

    On the other hand, influence fields are a motion planning technique that does not suffer from the two problems above, i.e., they can provide smooth human-like movement and are adaptive. As seen further ahead, we resort to a differentiable real function to represent the influence field associated with a game map as a summation of equally differentiable functions, each associated with a repeller or an attractor. The differentiability ensures that there are no abrupt changes in the influence field; consequently, the movement of any NPC will be smooth, regardless of whether the NPC walks through the game world in the increasing or decreasing direction of the function. Thus, it is enough to have a spline curve that interpolates the path nodes to mimic smooth human-like movement. Moreover, given the nature of the differentiable real functions that represent an influence field, the removal or addition of a repeller/attractor (as the result of the destruction or construction of a bridge) does not alter the differentiability of the global function associated with the map of a game. That is to say, an influence field is adaptive, in that it adapts to changes in the virtual world during gameplay.

    In spite of solving the two problems of pathfinders, an influence field may still have local extrema, which, if reached, will prevent an NPC from fleeing from that location. The local extremum problem never occurs in pathfinders because the goal node is the sole global minimum of the cost function. Therefore, by conjugating the cost function with the influence function, the NPC will never be detained at any local extremum of the influence function, because the minimization of the cost function ensures that it always walks in the direction of the goal node. That is, the conjugation of pathfinders and influence fields results in motion planning algorithms that simultaneously solve the problems of pathfinders and of influence fields. As will be demonstrated throughout this thesis, it is possible to combine influence fields with the A*, Dijkstra's, and best-first search algorithms, so that we obtain hybrid algorithms that are adaptive. Besides, these algorithms can generate smooth paths that resemble the ones traveled by human beings, though path smoothness is not the main focus of this thesis. Nevertheless, it is not always possible to perform this conjugation between influence fields and pathfinders; examples of such pathfinders are the fringe search algorithm and the new pathfinder proposed in this thesis, designated best neighbor first search.
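    The following Python sketch illustrates, under assumptions of our own (Gaussian bumps on a tile grid and an additive penalty on edge costs, which is not necessarily the thesis's exact formulation), how an influence field built as a sum of differentiable repeller/attractor functions can be conjugated with A*'s cost function: the step cost is inflated where repulsive influence is high, while the goal-directed heuristic keeps pulling the search toward the goal, so the NPC is never trapped at a local extremum of the field alone. Adding or removing a repeller/attractor is just an edit to the sources list, mirroring the adaptivity argument above. All names and parameters are illustrative.

    import heapq, math

    # Each repeller (positive weight) or attractor (negative weight) contributes a
    # smooth Gaussian bump, so the summed field stays differentiable everywhere.
    def influence(point, sources):
        x, y = point
        total = 0.0
        for (sx, sy, weight, radius) in sources:
            d2 = (x - sx) ** 2 + (y - sy) ** 2
            total += weight * math.exp(-d2 / (2.0 * radius ** 2))
        return total

    def hybrid_astar(grid, start, goal, sources, alpha=2.0):
        """A* whose edge cost is conjugated with the influence field:
        step cost = 1 + alpha * max(0, influence(neighbor)).
        The Manhattan heuristic keeps the search moving toward the goal node."""
        rows, cols = len(grid), len(grid[0])
        h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
        open_heap = [(h(start), 0.0, start)]
        came_from, best_g = {}, {start: 0.0}
        while open_heap:
            f, g, cell = heapq.heappop(open_heap)
            if cell == goal:                 # reconstruct path back to start
                path = [cell]
                while cell in came_from:
                    cell = came_from[cell]
                    path.append(cell)
                return path[::-1]
            if g > best_g.get(cell, float("inf")):
                continue                     # stale heap entry
            r, c = cell
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                    step = 1.0 + alpha * max(0.0, influence(nb, sources))
                    ng = g + step
                    if ng < best_g.get(nb, float("inf")):
                        best_g[nb] = ng
                        came_from[nb] = cell
                        heapq.heappush(open_heap, (ng + h(nb), ng, nb))
        return None

    # One repeller near the middle of an open map; weight > 0 pushes paths away from it.
    level = [[0] * 6 for _ in range(5)]
    danger = [(2.0, 3.0, 5.0, 1.5)]          # (row, col, weight, radius)
    print(hybrid_astar(level, (0, 0), (4, 5), danger))

    Because the added penalty is never negative, the step cost stays at least 1, so the Manhattan heuristic remains admissible and the hybrid search still returns a goal-reaching path.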

    Search for an Immobile Hider on a Stochastic Network

    Harry hides on an edge of a graph and does not move from there. Sally, starting from a known origin, tries to find him as soon as she can. Harry's goal is to be found as late as possible. At any given time, each edge of the graph is either active or inactive, independently of the other edges, with a known probability of being active. This situation can be modeled as a zero-sum two-person stochastic game. We show that the game has a value and we provide upper and lower bounds for this value. Finally, by generalizing optimal strategies of the deterministic case, we provide more refined results for trees and Eulerian graphs.
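    A toy Monte-Carlo sketch of the setup, under simplifying assumptions that are ours rather than the paper's (Sally inspects edges in a fixed order, an inactive edge simply costs one extra time unit per attempt, and edge statuses are redrawn independently each period): it estimates Sally's expected time to find Harry for a fixed search order against a fixed hiding distribution.

    import random

    def expected_search_time(edges, hide_probs, route, p_active, trials=10000):
        """Estimate Sally's expected time to find Harry (toy model, not the paper's exact dynamics).
        Harry's edge is drawn from hide_probs; Sally inspects edges in the order `route`,
        and each inspection attempt takes one time unit, succeeding only if the edge is
        active that period (active with probability p_active[e], redrawn each period)."""
        total = 0.0
        for _ in range(trials):
            hide_edge = random.choices(edges, weights=[hide_probs[e] for e in edges])[0]
            t = 0
            for e in route:
                while True:          # keep trying the edge until it is active
                    t += 1
                    if random.random() < p_active[e]:
                        break
                if e == hide_edge:
                    break
            total += t
        return total / trials

    # Tiny star graph: three edges out of Sally's origin, one often inactive.
    edges = ["a", "b", "c"]
    p_active = {"a": 0.9, "b": 0.9, "c": 0.5}
    uniform = {e: 1 / 3 for e in edges}
    print(expected_search_time(edges, uniform, route=["a", "b", "c"], p_active=p_active))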

    Towards a theory of heuristic and optimal planning for sequential information search

