Artificial and Computational Intelligence in Games (Dagstuhl Seminar 12191)
This report documents the program and the outcomes of Dagstuhl Seminar 12191, "Artificial and Computational Intelligence in Games". The aim of the seminar was to bring together creative experts in an intensive meeting with the common goals of gaining a deeper understanding of various aspects of artificial and computational intelligence in games, identifying the main challenges in game AI research, and identifying the most promising avenues for addressing them. This was accomplished mainly by means of workgroups on 14 different topics (ranging from search, learning, and modeling to architectures, narratives, and evaluation), and plenary discussions on the results of the workgroups. This report presents the conclusions that each of the workgroups reached. We also added short descriptions of the few talks that were unrelated to any of the workgroups.
Scare Tactics
It is the purpose of this document to describe the design and development processes of Scare Tactics. The game will be discussed in further detail as it relates to several areas, such as market analysis, development process, game design, technical design, and each team member's individual area of background research. The research areas include asymmetrical game design, level design, game engine architecture, real-time graphics, user interface design, networking, and artificial intelligence.
As part of the team’s market analysis, other games featuring asymmetric gameplay are discussed. The games described in this section serve as inspirations for asymmetric game design. Some of these games implement mechanics that the team seeks to emulate and expand upon in Scare Tactics.
As part of the team’s development process, several concepts were prototyped over the course of two months. During that process the team adopted an Agile methodology in order to assist with scheduling, communication and resource management. Eventually, the team chose to expand upon the prototype that became the basis of Scare Tactics.
Game design and technical design occur concurrently in the development of Scare Tactics. Designers conduct discussions where themes, settings, and mechanics are conceived and documented. Mechanics are prototyped in Unity and eventually ported to a proprietary engine developed by our team. Throughout the course of development, each team member has had to own an area of design or development. This has led to individual research performed in several areas, which will be discussed further in this document.
Influence map-based pathfinding algorithms in video games
Path search algorithms, i.e., pathfinding algorithms, are used by intelligent agents to solve shortest path problems in domains ranging from computer games and applications to robotics. Pathfinding is a particular kind of search in which the objective is to find a path between two nodes, a node being a point in space to which an intelligent agent can travel. Moving agents in physical or virtual worlds is a key part of the simulation of intelligent behavior. If a game agent is not able to navigate through its surrounding environment while avoiding obstacles, it does not seem intelligent. Hence, pathfinding is among the core tasks of AI in computer games.
Pathfinding algorithms work well for single agents navigating through an environment. In real-time strategy (RTS) games, potential fields (PF) are used for multi-agent navigation in large and dynamic game environments. Influence maps, by contrast, are not used in pathfinding. Influence maps are a spatial reasoning technique that helps bots and players make decisions about the course of the game. An influence map represents game information, e.g., events and faction power distribution, and is ultimately used to provide game agents with the knowledge needed to make strategic or tactical decisions. Strategic decisions are aimed at achieving an overall goal, e.g., capturing an enemy location and winning the game. Tactical decisions concern small and precise actions, e.g., where to install a turret or where to hide from the enemy.
This dissertation focuses on a novel path search method that combines state-of-the-art pathfinding algorithms with influence maps in order to achieve better time performance, lower memory consumption, and smoother paths.
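The combination this abstract describes can be sketched as an A* search whose step cost is inflated by an influence-map value, so the path bends away from high-influence cells. The grid encoding, the weight `w`, and the map values below are illustrative assumptions, not the dissertation's actual method:

```python
import heapq
import itertools

def astar_with_influence(grid, influence, start, goal, w=2.0):
    """A* on a 4-connected grid whose step cost is inflated by an
    influence map, so paths avoid high-influence (e.g., dangerous) cells.
    grid[y][x] is 1 if walkable; influence[y][x] is in [0, 1]."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()                                  # heap tie-breaker
    frontier = [(h(start), next(tie), 0.0, start, None)]
    came_from, best_g = {}, {start: 0.0}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue                       # already expanded with a cheaper g
        came_from[cur] = parent
        if cur == goal:                    # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                ng = g + 1.0 + w * influence[ny][nx]  # base cost + influence penalty
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None  # no path exists

# A 3x3 open grid with a high-influence centre cell: the path detours around it.
grid = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
influence_map = [[0, 0, 0], [0, 1.0, 0], [0, 0, 0]]
path = astar_with_influence(grid, influence_map, (0, 0), (2, 2))
```

With `w = 0` this degenerates to plain A*; raising `w` trades path length for avoidance of influenced regions.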
Video Game AI Algorithms
The ubiquity of human-like characters in video games presents the challenge of implementing human-like behaviors. To address the pathfinding and behavior selection problems faced in a real project, we came up with two improved methods based upon mainstream solutions. To make a pathfinding agent take into account more incentives than only a destination, we designed a new pathfinding algorithm named Cost Radiation A* (CRA*), based on the A* heuristic search algorithm. CRA* incorporates the agent's preference for other objects, represented as cost radiators in our scheme. We also want to enable non-player characters (NPCs) to learn in real time in response to a player's actions. We adopt the behavior tree framework and design a new composite node for it, named the learner node, which enables developers to design learning behaviors. The learner node achieves basic reinforcement learning but is also open to more sophisticated use.
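The abstract does not spell CRA* out. One plausible reading of a "cost radiator" is a source that adds a distance-decaying cost (negative for preferred objects) to every cell the search evaluates; the linear falloff and the (position, strength, radius) encoding below are assumptions for illustration:

```python
import math

def radiated_cost(cell, radiators):
    """Extra traversal cost at `cell` from nearby cost radiators.
    Each radiator is (position, strength, radius): its contribution
    decays linearly from `strength` at the source to zero at `radius`.
    A negative strength attracts, modelling the agent's preference for
    an object; the falloff shape is an assumption, not CRA*'s actual one."""
    total = 0.0
    for (rx, ry), strength, radius in radiators:
        d = math.hypot(cell[0] - rx, cell[1] - ry)
        if d < radius:
            total += strength * (1.0 - d / radius)
    return total

# A fire at (0, 0) repels; a health pack at (3, 4) attracts.
cost = radiated_cost((3, 4), [((0, 0), 10.0, 10.0), ((3, 4), -5.0, 6.0)])
```

Adding `radiated_cost` to A*'s per-step cost biases paths toward attractive objects and away from repulsive ones without changing the search itself.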
A Partially Automated Process For the Generation of Believable Human Behaviors
Modeling believable human behavior for use in simulations is a difficult task. It requires a great deal of time, and frequently requires coordination between members of different disciplines. In our research, we propose a method of partially automating the process, reducing the time it takes to create the model and more easily allowing domain experts who are not programmers to adjust the models as necessary. Using agent-based modeling, we present MAGIC (Models Automatically Generated from Information Collected), an algorithm designed to automatically find points in the model's decision process that require interaction with other agents or with the simulation environment, and to create a decision graph that contains the agent's behavior pattern based upon raw data composed of time-sequential observations. We also present an alternative to the traditional Markov Decision Process that allows actions to be completed until a set condition is met, and a tool to allow domain experts to easily adjust the resulting models as needed. After testing the accuracy of our algorithm using synthetic data, we show the results of this process when it is used in a real-world simulation based upon a study of the medical administration process in hospitals conducted by the University of South Carolina's Healthcare Process Redesign Center.
In the healthcare study, it was necessary for the nurses to follow a very consistent process. In order to show the ability to use our algorithm in a variety of situations, we create a video game and record players’ movements. However, unlike the nursing simulation, the environment in the game simulation is more prone to changes that limit the appropriate set of actions taken by the humans being modeled. In order to account for the changes in the simulation, we present a simple method using the addition of a hierarchy of rules with our previous algorithm to limit the actions taken by the agent to ones that are appropriate for the current situation.
In both the healthcare study and the video game, we find that there are multiple distinct patterns of behavior. As a single model would not accurately represent the behavior of all of the humans in the studies, we present a simple method of classifying the behavior of individuals using the decision graphs created by our algorithm. We then use our algorithm to create models for each cluster of behaviors, producing multiple models from one set of observational data.
Believability is highly subjective. In our research, we present methods to partially automate the process of producing believable human agents, and test our results with real-world data using focus groups and a pseudo-Turing test. Our findings show that under the right conditions, it is possible to partially automate the modeling of human decision processes, but ultimately, believability is greatly dependent upon the similarity between the viewer and the humans being modeled.
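A minimal sketch of the decision-graph construction this abstract describes, assuming the observations arrive as time-sequential action traces; the trace format and the branching criterion are illustrative, not MAGIC's actual definitions:

```python
from collections import defaultdict

def build_decision_graph(traces):
    """Build a decision graph from time-sequential observations.
    `traces` is a list of observed action sequences, one per run.
    Returns {action: {next_action: probability}}; an action with more
    than one observed successor is a branch point in the model."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):  # consecutive observation pairs
            counts[a][b] += 1
    graph = {}
    for a, succ in counts.items():
        total = sum(succ.values())
        graph[a] = {b: n / total for b, n in succ.items()}  # normalize counts
    return graph

def decision_points(graph):
    """Actions where the observed behavior branches."""
    return {a for a, succ in graph.items() if len(succ) > 1}

traces = [["wake", "eat", "work"],
          ["wake", "eat", "gym"],
          ["wake", "eat", "work"]]
graph = build_decision_graph(traces)
```

Branch points such as these are where a model would need extra context (interaction with other agents or the environment) to pick a successor, which is the kind of point MAGIC is designed to detect.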
Influence-based motion planning algorithms for games
In games, motion planning has to do with the motion of non-player characters (NPCs)
from one place to another in the game world. In today's video games there are two
major approaches to motion planning, namely, pathfinding and influence fields.
Pathfinding algorithms deal with the problem of finding a path in a weighted search
graph whose nodes represent locations of a game world and whose connections
among nodes (edges) have an associated cost/weight. In video games, the most employed
pathfinders are A* and its variants, namely, Dijkstra's algorithm and best-first
search. As will be addressed in detail later, these pathfinders cannot simulate
or mimic the natural movement of humans, which is usually without discontinuities,
i.e., smooth, even when there are sudden changes in direction.
Additionally, these pathfinders suffer from a lack of adaptivity when changes to the
environment occur; that is, they cannot handle search graph modifications during path
search as a consequence of an event that happened in the game (e.g., when a bridge
connecting two graph nodes is destroyed by a missile).
On the other hand, influence fields are a motion planning technique that does not suffer
from the two problems above, i.e., they can provide smooth human-like movement and
are adaptive. As seen further ahead, we will resort to a differentiable real function to
represent the influence field associated with a game map as a summation of equally
differentiable functions, each associated with a repeller or an attractor. The differentiability
ensures that there are no abrupt changes in the influence field; consequently, the
movement of any NPC will be smooth, regardless of whether the NPC walks through the
game world in the direction of increasing or decreasing values of the function. Thus, it is
enough to have a spline curve that interpolates the path nodes to mimic smooth
human-like movement.
Moreover, given the nature of the differentiable real functions that represent an influence
field, the removal or addition of a repeller/attractor (as the result of the destruction
or the construction of a bridge) does not alter the differentiability of the global
function associated with the map of a game. That is to say, an influence field is
adaptive, in that it adapts to changes in the virtual world during gameplay.
In spite of being able to solve the two problems of pathfinders, an influence field may
still have local extrema, which, if reached, will prevent an NPC from fleeing from that
location. The local extremum problem never occurs in pathfinders because the goal
node is the sole global minimum of the cost function. Therefore, by conjugating the
cost function with the influence function, the NPC will never be detained at any local
extremum of the influence function, because the minimization of the cost function
ensures that it will always walk in the direction of the goal node. That is, the conjugation
between pathfinders and influence fields results in movement planning algorithms which, simultaneously, solve the problems of pathfinders and influence fields.
As will be demonstrated throughout this thesis, it is possible to combine influence fields
with the A*, Dijkstra's, and best-first search algorithms, so that we get hybrid algorithms
that are adaptive. Besides, these algorithms can generate smooth paths that resemble
the ones traveled by human beings, though path smoothness is not the main focus of
this thesis. Nevertheless, it is not always possible to perform this conjugation between
influence fields and pathfinders; one example of such a pathfinder is the fringe search
algorithm, as is the new pathfinder proposed in this thesis, designated best neighbor first search.
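The summation of differentiable repeller/attractor functions described in this abstract can be sketched with Gaussian bumps, one common differentiable choice (the thesis's exact functions are not given here); the smoothness of the summed field is what makes gradient-following movement smooth:

```python
import math

def influence(p, sources):
    """Influence field at point p as a sum of differentiable terms, one
    per attractor (positive weight) or repeller (negative weight).
    Each source is (center, weight, sigma); Gaussian bumps are one
    differentiable choice, so adding or removing a source keeps the
    summed field differentiable, hence adaptive without abrupt changes."""
    x, y = p
    total = 0.0
    for (cx, cy), weight, sigma in sources:
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        total += weight * math.exp(-d2 / (2.0 * sigma * sigma))
    return total

def gradient(p, sources, eps=1e-5):
    """Central-difference gradient of the field; an NPC following it
    moves smoothly toward attractors and away from repellers."""
    x, y = p
    gx = (influence((x + eps, y), sources) - influence((x - eps, y), sources)) / (2 * eps)
    gy = (influence((x, y + eps), sources) - influence((x, y - eps), sources)) / (2 * eps)
    return gx, gy
```

Destroying the hypothetical bridge amounts to deleting one source from the list: the remaining sum is still differentiable, which is the adaptivity property the abstract relies on.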
Evaluating the Effects on Monte Carlo Tree Search of Predicting Co-operative Agent Behaviour
This thesis explores the effects of including an agent-modelling strategy in Monte Carlo Tree Search, in order to understand how such modelling might be used to increase the performance of agents in co-operative environments such as games.
The research is conducted using two applications. The first is a co-operative 2-player puzzle game in which a perfect model outperforms an agent that assumes the other agent plays randomly. The second application is the partially observable co-operative card game Hanabi, in which the predictor variant is able to outperform both a standard variant of MCTS and a version that assumes a fixed strategy for the paired agents. This thesis also investigates a technique for learning player strategies off-line, based on saved game logs, for use in modelling.
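The idea of plugging a partner model into Monte Carlo rollouts can be sketched on a toy co-operative matching game; the game, the model interface, and all names here are illustrative, not the thesis's test domains:

```python
import random

def simulate(agent_pick, partner_model, rng):
    """One playout of a toy co-operative matching game: the searching
    agent has committed to a colour, the partner's reply is sampled
    from the model, and the team scores 1 on a match."""
    partner_pick = rng.choices(["red", "blue"],
                               weights=partner_model(agent_pick))[0]
    return 1.0 if partner_pick == agent_pick else 0.0

def estimate_value(agent_pick, partner_model, n=1000, seed=0):
    """Monte Carlo value estimate of an action under a partner model,
    as an MCTS rollout phase would compute it."""
    rng = random.Random(seed)
    return sum(simulate(agent_pick, partner_model, rng) for _ in range(n)) / n

# A perfect model knows the partner mirrors the agent; the baseline
# assumes uniformly random play, as a plain MCTS variant would.
mirror = lambda pick: [1.0, 0.0] if pick == "red" else [0.0, 1.0]
uniform = lambda pick: [0.5, 0.5]
```

The value estimates diverge (1.0 under the perfect model versus about 0.5 under the random assumption), which is the gap a predictor-augmented MCTS exploits when ranking actions.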
Improving Computer Game Bots' behavior using Q-Learning
In modern computer video games, the quality of artificial characters plays a prominent role in the success of the game in the market. The aim of the intelligent techniques, termed game AI, used in these games is to provide interesting and challenging game play to the player. Being highly sophisticated, these games present game developers with the same kinds of requirements and challenges as those faced by the academic AI community. Game companies claim to use sophisticated game AI to model artificial characters such as computer game bots, intelligent realistic AI agents. However, these bots work via simple routines pre-programmed to suit the game map, game rules, game type, and other parameters unique to each game. Mostly, illusory intelligent behaviors are programmed using simple conditional statements and are hard-coded in the bots' logic. Moreover, a game programmer has to spend considerable time configuring crisp inputs for these conditional statements. Therefore, we see a need for machine learning techniques to dynamically improve bots' behavior and save precious programmer man-hours. We selected Q-learning, a reinforcement learning technique, to evolve dynamic intelligent bots, as it is a simple, efficient, online learning algorithm. Machine learning techniques such as reinforcement learning are known to be intractable if they use a detailed model of the world, and they also require tuning of various parameters to give satisfactory performance. Therefore, for this research we opt to examine Q-learning for evolving a few basic behaviors, viz. learning to fight and planting the bomb, for computer game bots. Furthermore, we experimented with how bots would use knowledge learned from abstract models to evolve their behavior in a more detailed model of the world. Bots evolved using these techniques would become more pragmatic, believable, and capable of showing human-like behavior.
This will provide a more realistic feel to the game and provide game programmers with an efficient learning technique for programming these bots.
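The tabular Q-learning update this abstract relies on can be sketched as follows; the bot's states, actions, and rewards are hypothetical stand-ins for the fight/plant-the-bomb behaviors:

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Hypothetical bot scenario: when an enemy is near, fighting is rewarded
# and idling is punished; the value estimates separate accordingly.
Q = defaultdict(float)
actions = ["fight", "idle"]
for _ in range(200):
    q_update(Q, "enemy_near", "fight", 1.0, "safe", actions)
    q_update(Q, "enemy_near", "idle", -1.0, "safe", actions)
```

Because the update is online and needs only (state, action, reward, next state) tuples, the bot can keep refining its table while the game runs, which is what makes Q-learning attractive here.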
Autonomous characters in virtual environments: The technologies involved in artificial life and their effects on perceived intelligence and playability of computer games
Computer games are viewed by academics as ungrounded hack-and-patch experiments. "The industry lacks the formalism and requirement for a 'perfect' solution often necessary in the academic world" [Woob]. Academic Artificial Intelligence (AI) is often viewed as un-implementable and narrow-minded by the majority of non-AI programmers. "Historically, AI tended to be focused, containing detailed problems and domain-specific techniques. This focus makes for easier study - or engineering - of particular solutions." [Cha03]. By implementing several well-known AI techniques in the same gaming environment and judging users' reactions, this project aims to make links between the academic nature of AI, as well as to investigate the nature of practical implementation in a gaming environment. An online Java implementation of the 1970s classic Space Invaders has been developed and tested, with the Aliens being controlled by 6 different approaches to modelling AI functions. In total, information from 334 individual games was recorded. Different types of game AI can create highly varied gaming experiences, as highlighted by the range of values and high standard deviation values seen in the results. The link between complex behaviour, complex control systems, and perceived intelligence was not supported. A positive correlation was identified between how fun the users found the game and how intelligent they perceived the Aliens to be, which would seem to be logical. As games get visually more and more impressive, the need for intelligent characters cannot be denied, because it is one of the few ways in which games can set themselves apart from the competition. The conclusions identified that computer games must remain focused on their end-goal, that of producing a fun game. Whilst complex and clever AI can help to achieve it, the AI itself can never overshadow the end result.
Modeling and formal verification of gaming storylines
Video games are becoming more and more interactive, with increasingly complex plots. These plots typically involve multiple parallel storylines that may converge and diverge based on player actions. This may lead to situations that are inconsistent or impassable. Current techniques for planning and testing game plots involve naive means such as text documents, spreadsheets, and critical path testing. Recent academic research [1] [2] [3] examines the design planning problems but neglects testing and verification of the possible plot lines. These complex plots have thus until now been handled inadequately due to the lack of a formal methodology and tools to support them. In this dissertation, we describe how we develop methods to 1) characterize storylines (SChar), 2) define a storyline description language (SDL), and 3) create a storyline verification tool based on formal verification techniques (StoCk) that uses our SDL as input. SChar (Storyline Characterization) helps game developers characterize the category of storyline they are working on (e.g., linear, branching, and plot) through a tool that gives a set of guided questions. Our SDL allows its users to describe storylines in a consistent format similar to how they reason about storylines, but in such a way that it can be used for formal verification. StoCk accepts storylines, described in SDL, to be formally verified for errors using SPIN. StoCk is also examined in three common use cases found in the gaming industry: as a tool 1) during storyline creation, 2) during quality assurance, and 3) during storyline implementation. The combination of SChar, SDL, and StoCk provides designers, writers, and developers a novel methodology and tools to verify consistency in large and complex game plots.
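The kind of property StoCk checks with SPIN can be illustrated, far more simply, as reachability analysis over a plot graph; the graph encoding and function names below are assumptions for illustration, not the actual SDL or tool:

```python
def unreachable_states(graph, start):
    """Plot states that no sequence of player actions can reach from
    the start; an authored scene showing up here signals a plot hole.
    `graph` maps each state to the list of its successor states."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return set(graph) - seen

def dead_ends(graph, endings):
    """Impassable situations: non-ending states with no outgoing
    transitions, where the player would be stuck mid-plot."""
    return {s for s, succ in graph.items() if not succ and s not in endings}

# A small branching plot with one orphaned scene and one stuck state.
plot = {
    "intro": ["quest"],
    "quest": ["victory", "trapped"],
    "victory": [],
    "trapped": [],        # no way onward, and not an ending
    "lost_scene": [],     # authored but never linked in
}
```

A model checker such as SPIN establishes these properties (and richer temporal ones) exhaustively over all interleavings, which is what makes it suitable for the converging and diverging storylines the abstract describes.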