Hierarchical reinforcement learning for real-time strategy games
Real-Time Strategy (RTS) games can be abstracted to resource-allocation problems applicable in many fields and industries. We consider a simplified custom RTS game focused on mid-level combat using reinforcement learning (RL) algorithms. This paper makes several contributions to game playing with RL. First, we combine hierarchical RL with a multi-layer perceptron (MLP) that receives higher-order inputs for increased learning speed and performance. Second, we compare Q-learning against Monte Carlo learning as reinforcement learning algorithms. Third, because the teams in the RTS game are multi-agent systems, we examine two different methods for assigning rewards to agents. Experiments are performed against two different fixed opponents. The results show that the combination of Q-learning and individual rewards yields the highest win rate against the different opponents and is able to defeat the opponent within 26 training games.
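As a rough illustration of the Q-learning half of that comparison, here is a minimal tabular sketch; the paper itself approximates Q with an MLP over higher-order inputs, and the states, actions, and constants below are illustrative assumptions, not the paper's setup:

```python
import random
from collections import defaultdict

# Hypothetical tabular Q-learning sketch; the paper instead feeds
# higher-order state features into an MLP function approximator.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
ACTIONS = ["attack", "retreat", "hold"]   # illustrative combat actions
Q = defaultdict(float)                    # Q[(state, action)] -> estimated return

def choose_action(state):
    """Epsilon-greedy action selection over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One-step temporal-difference update toward reward + discounted max Q."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

The Monte Carlo alternative the paper compares against would instead wait until the end of a game and update each visited state-action pair toward the observed return.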
Chain of command in autonomous cooperative agents for battles in real-time strategy games
This paper investigates incorporating a chain of command into the swarm intelligence of honey bees to create groups of ranked cooperative autonomous agents for an RTS game, in order to create and re-enact battle simulations. The behaviour of the agents is based on the foraging and defensive behaviours of honey bees, adapted to a human environment. The chain of command is implemented using a hierarchical decision model. The groups consist of multiple model-based reflex agents with individual blackboards for working memory, plus a colony-level blackboard to mimic foraging patterns and hold commands received from ranking agents. An agent architecture and environment are proposed that allow for the creation of autonomous cooperative agents. The behaviour of the agents is then evaluated both mathematically and empirically using an adaptation of the anytime universal intelligence test and an agent believability metric.
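A toy sketch of the ranked-agent idea, with an individual working-memory blackboard per agent and a shared colony-level blackboard; the class names and command format are assumptions for illustration, not the paper's architecture:

```python
# Hypothetical sketch of ranked agents sharing a colony blackboard.
class Blackboard:
    def __init__(self):
        self.entries = {}

    def post(self, key, value):
        self.entries[key] = value

    def read(self, key, default=None):
        return self.entries.get(key, default)

class ReflexAgent:
    def __init__(self, rank, colony_board):
        self.rank = rank              # position in the chain of command (0 = highest)
        self.memory = Blackboard()    # individual blackboard as working memory
        self.colony = colony_board    # shared colony-level blackboard

    def act(self, percept):
        """Obey a command posted by a higher-ranked agent if one exists,
        otherwise fall back to a foraging-style default behaviour."""
        command = self.colony.read("command")
        if command and command["rank"] < self.rank:
            self.memory.post("last_order", command["order"])
            return command["order"]
        return "forage"

colony = Blackboard()
colony.post("command", {"rank": 0, "order": "defend"})
soldier = ReflexAgent(rank=2, colony_board=colony)
```

Here `soldier.act(...)` returns `"defend"` because the posted command outranks the agent; with no command on the colony board it would default to foraging.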
A panorama of artificial and computational intelligence in games
This paper attempts to give a high-level overview of the field of artificial and computational intelligence (AI/CI) in games, with particular reference to how the different core research areas within this field inform and interact with each other, both actually and potentially. We identify ten main research areas within this field: NPC behavior learning, search and planning, player modeling, games as AI benchmarks, procedural content generation, computational narrative, believable agents, AI-assisted game design, general game artificial intelligence and AI in commercial games. We view and analyze the areas from three key perspectives: (1) the dominant AI method(s) used under each area; (2) the relation of each area with respect to the end (human) user; and (3) the placement of each area within a human-computer (player-game) interaction perspective. In addition, for each of these areas we consider how it could inform or interact with each of the other areas; in those cases where we find that meaningful interaction either exists or is possible, we describe the character of that interaction and provide references to published studies, if any. We believe that this paper improves understanding of the current nature of the game AI/CI research field and the interdependences between its core areas by providing a unifying overview. We also believe that the discussion of potential interactions between research areas provides a pointer to many interesting future research projects and unexplored subfields.
Fifth Aeon – A.I Competition and Balancer
Collectible Card Games (CCGs) are one of the most popular types of games in both digital and physical space. Despite their popularity, there is a great deal of room for exploration into the application of artificial intelligence to enhance CCG gameplay and development. This paper presents Fifth Aeon, a novel, open-source CCG built to run in browsers, and two A.I. applications built upon Fifth Aeon. The first application is an artificial intelligence competition run on the Fifth Aeon game. The second is an automatic balancing system capable of helping a designer create new cards that do not upset the balance of an existing collectible card game. The submissions to the A.I. competition include one that plays substantially better than the existing Fifth Aeon A.I., with a higher win rate across multiple game formats. The balancer system also demonstrates an ability to automatically balance several types of cards against a wide variety of parameters. These results help pave the way to cheaper CCG development with more compelling A.I. opponents.
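As a loose illustration of what such an automatic balancer might do, the sketch below nudges a hypothetical card's cost until a stand-in winrate model reaches 50%; the `simulate_winrate` function and its linear cost/power trade-off are assumptions, not the Fifth Aeon engine:

```python
# Hypothetical card-balancing loop: adjust cost toward a target winrate.
def simulate_winrate(cost, power):
    """Toy stand-in for game simulation: cards stronger than their
    cost win more often, clamped to [0, 1]."""
    edge = (power - cost) * 0.1
    return min(max(0.5 + edge, 0.0), 1.0)

def balance_cost(power, cost=1, target=0.5, tol=0.02, max_iters=50):
    """Raise the cost while the card overperforms, lower it while it
    underperforms, until the simulated winrate is near the target."""
    for _ in range(max_iters):
        winrate = simulate_winrate(cost, power)
        if abs(winrate - target) <= tol:
            break
        cost += 1 if winrate > target else -1
    return cost

balanced = balance_cost(power=6)
```

A real balancer would replace `simulate_winrate` with many simulated games against a card pool and could search over several parameters (cost, power, abilities) rather than cost alone.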
The connection between components of the game experience and in-game improvement
Background and aims. Previous research has shown that video games affect players in positive ways, for example by improving players' cognitive abilities and learning outcomes. Research on in-game improvement helps maximize the positive effects of games, in commercial as well as educational and serious games. The PIFF2 framework is a model for characterizing the game experience; questionnaires based on it can measure independent components of the game experience, and thus track the experience across repeated measurements. The aim of this study was to investigate the connection between the game experience and in-game improvement: to find out which factors of the game experience are connected to in-game improvement, and to form hypotheses for further research.
Methods. The data were collected from nine participants, who played a game developed for the study at home over a period of eight weeks. The game combines features of commercial games used in earlier studies. Data were collected on the players' performance in the game's action, strategy, and puzzle elements. In addition, after each play session the participants completed a game-experience questionnaire based on the PIFF2 framework, which measured features of the player's experience such as the desire to continue, the sense of presence, and control. To simplify the analysis of the performance data, a principal component analysis was used to reduce its dimensionality, yielding a single game-skill variable. This combined skill variable was used to assess the players' improvement during the experiment, which was then compared against the individual questionnaire variables and a combined game-experience variable formed from them.
Results and conclusions. Comparing the game experience with in-game improvement showed that the desire to continue was connected to in-game improvement. The combined skill variable measured player improvement well in the game used in the study. In conclusion, the game experience and in-game improvement are connected. The study also demonstrates the need for further research on this connection and proposes, as a hypothesis for future work, that the desire to continue playing is connected to in-game improvement.
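The thesis reduces per-session performance scores to a single skill variable with principal component analysis. A minimal sketch of that reduction on synthetic data, where the session count, the three element scores, and the random data are assumptions rather than the thesis's actual dataset:

```python
import numpy as np

# Hypothetical PCA sketch: collapse per-session scores on three game
# elements (action, strategy, puzzle) into one combined skill score.
rng = np.random.default_rng(0)
scores = rng.normal(size=(40, 3))         # 40 sessions x 3 element scores (synthetic)
centered = scores - scores.mean(axis=0)   # PCA requires mean-centered data

# The first principal component is the leading right-singular vector.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
skill = centered @ vt[0]                  # one combined skill value per session
```

Projecting onto the first component keeps the direction of maximum variance, so `skill` summarizes each session with the least information lost by any single linear score.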
Coevolutionary Approaches to Generating Robust Build-Orders for Real-Time Strategy Games
We aim to find winning build-orders for Real-Time Strategy games. Real-Time Strategy games provide a variety of challenges, from short-term control to longer-term planning. We focus on a longer-term planning problem: which units to build, and in what order to produce them, so that a player successfully defeats the opponent. Plans which address unit-construction scheduling problems in Real-Time Strategy games are called build-orders. A robust build-order defeats many opponents, while a strong build-order defeats opponents quickly. However, no single build-order defeats all other build-orders, and build-orders that defeat many opponents may still lose against a specific opponent. Other researchers have only investigated generating build-orders that defeat a specific opponent, rather than finding robust, strong build-orders. Additionally, previous research has not applied coevolutionary algorithms to generating build-orders. In contrast, our research makes three main contributions toward finding robust, strong build-orders. First, we apply a coevolutionary algorithm to finding robust build-orders. Compared to exhaustive search, a genetic algorithm finds the strongest build-orders, while a coevolutionary algorithm finds more robust build-orders. Second, we show that case injection enables coevolution to learn from specific opponents while maintaining robustness. Build-orders produced with coevolution and case injection learn to defeat, or play like, the injected build-orders. Third, we show that coevolved build-orders benefit from a representation which includes branches and loops. Coevolution utilizes multiple branches and loops to create build-orders that are stronger than build-orders without them. We believe this work provides evidence that coevolutionary algorithms may be a viable approach to creating robust, strong build-orders for Real-Time Strategy games.
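A compact sketch of the coevolutionary idea, with two populations of build-orders scoring fitness against each other; the battle model, unit names, and parameters are illustrative assumptions, and the flat-list representation omits the branches and loops the paper's encoding supports:

```python
import random

# Hypothetical coevolution sketch for build-orders.
UNITS = ["worker", "soldier", "archer"]

def battle(build_a, build_b):
    """Toy outcome model: more soldiers wins.  A stand-in for the
    actual RTS simulation, not the paper's evaluator."""
    return build_a.count("soldier") > build_b.count("soldier")

def fitness(build, opponents):
    """Robustness-style fitness: wins against the opposing population."""
    return sum(battle(build, opp) for opp in opponents)

def mutate(build):
    b = list(build)
    b[random.randrange(len(b))] = random.choice(UNITS)
    return b

def coevolve(generations=50, pop_size=10, length=8):
    new_pop = lambda: [[random.choice(UNITS) for _ in range(length)]
                       for _ in range(pop_size)]
    pop_a, pop_b = new_pop(), new_pop()
    for _ in range(generations):
        # Each population's fitness is measured against the other population.
        pop_a.sort(key=lambda b: fitness(b, pop_b), reverse=True)
        pop_b.sort(key=lambda b: fitness(b, pop_a), reverse=True)
        # Keep the top half; refill with mutated copies of the survivors.
        keep = pop_size // 2
        pop_a = pop_a[:keep] + [mutate(b) for b in pop_a[:keep]]
        pop_b = pop_b[:keep] + [mutate(b) for b in pop_b[:keep]]
    return pop_a[0]
```

Because each population is evaluated against a moving set of opponents rather than one fixed foe, surviving build-orders tend toward robustness rather than overfitting to a single opponent, which mirrors the contrast the abstract draws with prior single-opponent work.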