414 research outputs found
On Efficient Reinforcement Learning for Full-length Game of StarCraft II
StarCraft II (SC2) poses a grand challenge for reinforcement learning (RL),
of which the main difficulties include huge state space, varying action space,
and a long time horizon. In this work, we investigate a set of RL techniques
for the full-length game of StarCraft II: a hierarchical RL approach involving
extracted macro-actions and a hierarchical architecture of neural networks,
and a curriculum transfer training procedure. We train the agent on a single
machine with 4 GPUs and 48 CPU threads. On a 64x64
map and using restrictive units, we achieve a win rate of 99% against the
level-1 built-in AI. Through the curriculum transfer learning algorithm and a
mixture of combat models, we achieve a 93% win rate against the most difficult
non-cheating level built-in AI (level-7). In this extended version of the
paper, we improve our architecture to train the agent against the
cheating-level AIs, achieving win rates of 96%, 97%, and 94% against the
level-8, level-9, and level-10 AIs, respectively. Our code is at
https://github.com/liuruoze/HierNet-SC2. To provide a baseline referencing
AlphaStar for our work as well as for the research and open-source community, we
reproduce a scaled-down version of it, mini-AlphaStar (mAS). The latest version
of mAS is 1.07, which can be trained on the raw action space of 564 actions.
It is designed to run training on a single commodity machine by making the
hyper-parameters adjustable. We then compare our work with mAS using the
same resources and show that our method is more effective. The code for
mini-AlphaStar is at https://github.com/liuruoze/mini-AlphaStar. We hope our
study can shed some light on future research into efficient reinforcement
learning for SC2 and other large-scale games. Comment: 48 pages, 21 figures
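The hierarchical scheme the abstract describes, a high-level controller choosing among extracted macro-actions that sub-policies expand into primitives, can be sketched roughly as below. The macro-action names, plan tables, and epsilon-greedy Q-table controller are illustrative assumptions for this sketch, not the paper's actual networks (which are learned, with macro-actions mined from replays).

```python
import random

# Hypothetical macro-actions standing in for the paper's extracted set.
MACRO_ACTIONS = ["build_worker", "build_army", "attack", "defend"]

class HighLevelPolicy:
    """Picks a macro-action from an abstract state (epsilon-greedy over a
    Q-table; a stand-in for the paper's controller network)."""
    def __init__(self, epsilon=0.1):
        self.q = {}          # (state, macro) -> estimated value
        self.epsilon = epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(MACRO_ACTIONS)
        return max(MACRO_ACTIONS, key=lambda m: self.q.get((state, m), 0.0))

class SubPolicy:
    """Expands one macro-action into primitive actions (fixed plans here
    for illustration; the paper uses learned sub-networks)."""
    PLANS = {
        "build_worker": ["select_base", "train_scv"],
        "build_army":   ["select_barracks", "train_marine"],
        "attack":       ["select_army", "move_to_enemy", "engage"],
        "defend":       ["select_army", "move_to_base", "hold"],
    }
    def expand(self, macro):
        return list(self.PLANS[macro])

def hierarchical_step(state, high, sub):
    """One decision step: choose a macro-action, then expand it."""
    macro = high.act(state)
    return macro, sub.expand(macro)
```

The point of the hierarchy is that the controller reasons over a handful of macro-actions instead of the raw action space, which is what makes the long horizon tractable.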
Methods of multi-agent movement control and coordination of groups of mobile units in real-time strategy games
This thesis proposes an approach to Reactive Control in Real-Time Strategy (RTS) computer games using Multi-Agent Potential Fields. The classic RTS title StarCraft: Broodwar was chosen as the testing platform due to its status in the competitive Artificial Intelligence (AI) scene. The proposed AI controls its units by placing different types of potential fields on objects and locations around the game world. This work is an attempt to improve previous methods that use Potential Fields in RTS games.
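As a rough illustration of the potential-field idea behind this kind of reactive control, each candidate cell can be scored by summing attractive fields (toward goals) and repulsive fields (away from obstacles or threats), and a unit greedily moves to the best-scoring neighbour. The field shapes and constants below are assumptions for the sketch, not the thesis's tuned fields.

```python
import math

def attractive(pos, goal, strength=1.0):
    """Potential pulling a unit toward a goal (higher is better)."""
    return -strength * math.dist(pos, goal)

def repulsive(pos, obstacle, strength=5.0, radius=3.0):
    """Potential pushing a unit away from an obstacle within `radius`."""
    d = math.dist(pos, obstacle)
    return -strength * (radius - d) if d < radius else 0.0

def total_potential(pos, goals, obstacles):
    """Combined field value at a grid cell."""
    return (sum(attractive(pos, g) for g in goals)
            + sum(repulsive(pos, o) for o in obstacles))

def step(pos, goals, obstacles):
    """Greedy reactive control: move to the neighbouring cell (or stay)
    with the highest combined potential."""
    x, y = pos
    candidates = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return max(candidates, key=lambda c: total_potential(c, goals, obstacles))
```

Because each unit evaluates only its local neighbourhood, the scheme scales naturally to groups of agents sharing the same fields.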
A Multi-Objective Approach to Tactical Maneuvering Within Real Time Strategy Games
The real-time strategy (RTS) environment is a strong platform for simulating complex tactical problems. The overall research goal is to develop artificial intelligence (AI) RTS planning agents for military critical-decision-making education. These agents should have the ability to perform at an expert level as well as to assess a player's critical decision-making ability or skill level. The time sensitivity of the RTS environment creates very complex situations: each situation must be analyzed and orders must be given to each tactical unit before the scenario on the battlefield changes and makes the decisions no longer relevant. This particular research effort of RTS AI development focuses on constructing a unique approach for tactical unit positioning within an RTS environment. By utilizing multi-objective evolutionary algorithms (MOEAs) to find an optimal positioning solution, an AI agent can quickly determine an effective unit positioning solution. The development of such an RTS AI agent goes through three distinctive phases. The first is mathematically describing the problem space of the tactical positioning of units within a combat scenario; such a definition allows for the development of a generic MOEA search algorithm that is applicable to nearly every scenario. The next major phase requires the development and integration of this algorithm into the Air Force Institute of Technology RTS AI agent. Finally, the last phase involves experimenting with the positioning agent to determine its effectiveness and efficiency when placed against various other tactical options. Experimental results validate that controlling the position of the units within a tactical situation is an effective alternative for an RTS AI agent to win a battle.
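A minimal sketch of the selection step an MOEA relies on: keeping the Pareto-non-dominated candidate positions under competing objectives. The two toy objectives here (exposure to an enemy and distance from an objective) and all coordinates are invented for illustration; a real MOEA interleaves this selection with mutation and crossover over generations.

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b`
    (all objectives are minimised)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates, evaluate):
    """Keep the non-dominated candidate positions."""
    scored = [(c, evaluate(c)) for c in candidates]
    return [c for c, fc in scored
            if not any(dominates(fo, fc) for _, fo in scored if fo != fc)]

def evaluate(pos):
    """Toy evaluation: minimise exposure to an enemy at (0, 0) and
    Manhattan distance from an objective at (10, 0)."""
    exposure = 1.0 / (1.0 + abs(pos[0]) + abs(pos[1]))   # closer -> worse
    dist = abs(pos[0] - 10) + abs(pos[1])
    return (exposure, dist)
```

The front returned by `pareto_front` is the set of positions among which the agent (or a higher-level decision rule) can trade off safety against objective control.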
Why Video Game Genres Fail: A Classificatory Analysis
This paper explores the current affordances and limitations of video game genre from a library and information science perspective with an emphasis on classification theory. We identify and discuss various purposes of genre relating to video games, including identity, collocation and retrieval, commercial marketing, and educational instruction. Through the use of examples, we discuss the ways in which these purposes are supported by genre classification and conceptualization, and the implications for video games. Suggestions for improved conceptualizations such as family resemblances, prototype theory, faceted classification, and appeal factors for video game genres are considered, with discussions of strengths and weaknesses. This analysis helps inform potential future practical applications for describing video games at cultural heritage institutions such as libraries, museums, and archives, as well as furthering the understanding of video game genre and genre classification for game studies at large
Player Behavior Modeling In Video Games
In this research, we study players' interactions in video games to understand player behavior. The first part of the research concerns predicting the winner of a game, which we apply to StarCraft and Destiny. We build models for these games with reasonable to high accuracy. We also investigate which features of a game are strong predictors: economic features and micro commands for StarCraft, and key shooter performance metrics for Destiny, though features differ between match types. The second part of the research concerns distinguishing the playing styles of StarCraft and Destiny players. We find that we can indeed recognize different styles of playing in these games, related to different match types. We relate these playing styles to the chance of winning, but find no significant differences between the effects of different playing styles on winning; however, they do have an effect on the length of matches. In Destiny, we also investigate which player types are distinguished when we apply archetype analysis to playing-style features related to change in performance, and find that the archetypes correspond to different ways of learning. In the final part of the research, we investigate to what extent playing styles are related to different demographics, in particular to national cultures. We investigate this for four popular massively multiplayer online games, namely Battlefield 4, Counter-Strike, Dota 2, and Destiny. We found that playing styles are related to nationality and cultural dimensions, and that there are clear similarities between the playing styles of similar cultures. In particular, the Hofstede dimension Individualism explained most of the variance in playing styles between national cultures for the games we examined.
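A hedged sketch of the kind of winner-prediction model the first part describes: a plain logistic regression over per-match feature vectors. The feature names and the learner are assumptions for illustration; the thesis does not commit to this exact setup in the abstract.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=500):
    """Gradient-descent logistic regression over per-match feature
    vectors (e.g. economy and micro-command counts)."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y                      # prediction error
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_win(w, b, x):
    """Estimated probability that the player wins the match."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

The learned weights also give a crude read-out of which features are the strongest predictors, mirroring the feature analysis described above.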
Goal Reasoning: Papers from the ACS Workshop
This technical report contains the 14 accepted papers presented at the Workshop on Goal Reasoning,
which was held as part of the 2015 Conference on Advances in Cognitive Systems (ACS-15) in Atlanta,
Georgia on 28 May 2015. This is the fourth in a series of workshops related to this topic, the first of
which was the AAAI-10 Workshop on Goal-Directed Autonomy; the second was the Self-Motivated
Agents (SeMoA) Workshop, held at Lehigh University in November 2012; and the third was the Goal
Reasoning Workshop at ACS-13 in Baltimore, Maryland in December 2013.