    MSC: A Dataset for Macro-Management in StarCraft II

    Macro-management is an important problem in StarCraft that has been studied for a long time. Various datasets and assorted methods have been proposed in the last few years, but these datasets have some defects that hold back academic and industrial research: 1) some datasets provide neither standard preprocessing, parsing, and feature-extraction procedures nor predefined training, validation, and test sets; 2) some datasets are specified only for certain tasks in macro-management; 3) some datasets are either too small or lack enough labeled data for modern machine learning algorithms such as deep neural networks. As a result, most previous methods are trained with various features and evaluated on different test sets from the same or different datasets, making it difficult to compare them directly. To boost research on macro-management in StarCraft, we release a new dataset, MSC, based on the SC2LE platform. MSC consists of well-designed feature vectors, pre-defined high-level actions, and the final result of each match. We also split MSC into training, validation, and test sets for the convenience of evaluation and comparison. Besides the dataset, we propose a baseline model and present initial baseline results for global state evaluation and build order prediction, two of the key tasks in macro-management. Various downstream tasks and analyses of the dataset are also described for the sake of research on macro-management in StarCraft II. Homepage: https://github.com/wuhuikai/MSC
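
    As a rough illustration of the global state evaluation task, the sketch below trains a small network to map a per-frame feature vector to the probability of eventually winning the match. The feature dimension, the stand-in data, and the network shape are assumptions for illustration only; they are not the actual MSC file format or the paper's baseline model.

```python
# Minimal sketch of global state evaluation on MSC-style data (assumed format):
# each frame is a fixed-length feature vector labelled with the final result
# of the match (1 = win, 0 = loss).
import torch
import torch.nn as nn

FEATURE_DIM = 128  # assumed size of a global-feature vector, not the real MSC value

class StateEvaluator(nn.Module):
    """Predict the probability of eventually winning from the current state."""
    def __init__(self, feature_dim: int = FEATURE_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x)).squeeze(-1)

# Stand-in batch: 32 frames with random features and win/loss labels.
features = torch.randn(32, FEATURE_DIM)
labels = torch.randint(0, 2, (32,)).float()

model = StateEvaluator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

loss = nn.BCELoss()(model(features), labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.4f}")
```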

    Player Behavior Modeling In Video Games

    In this research, we study players' interactions in video games to understand player behavior. The first part of the research concerns predicting the winner of a game, which we apply to StarCraft and Destiny. We manage to build models for these games that achieve reasonable to high accuracy. We also investigate which features of a game are strong predictors: economic features and micro commands for StarCraft, and key shooter performance metrics for Destiny, though the features differ between match types. The second part of the research concerns distinguishing the playing styles of StarCraft and Destiny players. We find that we can indeed recognize different styles of playing in these games, related to different match types. We relate these playing styles to the chance of winning, but find no significant differences between the effects of different playing styles on winning; they do, however, have an effect on the length of matches. In Destiny, we also investigate which player types are distinguished when we apply Archetype Analysis to playing-style features related to change in performance, and find that the archetypes correspond to different ways of learning. In the final part of the research, we investigate to what extent playing styles are related to different demographics, in particular to national cultures. We investigate this for four popular massively multiplayer online games, namely Battlefield 4, Counter-Strike, Dota 2, and Destiny. We find that playing styles are related to nationality and cultural dimensions, and that there are clear similarities between the playing styles of similar cultures. In particular, the Hofstede dimension Individualism explained most of the variance in playing styles between national cultures for the games that we examined.
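
    A hedged sketch of the winner-prediction setup described above: train a classifier on per-match feature vectors (e.g. economic indicators or shooter performance metrics) and inspect feature importances to see which features are strong predictors. The feature layout and data below are synthetic placeholders, not the thesis's datasets or models.

```python
# Illustrative winner prediction from per-match features (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_matches, n_features = 1000, 12   # e.g. resources gathered, APM, kill/death ratio
X = rng.normal(size=(n_matches, n_features))
# Synthetic label: winning depends mostly on features 0 and 3, plus noise.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_matches) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Feature importances indicate which match features are strong predictors.
print("top feature indices:", np.argsort(clf.feature_importances_)[::-1][:3])
```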

    Deep learning for video game playing

    In this article, we review recent deep learning advances in the context of how they have been applied to play different types of video games, such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces, and sparse rewards.

    On Efficient Reinforcement Learning for Full-length Game of StarCraft II

    StarCraft II (SC2) poses a grand challenge for reinforcement learning (RL), whose main difficulties include a huge state space, a varying action space, and a long time horizon. In this work, we investigate a set of RL techniques for the full-length game of StarCraft II. We investigate a hierarchical RL approach involving extracted macro-actions and a hierarchical architecture of neural networks. We investigate a curriculum transfer training procedure and train the agent on a single machine with 4 GPUs and 48 CPU threads. On a 64x64 map and using restrictive units, we achieve a win rate of 99% against the level-1 built-in AI. Through the curriculum transfer learning algorithm and a mixture of combat models, we achieve a 93% win rate against the most difficult non-cheating built-in AI (level-7). In this extended version of the paper, we improve our architecture to train the agent against the cheating-level AIs and achieve win rates of 96%, 97%, and 94% against the level-8, level-9, and level-10 AIs, respectively. Our code is at https://github.com/liuruoze/HierNet-SC2. To provide an AlphaStar-like baseline for our work as well as for the research and open-source community, we reproduce a scaled-down version of it, mini-AlphaStar (mAS). The latest version of mAS is 1.07, which can be trained on the raw action space of 564 actions. It is designed to run training on a single common machine by making the hyper-parameters adjustable. We then compare our work with mAS using the same resources and show that our method is more effective. The code for mini-AlphaStar is at https://github.com/liuruoze/mini-AlphaStar. We hope our study sheds some light on future research into efficient reinforcement learning for SC2 and other large-scale games.
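
    The hierarchical idea described above can be sketched as a two-level policy: a controller picks a hand-extracted macro-action, and a per-macro sub-policy emits low-level actions while that macro-action is active. The macro-action names, dimensions, and networks below are placeholders for illustration, not the paper's architecture.

```python
# Minimal sketch of a two-level (controller + sub-policy) action hierarchy.
import torch
import torch.nn as nn

MACRO_ACTIONS = ["develop_economy", "build_army", "attack", "defend"]  # illustrative
STATE_DIM, MICRO_ACTION_DIM = 64, 10  # placeholder sizes

def policy_head(out_dim: int) -> nn.Module:
    """Small MLP producing logits over a discrete action set."""
    return nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, out_dim))

controller = policy_head(len(MACRO_ACTIONS))      # high level: picks a macro-action
sub_policies = nn.ModuleList(                     # low level: one policy per macro-action
    [policy_head(MICRO_ACTION_DIM) for _ in MACRO_ACTIONS])

state = torch.randn(STATE_DIM)                    # stand-in observed game state
macro = torch.distributions.Categorical(logits=controller(state)).sample().item()
micro = torch.distributions.Categorical(logits=sub_policies[macro](state)).sample().item()
print(f"macro-action: {MACRO_ACTIONS[macro]}, micro-action id: {micro}")
```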

    ASPIRE Adaptive strategy prediction in a RTS environment

    When playing a Real-Time Strategy (RTS) game against a non-human player (bot), it is important that the bot can employ different strategies to create a challenging experience over time. In this thesis we aim to improve how the bot predicts the player's strategy by analyzing replays of that player's games. This way the bot can change its strategy based on its knowledge of the game state and the strategies the player has used before. We constructed a Bayesian Network to predict the opponent's strategy and integrated it into a pre-existing bot. Based on the results of our experiments, we can state that the Bayesian Network adapted to the strategies our bot was exposed to. In addition, the Bayesian Network only predicted strategies that were possible given the obtained information about the game state.
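
    The sketch below illustrates the kind of inference such a network performs: combining a prior over opponent strategies with conditional probabilities of observed game-state evidence. For brevity it uses the simplest possible network (one strategy node with independent evidence children, i.e. naive Bayes); the strategy names, evidence variables, and probabilities are invented for illustration and are not taken from the thesis.

```python
# Toy posterior over opponent strategies given scouted evidence.
PRIOR = {"rush": 0.3, "expand": 0.4, "tech": 0.3}

# P(evidence observed | strategy), one table per observable game-state cue.
LIKELIHOOD = {
    "early_barracks": {"rush": 0.8, "expand": 0.2, "tech": 0.3},
    "second_base":    {"rush": 0.1, "expand": 0.9, "tech": 0.4},
    "fast_lab":       {"rush": 0.1, "expand": 0.3, "tech": 0.8},
}

def posterior(observed: dict) -> dict:
    """Return P(strategy | observations) for True/False evidence values."""
    scores = dict(PRIOR)
    for var, seen in observed.items():
        for strat in scores:
            p = LIKELIHOOD[var][strat]
            scores[strat] *= p if seen else (1.0 - p)
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}

# Example: scouting reveals an early barracks but no second base yet.
print(posterior({"early_barracks": True, "second_base": False}))
```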