
    A Bayesian Model for RTS Units Control applied to StarCraft

    In real-time strategy (RTS) games, the player must reason about high-level strategy and planning while also executing effective tactics and even micro-managing individual units. Enabling an artificial agent to deal with such a task entails breaking down the complexity of this environment. To that end, we propose to control units locally in the fashion of Bayesian sensory-motor robots, with higher-level orders integrated as perceptions. As complete inference spanning global strategy down to the needs of individual units is intractable, we embrace incompleteness through a hierarchical model able to deal with uncertainty. We developed and applied our approach on a StarCraft AI competition bot.
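
    To make the idea concrete, here is a minimal sketch of this kind of naive-Bayes sensory-motor fusion for a single unit's movement direction. The sensor names, the direction discretization, and all probability tables below are illustrative assumptions, not the paper's actual model.

        import numpy as np

        # A sketch of naive-Bayes sensory-motor fusion for one unit, assuming a
        # uniform prior over 8 discretized movement directions and conditionally
        # independent sensors: P(dir | sensors) is proportional to the product
        # of P(sensor_i | dir) over all sensors.

        DIRS = 8  # compass bins: N, NE, E, SE, S, SW, W, NW

        def fuse_direction(likelihoods):
            """Multiply per-sensor likelihoods over directions and normalize."""
            post = np.ones(DIRS)
            for lik in likelihoods:
                post *= lik
            return post / post.sum()

        # Hypothetical sensor readings (each row is P(observation | direction)).
        threat    = np.array([.02, .02, .05, .20, .30, .20, .15, .06])  # flee fire
        objective = np.array([.30, .25, .10, .05, .02, .02, .06, .20])  # follow order
        crowding  = np.array([.10, .15, .15, .10, .10, .15, .15, .10])  # avoid allies

        posterior = fuse_direction([threat, objective, crowding])
        print("P(direction | sensors):", np.round(posterior, 3))
        print("chosen bin:", int(np.argmax(posterior)))

    In this scheme, a higher-level order can enter as just one more likelihood factor in the product, which is what keeps per-unit inference cheap even as strategy-level information is folded in.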

    MSC: A Dataset for Macro-Management in StarCraft II

    Macro-management is an important problem in StarCraft and has been studied for a long time. Various datasets together with assorted methods have been proposed in the last few years, but these datasets have several shortcomings that hold back academic and industrial research: 1) some lack standard preprocessing, parsing, and feature-extraction procedures, as well as predefined training, validation, and test sets; 2) some are specific to certain macro-management tasks; 3) some are either too small or lack enough labeled data for modern machine learning algorithms such as deep neural networks. As a result, most previous methods are trained on varying features and evaluated on different test sets drawn from the same or different datasets, making direct comparison difficult. To boost research on macro-management in StarCraft, we release a new dataset, MSC, built on the SC2LE platform. MSC consists of well-designed feature vectors, predefined high-level actions, and the final result of each match. We also split MSC into training, validation, and test sets for convenient evaluation and comparison. Besides the dataset, we propose a baseline model and present initial baseline results for global state evaluation and build order prediction, two of the key tasks in macro-management. Various downstream tasks and analyses of the dataset are also described to support research on macro-management in StarCraft II. Homepage: https://github.com/wuhuikai/MSC
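
    As a rough illustration of one of the two baseline tasks, global state evaluation, the sketch below fits a logistic-regression classifier that maps a per-frame feature vector to the final match result. The data here is synthetic and the shapes and hyperparameters are assumptions; the real MSC feature format, action set, and splits are documented on the homepage above.

        import numpy as np

        # Global state evaluation as binary classification: given a state feature
        # vector, predict whether the player eventually wins. Synthetic stand-in
        # data below; real MSC frames and labels would be loaded from the dataset.

        rng = np.random.default_rng(0)
        N, D = 4096, 128                       # assumed: frames x feature dims
        X = rng.normal(size=(N, D))
        y = (X @ rng.normal(size=D) + rng.normal(size=N) > 0).astype(float)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # Plain gradient-descent logistic regression as a simple baseline.
        w, b, lr = np.zeros(D), 0.0, 0.1
        for _ in range(200):
            p = sigmoid(X @ w + b)
            w -= lr * (X.T @ (p - y)) / N
            b -= lr * (p - y).mean()

        acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
        print(f"training accuracy (synthetic data): {acc:.3f}")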

    ASPIRE: Adaptive strategy prediction in an RTS environment

    When playing a Real Time Strategy (RTS) game against a non-human player (bot), it is important that the bot can employ different strategies to create a challenging experience over time. In this thesis we aim to improve the bot's ability to predict the player's strategy by analyzing replays of that player's previous games. This way the bot can change its strategy based on its knowledge of the game state and the strategies the player has used before. We constructed a Bayesian network to handle the predictions of the opponent's strategy and integrated it into a preexisting bot. Based on the results of our experiments, we can state that the Bayesian network adapted to the strategies our bot was exposed to. In addition, we can see that the Bayesian network predicted only the strategies that were possible given the obtained information about the game state.
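
    The sketch below shows the flavor of the Bayesian inference involved: a belief over opponent strategies, which could be seeded from statistics over the player's replays, is updated as scouting observations arrive. The strategy labels, observables, and conditional probabilities are invented for illustration and are not taken from the thesis.

        import numpy as np

        # Belief over opponent strategies, updated by Bayes' rule as scouting
        # information comes in. Observations are assumed conditionally
        # independent given the strategy (a simplification of a full network).

        strategies = ["rush", "tech", "expand"]
        belief = np.array([0.4, 0.3, 0.3])     # prior, e.g. from past replays

        # P(observation is seen | strategy), for two binary observables.
        p_early_barracks = np.array([0.90, 0.30, 0.20])
        p_fast_expansion = np.array([0.05, 0.20, 0.85])

        def update(belief, p_obs, seen):
            """One Bayes update: multiply in the (counter-)likelihood, renormalize."""
            belief = belief * (p_obs if seen else 1.0 - p_obs)
            return belief / belief.sum()

        belief = update(belief, p_early_barracks, seen=True)
        belief = update(belief, p_fast_expansion, seen=False)
        for s, p in zip(strategies, belief):
            print(f"P({s} | scouting) = {p:.3f}")

    Strategies inconsistent with what has been scouted receive near-zero likelihood and so lose posterior mass, matching the observation above that only strategies possible under the known game state are predicted.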

    Learning macromanagement in StarCraft from replays using deep learning

    The real-time strategy game StarCraft has proven to be a challenging environment for artificial intelligence techniques, and as a result, current state-of-the-art solutions consist of numerous hand-crafted modules. In this paper, we show how macromanagement decisions in StarCraft can be learned directly from game replays using deep learning. Neural networks are trained on 789,571 state-action pairs extracted from 2,005 replays of highly skilled players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting the next build action. By integrating the trained network into UAlbertaBot, an open source StarCraft bot, the system can significantly outperform the game's built-in Terran bot and play competitively against UAlbertaBot with a fixed rush strategy. To our knowledge, this is the first time macromanagement tasks have been learned directly from replays in StarCraft. While the best hand-crafted strategies remain the state of the art, the deep network approach can express a wide range of different strategies, and improving its performance further with deep reinforcement learning is therefore a promising avenue for future research. Ultimately this approach could lead to strong StarCraft bots that are less reliant on hard-coded strategies. Comment: 8 pages, to appear in the proceedings of the IEEE Conference on Computational Intelligence and Games (CIG 2017).
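
    For a concrete picture of the setup, here is a minimal sketch in PyTorch: a small network trained on (state, next-build-action) pairs and scored with the same top-1/top-3 error metrics. The state dimension, action vocabulary, architecture, and synthetic batch are placeholders, not the paper's actual network or replay data.

        import torch
        import torch.nn as nn

        STATE_DIM, N_ACTIONS = 64, 30          # assumed sizes for illustration

        # Small MLP mapping a game-state vector to logits over build actions.
        model = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_ACTIONS),
        )
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # Stand-in for replay-derived state-action pairs.
        states = torch.randn(512, STATE_DIM)
        actions = torch.randint(0, N_ACTIONS, (512,))

        for _ in range(100):                   # supervised training loop
            loss = loss_fn(model(states), actions)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # Top-1 / top-3 error, the metrics reported in the abstract.
        with torch.no_grad():
            top3 = model(states).topk(3, dim=1).indices
        top1_err = (top3[:, 0] != actions).float().mean().item()
        top3_err = (~(top3 == actions.unsqueeze(1)).any(dim=1)).float().mean().item()
        print(f"top-1 error: {top1_err:.3f}, top-3 error: {top3_err:.3f}")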