Learning macromanagement in StarCraft from replays using deep learning
The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state of the art, the deep network approach is able to
express a wide range of different strategies, and so improving the network's
performance further with deep reinforcement learning is a promising avenue for
future research. Ultimately, this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies.

Comment: 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017)
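
As a rough illustration of the prediction task described in the abstract, the sketch below frames next-build-action prediction as multi-class classification over state-action pairs and computes top-1 and top-3 error rates of the kind reported. The abstract does not describe the actual network architecture or state encoding, so the feature dimension, layer sizes, and action count here are purely illustrative assumptions (PyTorch).

```python
# Minimal sketch: next-build-action prediction as classification.
# All dimensions and names below are illustrative assumptions; the
# abstract does not specify the paper's architecture or features.
import torch
import torch.nn as nn

STATE_DIM = 128     # assumed size of the encoded game-state vector
NUM_ACTIONS = 60    # assumed number of distinct build actions

class BuildOrderNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_ACTIONS),  # logits over build actions
        )

    def forward(self, state):
        return self.net(state)

def top_k_error(logits, targets, k):
    """Fraction of examples whose true action is not in the top-k predictions."""
    topk = logits.topk(k, dim=1).indices             # (batch, k)
    hit = (topk == targets.unsqueeze(1)).any(dim=1)  # (batch,)
    return 1.0 - hit.float().mean().item()

# Dummy batch standing in for (state, action) pairs extracted from replays.
model = BuildOrderNet()
states = torch.randn(32, STATE_DIM)
actions = torch.randint(0, NUM_ACTIONS, (32,))
logits = model(states)
print("top-1 error:", top_k_error(logits, actions, 1))
print("top-3 error:", top_k_error(logits, actions, 3))
```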