Macro action selection with deep reinforcement learning in StarCraft
StarCraft (SC) is one of the most popular and successful Real Time Strategy
(RTS) games. In recent years, SC is also widely accepted as a challenging
testbed for AI research because of its enormous state space, partial
observability, multi-agent collaboration, and so on. With the help of the annual
AIIDE and CIG competitions, a growing number of SC bots are proposed and
continuously improved. However, a large gap remains between top-level bots
and professional human players. One vital reason is that current SC bots
mainly rely on predefined rules to select macro actions during their games.
These rules are not scalable and efficient enough to cope with the enormous yet
partially observed state space in the game. In this paper, we propose a deep
reinforcement learning (DRL) framework to improve the selection of macro
actions. Our framework combines Ape-X DQN with a Long Short-Term Memory (LSTM)
network. We use this framework to build our bot, LastOrder. Our evaluation,
based on training against all bots from the AIIDE 2017 StarCraft AI competition
set, shows that LastOrder achieves an 83% win rate, outperforming 26 of the
28 entrants.
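The Ape-X component refers to distributed prioritized experience replay. As a rough illustration of that idea only (a simplified toy sketch, not the authors' implementation), a buffer that samples transitions in proportion to a TD-error-derived priority might look like:

```python
import random

class PrioritizedReplay:
    """Toy prioritized replay: sample transitions with probability
    proportional to priority**alpha, as in Ape-X DQN (simplified)."""

    def __init__(self, alpha=0.6):
        self.alpha = alpha
        self.items = []        # stored transitions
        self.priorities = []   # (|TD error| + eps) ** alpha

    def add(self, transition, td_error, eps=1e-3):
        self.items.append(transition)
        self.priorities.append((abs(td_error) + eps) ** self.alpha)

    def sample(self, k):
        # Roulette-wheel selection over priorities (with replacement).
        total = sum(self.priorities)
        picks = []
        for _ in range(k):
            r, acc = random.uniform(0.0, total), 0.0
            for item, p in zip(self.items, self.priorities):
                acc += p
                if acc >= r:
                    picks.append(item)
                    break
        return picks
```

In the full Ape-X setup, many actors feed one such shared buffer while a single learner consumes prioritized batches; this sketch shows only the sampling rule.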
Towards Informed Exploration for Deep Reinforcement Learning
In this thesis, we discuss various techniques for improving exploration in deep reinforcement learning. We begin with a brief review of reinforcement learning (RL) and the fundamental exploration vs. exploitation trade-off. We then review how deep RL has improved upon classical RL and summarize six categories of recent exploration methods for deep RL, ordered by increasing use of prior information. We examine representative works in three of these categories and discuss their strengths and weaknesses. The first category, represented by Soft Q-learning, uses regularization to encourage exploration. The second category, represented by count-based exploration via hashing, maps states to hash codes for counting and assigns higher exploration bonuses to less frequently encountered states. The third category utilizes hierarchy and is represented by a modular architecture for RL agents that play StarCraft II. Finally, we conclude that exploration informed by prior knowledge is a promising research direction and suggest potentially impactful topics.
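The count-based-via-hashing idea can be sketched as follows. The work referenced in the abstract uses SimHash over state embeddings; this toy version (an assumption for illustration) hashes the raw state representation instead, but shows the same bonus rule, beta / sqrt(n(state)):

```python
import hashlib
from collections import defaultdict
from math import sqrt

class HashCountBonus:
    """Count visits to hashed states and return an exploration bonus
    beta / sqrt(n(state)), so rarely seen states earn larger bonuses."""

    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)

    def bonus(self, state):
        # Discretize the (possibly continuous) state by hashing its repr;
        # the referenced method uses SimHash rather than a cryptographic hash.
        code = hashlib.sha1(repr(state).encode()).hexdigest()[:8]
        self.counts[code] += 1
        return self.beta / sqrt(self.counts[code])
```

The bonus is added to the environment reward during training, so states seen once yield a bonus of beta, while frequently visited states contribute almost nothing.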
Deep learning for video game playing
In this article, we review recent Deep Learning advances in the context of
how they have been applied to play different types of video games such as
first-person shooters, arcade games, and real-time strategy games. We analyze
the unique requirements that different game genres pose to a deep learning
system and highlight important open challenges in the context of applying these
machine learning methods to video games, such as general game playing, dealing
with extremely large decision spaces, and sparse rewards.
Player Behavior Modeling In Video Games
In this research, we study players' interactions in video games to understand player behavior. The first part of the research concerns predicting the winner of a game, which we apply to StarCraft and Destiny. We manage to build models for these games with reasonable to high accuracy. We also investigate which features of a game are strong predictors: economic features and micro commands for StarCraft, and key shooter performance metrics for Destiny, though the features differ between match types. The second part of the research concerns distinguishing the playing styles of StarCraft and Destiny players. We find that we can indeed recognize different styles of playing in these games, related to different match types. We relate these playing styles to the chance of winning, but find no significant differences between the effects of different playing styles on winning; they do, however, affect the length of matches. In Destiny, we also investigate which player types emerge when we apply archetype analysis to playing-style features related to change in performance, and find that the archetypes correspond to different ways of learning. In the final part of the research, we investigate to what extent playing styles are related to demographics, in particular to national cultures. We investigate this for four popular massively multiplayer online games, namely Battlefield 4, Counter-Strike, Dota 2, and Destiny. We find that playing styles are related to nationality and cultural dimensions, and that there are clear similarities between the playing styles of similar cultures. In particular, the Hofstede dimension Individualism explains most of the variance in playing styles between national cultures for the games we examined.
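Winner-prediction models of the kind described above map match features to a win probability. A minimal sketch using logistic regression (hypothetical weights and feature names; the study's actual models are not specified here):

```python
from math import exp

def predict_win_prob(features, weights, bias=0.0):
    """Toy logistic-regression winner predictor. The features could be,
    e.g., economic stats and micro commands in StarCraft, or key shooter
    performance metrics in Destiny."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + exp(-z))  # squash score to a probability in (0, 1)
```

A feature with a large positive weight (say, a resource lead) pushes the predicted win probability toward 1; a zero score maps to 0.5, i.e. a toss-up.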
Neural networks that express multiple strategies in the video game StarCraft 2.
Using neural networks and supervised learning, we have created models capable of solving problems at a superhuman level. Nevertheless, this training process results in models that learn policies that average the plethora of behaviors usually found in datasets. In this thesis we present and study the Behavioral Repertoires Imitation Learning (BRIL) technique, which allows training models that express multiple behaviors in an adjustable way. In BRIL, the user designs a behavior space, projects this space into low-dimensional coordinates, and uses these coordinates as additional input to the model. Upon deployment, the user can adjust the model to express a given behavior by fixing these inputs to the corresponding coordinates. The main research question concerns the relationship between the dimensionality reduction algorithm and the ability of the trained models to replicate and express the represented behaviors. We study three dimensionality reduction algorithms: Principal Component Analysis (PCA), Isometric Feature Mapping (Isomap), and Uniform Manifold Approximation and Projection (UMAP); we design and embed a behavior space in the video game StarCraft 2, train different models for each embedding, and test the ability of each model to express multiple strategies. Results show that with BRIL we are able to train models that express the multiple behaviors present in the dataset. The geometric structure each reduction method preserves induces different separations of behaviors, and these separations are reflected in the models' conduct.
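The BRIL conditioning mechanism, feeding fixed low-dimensional behavior coordinates to the model alongside the game state, can be sketched as below. This is a hypothetical linear policy for illustration, not the thesis's neural network:

```python
def make_policy(weights):
    """Build a toy BRIL-style policy: each action's score is a linear
    function of the concatenated state features and behavior coordinates.
    Fixing the coordinates at deployment selects a playing style."""
    def policy(state_features, behavior_coords):
        # Concatenate state with the behavior embedding, score each action,
        # and act greedily.
        x = list(state_features) + list(behavior_coords)
        scores = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
        return max(range(len(scores)), key=scores.__getitem__)
    return policy
```

With the same state, different behavior coordinates can select different actions, which is the adjustable-behavior property BRIL is after.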