Strategy games are a unique and interesting testbed for AI research due to their complex
rules and large state and action spaces. Recent work in game AI has shown that
strong, robust AI agents can be created by combining existing techniques of deep
learning and heuristic search. Heuristic search techniques typically make use of
an evaluation function to judge the value of a game state; however, these functions
have historically been hand-coded by game experts. Recent results have shown
that it is possible to use modern deep learning techniques to learn these evaluation
functions, bypassing the need for expert knowledge.
In this thesis, we explore the implementation of this idea in Prismata, an online
strategy game by Lunarch Studios. By generating game trace training data with
existing state-of-the-art AI agents, we are able to use a Machine Learning (ML)
approach to learn a new evaluation function. We trained several evaluation models
with varying parameters in order to compare prediction time against prediction accuracy.
To evaluate the strength of our learned model, we ran a tournament between
AI players that differ only in their state evaluation strategy. The results of this
tournament demonstrate that our learned model, when combined with the existing
Prismata Hierarchical Portfolio Search system, produces a new AI agent that is
able to defeat the previously strongest agents. A subset of the research presented
in this thesis was the subject of a publication in the Artificial Intelligence and Interactive
Digital Entertainment (AIIDE) 2019 Strategy Games Workshop [1].