
    Non-Linear Monte-Carlo Search in Civilization II

    This paper presents a new Monte-Carlo search algorithm for very large sequential decision-making problems. Our approach builds on the recent success of Monte-Carlo tree search algorithms, which estimate the value of states and actions from the mean outcome of random simulations. Instead of using a search tree, we apply non-linear regression, online, to estimate a state-action value function from the outcomes of random simulations. This value function generalizes between related states and actions, and can therefore provide more accurate evaluations after fewer simulations. We apply our Monte-Carlo search algorithm to the game of Civilization II, a challenging multi-agent strategy game with an enormous state space and around 10^{21} joint actions. We approximate the value function by a neural network, augmented by linguistic knowledge that is extracted automatically from the official game manual. We show that this non-linear value function is significantly more efficient than a linear value function. Our non-linear Monte-Carlo search wins 80% of games against the handcrafted, built-in AI for Civilization II.
    Funding: National Science Foundation (U.S.) (CAREER grant IIS-0448168); National Science Foundation (U.S.) (grant IIS-0835652); United States. Defense Advanced Research Projects Agency (DARPA Machine Reading Program, FA8750-09-C-0172); Microsoft Research (New Faculty Fellowship).
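The core idea of the abstract above — fitting a non-linear state-action value function online from rollout outcomes, instead of building a search tree — can be sketched roughly as follows. This is a toy illustration, not the paper's architecture: `TinyQNet`, its one-hot features, and the training loop are all hypothetical stand-ins.

```python
import math
import random

class TinyQNet:
    """Toy one-hidden-layer network: Q(s, a) ~ w2 . tanh(W1 . phi(s, a)).
    (Hypothetical sketch; the paper's network and features differ.)"""

    def __init__(self, n_features, n_hidden=8, lr=0.05, seed=0):
        rng = random.Random(seed)
        self.W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_features)]
                   for _ in range(n_hidden)]
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.lr = lr

    def _hidden(self, x):
        return [math.tanh(sum(w * xi for w, xi in zip(row, x)))
                for row in self.W1]

    def predict(self, x):
        h = self._hidden(x)
        return sum(w * hi for w, hi in zip(self.w2, h))

    def update(self, x, target):
        # One SGD step on squared error between the current prediction
        # and the observed rollout outcome.
        h = self._hidden(x)
        err = sum(w * hi for w, hi in zip(self.w2, h)) - target
        for j, hj in enumerate(h):
            dh = err * self.w2[j] * (1.0 - hj * hj)  # backprop through tanh
            for i, xi in enumerate(x):
                self.W1[j][i] -= self.lr * dh * xi
            self.w2[j] -= self.lr * err * hj

def monte_carlo_search(net, state_features, actions, simulate, n_sims=500):
    # Run random simulations, training the value function online on each
    # outcome; then act greedily with respect to the learned values.
    for _ in range(n_sims):
        a = random.choice(actions)
        outcome = simulate(a)  # e.g. win = 1.0, loss = 0.0
        net.update(state_features(a), outcome)
    return max(actions, key=lambda a: net.predict(state_features(a)))
```

Because the regression generalizes across related state-action features, estimates can sharpen after far fewer simulations than a per-node tree statistic would need — which is the efficiency argument the abstract makes.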

    Learning to win by reading manuals in a Monte-Carlo framework

    This paper presents a novel approach for leveraging automatically extracted textual knowledge to improve the performance of control applications such as games. Our ultimate goal is to enrich a stochastic player with high-level guidance expressed in text. Our model jointly learns to identify text that is relevant to a given game state in addition to learning game strategies guided by the selected text. Our method operates in the Monte-Carlo search framework, and learns both text analysis and game strategies based only on environment feedback. We apply our approach to the complex strategy game Civilization II using the official game manual as the text guide. Our results show that a linguistically-informed game-playing agent significantly outperforms its language-unaware counterpart, yielding a 27% absolute improvement and winning over 78% of games when playing against the built-in AI of Civilization II.
    Funding: National Science Foundation (U.S.) (CAREER grant IIS-0448168); National Science Foundation (U.S.) (CAREER grant IIS-0835652); United States. Defense Advanced Research Projects Agency (DARPA Machine Reading Program, FA8750-09-C-0172); Microsoft Research (New Faculty Fellowship).

    Enhancing automated red teaming with Monte Carlo Tree Search

    This study investigated novel Automated Red Teaming methods that support replanning. Traditional Automated Red Teaming (ART) approaches usually use evolutionary computing methods for evolving plans using simulations. A drawback of this method is the inability to change a team's strategy part way through a simulation. This study focused on a Monte-Carlo Tree Search (MCTS) method in an ART environment that supports replanning, leading to better strategy decisions and a higher average score.
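The replanning idea in the abstract above — re-running the search at each decision point rather than committing to one evolved plan — can be sketched with a flat Monte-Carlo decision rule using UCB1 at the root. This is a simplified stand-in for full MCTS, and the names and the Bernoulli simulator below are hypothetical:

```python
import math
import random

def ucb1(total_visits, child_value, child_visits, c=1.4):
    # UCB1 score balancing exploitation (mean value) and exploration.
    if child_visits == 0:
        return float("inf")  # try every action at least once
    mean = child_value / child_visits
    return mean + c * math.sqrt(math.log(total_visits) / child_visits)

def mcts_decide(actions, simulate, n_sims=300):
    """Flat Monte-Carlo with UCB1 at the root. Calling this anew at every
    decision point is what enables mid-simulation replanning: the search
    always starts from the current state, not from a fixed upfront plan."""
    visits = {a: 0 for a in actions}
    value = {a: 0.0 for a in actions}
    for t in range(1, n_sims + 1):
        a = max(actions, key=lambda a: ucb1(t, value[a], visits[a]))
        value[a] += simulate(a)  # outcome of one random playout
        visits[a] += 1
    # Recommend the most-visited action (the robust-child rule).
    return max(actions, key=lambda a: visits[a])
```

An evolutionary approach evaluates whole plans end-to-end, so a strategy fixed at generation time cannot react mid-run; invoking `mcts_decide` at each step re-optimizes against whatever state the simulation has actually reached.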