    Simple Regret Optimization in Online Planning for Markov Decision Processes

    We consider online planning in Markov decision processes (MDPs). In online planning, the agent focuses only on its current state, deliberates about the set of possible policies from that state onwards and, when interrupted, uses the outcome of that exploratory deliberation to choose which action to perform next. The performance of algorithms for online planning is assessed in terms of simple regret, the agent's expected performance loss when the chosen action, rather than an optimal one, is followed. To date, state-of-the-art algorithms for online planning in general MDPs have been either best-effort or have guaranteed only a polynomial-rate reduction of simple regret over time. Here we introduce a new Monte-Carlo tree search algorithm, BRUE, that guarantees an exponential-rate reduction of both simple regret and error probability. The algorithm is based on a simple yet non-standard state-space sampling scheme, MCTS2e, in which different parts of each sample are dedicated to different exploratory objectives. Our empirical evaluation shows that BRUE not only provides superior performance guarantees but is also very effective in practice, comparing favorably to the state of the art. We then extend BRUE with a variant of "learning by forgetting." The resulting family of algorithms, BRUE(alpha), generalizes BRUE, improves the exponential factor in the upper bound on its reduction rate, and exhibits even more attractive empirical performance.
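    To make the notion of simple regret concrete, here is a minimal sketch of a one-shot recommendation problem; it is not the authors' BRUE/MCTS2e code, and the Gaussian reward model, the action names, and the sampling budget are all illustrative assumptions.

```python
# Hedged sketch: simple regret of a one-shot recommendation, with actions
# evaluated under a uniform sampling budget. Not the paper's algorithm; the
# reward model (Gaussian noise around assumed true means) is an assumption.
import random

def simple_regret(true_means, recommended):
    # Expected loss of following the recommended action instead of an optimal one.
    return max(true_means.values()) - true_means[recommended]

def recommend_by_uniform_sampling(true_means, samples_per_action=100):
    # Give every action the same sampling budget, then recommend the action
    # with the highest empirical mean (a simple-regret-oriented strategy,
    # as opposed to the cumulative-regret focus of UCB-style selection).
    estimates = {}
    for action, mu in true_means.items():
        draws = [random.gauss(mu, 1.0) for _ in range(samples_per_action)]
        estimates[action] = sum(draws) / len(draws)
    return max(estimates, key=estimates.get)

true_means = {"left": 1.0, "right": 0.8, "stay": 0.2}
choice = recommend_by_uniform_sampling(true_means)
print(choice, simple_regret(true_means, choice))  # regret is 0.0 when "left" is picked
```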

    A Survey of Monte Carlo Tree Search Methods

    Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
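    As a companion to the survey's outline of the core algorithm, the following is a generic UCT-style sketch of the four MCTS phases (selection, expansion, simulation, backpropagation). The `game` interface used here (legal_actions, apply, is_terminal, reward) is an assumed placeholder, not an API defined by the survey.

```python
# Generic UCT-style MCTS sketch (selection, expansion, simulation,
# backpropagation). The `game` object is an assumed interface exposing
# legal_actions(state), apply(state, action), is_terminal(state), reward(state).
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}             # action -> Node
        self.visits, self.value = 0, 0.0

def uct_child(node, c=1.4):
    # Selection rule: maximize mean value plus an exploration bonus (UCB1).
    return max(node.children.values(),
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(game, root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # Selection: descend through fully expanded, non-terminal nodes.
        while node.children and len(node.children) == len(game.legal_actions(node.state)):
            node = uct_child(node)
        # Expansion: add one untried child unless the node is terminal.
        untried = [a for a in game.legal_actions(node.state) if a not in node.children]
        if untried and not game.is_terminal(node.state):
            action = random.choice(untried)
            node.children[action] = Node(game.apply(node.state, action), parent=node)
            node = node.children[action]
        # Simulation: random playout from the new node to a terminal state.
        state = node.state
        while not game.is_terminal(state):
            state = game.apply(state, random.choice(game.legal_actions(state)))
        outcome = game.reward(state)
        # Backpropagation: update visit counts and values up to the root.
        while node is not None:
            node.visits += 1
            node.value += outcome
            node = node.parent
    # Recommend the most visited action at the root.
    return max(root.children, key=lambda a: root.children[a].visits)
```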

    Thompson sampling based Monte-Carlo planning in POMDPs

    Monte-Carlo tree search (MCTS) has been drawing great interest in recent years for planning under uncertainty. One of the key challenges is the tradeoff between exploration and exploitation. To address this, we introduce a novel online planning algorithm for large POMDPs using Thompson sampling based MCTS that balances between cumulative and simple regret. The proposed algorithm, Dirichlet-Dirichlet-NormalGamma based Partially Observable Monte-Carlo Planning (D2NG-POMCP), treats the accumulated reward of performing an action from a belief state in the MCTS search tree as a random variable following an unknown distribution with hidden parameters. A Bayesian method is used to model and infer the posterior distribution of these parameters by choosing the conjugate prior in the form of a combination of two Dirichlet and one NormalGamma distributions. Thompson sampling is exploited to guide action selection in the search tree. Experimental results confirm that our algorithm outperforms state-of-the-art approaches on several common benchmark problems.
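    The NormalGamma component can be illustrated in isolation with a small sketch, assuming each action's return is Gaussian with unknown mean and precision: a NormalGamma posterior is kept per action, and Thompson sampling draws a plausible mean from each posterior to pick the next action. This is a hedged illustration of the general technique, not the D2NG-POMCP implementation; the prior hyperparameters and the toy environment are assumptions.

```python
# Sketch of Thompson sampling with a NormalGamma posterior over the unknown
# mean and precision of each action's Gaussian return. Illustrative only;
# priors, action names, and the toy reward model are assumptions.
import math
import random

class NormalGammaArm:
    def __init__(self, mu0=0.0, lam=1.0, alpha=1.0, beta=1.0):
        # NormalGamma prior: mean mu0, pseudo-count lam, Gamma shape alpha, rate beta.
        self.mu0, self.lam, self.alpha, self.beta = mu0, lam, alpha, beta

    def update(self, x):
        # Sequential conjugate update after observing a single reward x.
        self.beta += 0.5 * self.lam * (x - self.mu0) ** 2 / (self.lam + 1.0)
        self.mu0 = (self.lam * self.mu0 + x) / (self.lam + 1.0)
        self.lam += 1.0
        self.alpha += 0.5

    def sample_mean(self):
        # Thompson sampling: draw a precision, then a mean, from the posterior.
        tau = random.gammavariate(self.alpha, 1.0 / self.beta)  # shape, scale
        return random.gauss(self.mu0, math.sqrt(1.0 / (self.lam * tau)))

def thompson_choice(arms):
    # Choose the action whose sampled posterior mean is largest.
    return max(arms, key=lambda a: arms[a].sample_mean())

# Toy usage: two actions with assumed true mean rewards 0.3 and 0.7.
arms = {"a0": NormalGammaArm(), "a1": NormalGammaArm()}
true_means = {"a0": 0.3, "a1": 0.7}
for _ in range(500):
    a = thompson_choice(arms)
    arms[a].update(random.gauss(true_means[a], 1.0))
print(max(arms, key=lambda a: arms[a].mu0))  # usually prints "a1"
```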