73 research outputs found
Monte Carlo Tree Search with Heuristic Evaluations using Implicit Minimax Backups
Monte Carlo Tree Search (MCTS) has improved the performance of game engines
in domains such as Go, Hex, and general game playing. MCTS has been shown to
outperform classic alpha-beta search in games where good heuristic evaluations
are difficult to obtain. In recent years, combining ideas from traditional
minimax search in MCTS has been shown to be advantageous in some domains, such
as Lines of Action, Amazons, and Breakthrough. In this paper, we propose a new
way to use heuristic evaluations to guide the MCTS search by storing the two
sources of information, estimated win rates and heuristic evaluations,
separately. Rather than using the heuristic evaluations to replace the
playouts, our technique backs them up implicitly during the MCTS simulations.
These minimax values are then used to guide future simulations. We show that
using implicit minimax backups leads to stronger play performance in Kalah,
Breakthrough, and Lines of Action.
Comment: 24 pages, 7 figures, 9 tables, expanded version of paper presented at the IEEE Conference on Computational Intelligence and Games (CIG) 2014.
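The implicit minimax backup idea can be sketched roughly as follows: each node keeps the usual visit/win statistics and, separately, an implicit minimax value seeded by the heuristic evaluation and backed up minimax-style; selection mixes the two. This is an illustrative sketch, not the paper's exact formulation — the `Node` layout, the negamax sign convention, and the mixing weight `alpha` are assumptions.

```python
import math

class Node:
    """Hypothetical MCTS node keeping win-rate and minimax statistics separately."""
    def __init__(self, heuristic=0.0):
        self.children = []
        self.visits = 0
        self.wins = 0.0          # running sum of simulation outcomes
        self.tau = heuristic     # implicit minimax value, seeded by the heuristic

def backup(path, outcome):
    """After a simulation, update win rates AND back up tau via (nega)minimax."""
    for node in reversed(path):
        node.visits += 1
        node.wins += outcome
        if node.children:
            # Implicit minimax backup: best of the children's negated values.
            node.tau = max(-c.tau for c in node.children)

def select(node, alpha=0.4, c=1.0):
    """UCT-style selection mixing estimated win rate with the minimax value."""
    def score(ch):
        q = ch.wins / ch.visits if ch.visits else 0.0
        mixed = (1 - alpha) * q + alpha * ch.tau
        explore = c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1))
        return mixed + explore
    return max(node.children, key=score)
```

The point of keeping `wins`/`visits` and `tau` as separate fields is exactly the abstract's "two sources of information": the heuristic minimax value guides future simulations without replacing the sampled win-rate estimate.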
No-Regret Learning in Extensive-Form Games with Imperfect Recall
Counterfactual Regret Minimization (CFR) is an efficient no-regret learning
algorithm for decision problems modeled as extensive games. CFR's regret bounds
depend on the requirement of perfect recall: players always remember
information that was revealed to them and the order in which it was revealed.
In games without perfect recall, however, CFR's guarantees do not apply. In
this paper, we present the first regret bound for CFR when applied to a general
class of games with imperfect recall. In addition, we show that CFR applied to
any abstraction belonging to our general class results in a regret bound not
just for the abstract game, but for the full game as well. We verify our theory
and show how imperfect recall can be used to trade a small increase in regret
for a significant reduction in memory in three domains: die-roll poker, phantom
tic-tac-toe, and Bluff.
Comment: 21 pages, 4 figures, expanded version of article to appear in Proceedings of the Twenty-Ninth International Conference on Machine Learning.
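At the core of the CFR algorithm discussed above is the regret-matching rule, which turns the cumulative counterfactual regrets stored at an information set into the current strategy. A minimal sketch of that one step (the list representation of per-action regrets is illustrative):

```python
def regret_matching(cum_regrets):
    """Current strategy from cumulative regrets: normalize the positive
    regrets; fall back to uniform when no regret is positive."""
    positives = [max(r, 0.0) for r in cum_regrets]
    total = sum(positives)
    n = len(cum_regrets)
    if total <= 0:
        return [1.0 / n] * n
    return [p / total for p in positives]
```

Running this update at every information set on every iteration, and averaging the resulting strategies, is what yields CFR's no-regret guarantee under perfect recall — the condition the paper relaxes.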
Variance Reduction in Monte Carlo Counterfactual Regret Minimization (VR-MCCFR) for Extensive Form Games using Baselines
Learning strategies for imperfect information games from samples of
interaction is a challenging problem. A common method for this setting, Monte
Carlo Counterfactual Regret Minimization (MCCFR), can have slow long-term
convergence rates due to high variance. In this paper, we introduce a variance
reduction technique (VR-MCCFR) that applies to any sampling variant of MCCFR.
Using this technique, per-iteration estimated values and updates are
reformulated as a function of sampled values and state-action baselines,
similar to their use in policy gradient reinforcement learning. The new
formulation allows estimates to be bootstrapped from other estimates within the
same episode, propagating the benefits of baselines along the sampled
trajectory; the estimates remain unbiased even when bootstrapping from other
estimates. Finally, we show that given a perfect baseline, the variance of the
value estimates can be reduced to zero. Experimental evaluation shows that
VR-MCCFR brings an order of magnitude speedup, while the empirical variance
decreases by three orders of magnitude. The decreased variance allows, for the
first time, CFR+ to be used with sampling, increasing the speedup to two orders
of magnitude.
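The baseline construction described above acts like a control variate: actions not sampled on this iteration fall back to their baseline, while the sampled action's estimate corrects its baseline with an importance-weighted residual, which keeps the estimator unbiased. A minimal sketch under that reading (the function name and dict layout are assumptions, not the paper's API):

```python
def baseline_corrected_values(sampled_action, sampled_value, sample_prob, baselines):
    """Per-action value estimates after sampling a single action.

    Unsampled actions keep their baseline b(a). The sampled action gets
    b(a) + (v - b(a)) / sigma(a), so taking the expectation over which
    action is sampled recovers the true value:
        sigma * (b + (v - b)/sigma) + (1 - sigma) * b = v.
    """
    est = dict(baselines)
    b = baselines[sampled_action]
    est[sampled_action] = b + (sampled_value - b) / sample_prob
    return est
```

With a perfect baseline b(a) = v(a), the residual vanishes and every estimate is exact regardless of which action was sampled — the zero-variance case the abstract mentions.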