Online Convex Optimization for Sequential Decision Processes and Extensive-Form Games
Regret minimization is a powerful tool for solving large-scale extensive-form
games. State-of-the-art methods rely on minimizing regret locally at each
decision point. In this work we derive a new framework for regret minimization
on sequential decision problems and extensive-form games with general compact
convex sets at each decision point and general convex losses, as opposed to
prior work which has been for simplex decision points and linear losses. We
call our framework laminar regret decomposition. It generalizes the CFR
algorithm to this more general setting. Furthermore, our framework enables a
new proof of CFR even in the known setting, which is derived from a perspective
of decomposing polytope regret, thereby leading to an arguably simpler
interpretation of the algorithm. Our generalization to convex compact sets and
convex losses allows us to develop new algorithms for several problems:
regularized sequential decision making, regularized Nash equilibria in
extensive-form games, and computing approximate extensive-form perfect
equilibria. Our generalization also leads to the first regret-minimization
algorithm for computing reduced-normal-form quantal response equilibria based
on minimizing local regrets. Experiments show that our framework leads to
algorithms that scale at a rate comparable to the fastest variants of
counterfactual regret minimization for computing Nash equilibrium, and
therefore our approach leads to the first algorithm for computing quantal
response equilibria in extremely large games. Finally, we show that our
framework enables a new kind of scalable opponent exploitation approach.
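To make the notion of minimizing regret locally at each decision point concrete, below is a minimal sketch (not taken from the paper) of regret matching, the standard local regret minimizer that CFR runs at each simplex decision point in the prior-work setting described above; the function names and the toy random losses are illustrative only.

```python
import numpy as np

def regret_matching_strategy(cumulative_regret):
    """Map cumulative positive regrets to a strategy on the simplex."""
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0.0:
        return positive / total
    # No positive regret yet: play uniformly.
    return np.full(len(cumulative_regret), 1.0 / len(cumulative_regret))

def run_regret_matching(loss_vectors):
    """Play regret matching against a sequence of linear losses.

    loss_vectors: iterable of length-n arrays; loss_vectors[t][i] is the
    loss of action i at time t.  Returns the average strategy played,
    which is the no-regret iterate for this single decision point.
    """
    n = len(loss_vectors[0])
    cumulative_regret = np.zeros(n)
    average_strategy = np.zeros(n)
    for loss in loss_vectors:
        strategy = regret_matching_strategy(cumulative_regret)
        expected_loss = strategy @ loss
        # Regret of each pure action relative to the strategy actually played.
        cumulative_regret += expected_loss - loss
        average_strategy += strategy
    return average_strategy / len(loss_vectors)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    losses = [rng.random(3) for _ in range(1000)]
    print(run_regret_matching(losses))
```

CFR combines such per-decision-point minimizers across the tree of decision points; the framework above replaces the simplex and the linear losses of this sketch with general compact convex sets and convex losses.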
Simple Regret Optimization in Online Planning for Markov Decision Processes
We consider online planning in Markov decision processes (MDPs). In online
planning, the agent focuses on its current state only, deliberates about the
set of possible policies from that state onwards and, when interrupted, uses
the outcome of that exploratory deliberation to choose what action to perform
next. The performance of algorithms for online planning is assessed in terms of
simple regret, which is the agent's expected performance loss when the chosen
action, rather than an optimal one, is followed.
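For concreteness, one standard formalization of this quantity (notation ours, not quoted from the paper): if the planner, when interrupted at state s, recommends action a, its simple regret is SR(s) = E[V*(s) - Q*(s, a)], where V* and Q* are the optimal value and action-value functions of the MDP and the expectation is over the randomness of the planning process; it is zero when the recommended action is optimal.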
To date, state-of-the-art algorithms for online planning in general MDPs are
either best effort, or guarantee only polynomial-rate reduction of simple
regret over time. Here we introduce a new Monte-Carlo tree search algorithm,
BRUE, that guarantees exponential-rate reduction of simple regret and error
probability. This algorithm is based on a simple yet non-standard state-space
sampling scheme, MCTS2e, in which different parts of each sample are dedicated
to different exploratory objectives. Our empirical evaluation shows that BRUE
not only provides superior performance guarantees, but is also very effective
in practice and compares favorably to the state of the art. We then extend BRUE
with a variant of "learning by forgetting." The resulting set of algorithms,
BRUE(alpha), generalizes BRUE, improves the exponential factor in the upper
bound on its reduction rate, and exhibits even more attractive empirical
performance.
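As an aside for intuition, the toy pure-exploration sketch below (illustrative only; it is not BRUE or MCTS2e, which operate on full MDP state spaces) shows the basic mechanism behind exponential-rate reduction of simple regret: if samples are spread uniformly and the recommendation is simply the empirically best action, a standard Hoeffding argument makes the probability of recommending a suboptimal action, and hence the expected simple regret, decay exponentially in the sampling budget.

```python
import numpy as np

def uniform_exploration_recommend(means, budget, rng):
    """Toy pure-exploration bandit: sample arms round-robin, then recommend
    the arm with the best empirical mean (rewards are Bernoulli(means[i]))."""
    n = len(means)
    pulls = np.zeros(n)
    reward_sums = np.zeros(n)
    for t in range(budget):
        arm = t % n                       # round-robin uniform exploration
        reward_sums[arm] += rng.random() < means[arm]
        pulls[arm] += 1
    return int(np.argmax(reward_sums / np.maximum(pulls, 1)))

def expected_simple_regret(means, budget, trials, seed=0):
    """Monte-Carlo estimate of E[max(means) - means[recommended arm]]."""
    rng = np.random.default_rng(seed)
    best = max(means)
    gaps = [best - means[uniform_exploration_recommend(means, budget, rng)]
            for _ in range(trials)]
    return float(np.mean(gaps))

if __name__ == "__main__":
    means = [0.5, 0.6]
    for budget in (50, 100, 200, 400):
        print(budget, expected_simple_regret(means, budget, trials=2000))
```

Strategies tuned for cumulative regret instead concentrate samples on the empirically best action and, as the abstract notes, are only known to reduce simple regret at a polynomial rate.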