A Unified View of Large-scale Zero-sum Equilibrium Computation
The task of computing approximate Nash equilibria in large zero-sum
extensive-form games has received a tremendous amount of attention due mainly
to the Annual Computer Poker Competition. Immediately after its inception, two
competing and seemingly different approaches emerged---one an application of
no-regret online learning, the other a sophisticated gradient method applied to
a convex-concave saddle-point formulation. Since then, both approaches have
grown in relative isolation, with advancements on one side not affecting the
other. In this paper, we rectify this by dissecting and, in a sense, unifying the
two views.
Comment: AAAI Workshop on Computer Poker and Imperfect Information
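As a point of reference for how the no-regret view operates in the simplest setting, below is a minimal sketch (not taken from the paper) of self-play with Multiplicative Weights Update on a small zero-sum matrix game; the payoff matrix, step size, and iteration count are illustrative choices, and the averaged strategies approximate a Nash equilibrium of the matrix game.

```python
# Illustrative only: no-regret self-play with Multiplicative Weights Update on
# the zero-sum matrix game min_x max_y x^T A y. The payoff matrix and step size
# are made-up examples, not values from the paper.
import numpy as np

A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])   # rock-paper-scissors losses for the row player

T, eta = 5000, 0.05
x = np.ones(3) / 3                   # row player's mixed strategy (minimizer)
y = np.ones(3) / 3                   # column player's mixed strategy (maximizer)
x_avg, y_avg = np.zeros(3), np.zeros(3)

for _ in range(T):
    loss_x = A @ y                   # row player's loss vector
    loss_y = -A.T @ x                # column player maximizes, so its loss is -A^T x
    x = x * np.exp(-eta * loss_x); x /= x.sum()
    y = y * np.exp(-eta * loss_y); y /= y.sum()
    x_avg += x; y_avg += y

x_avg /= T; y_avg /= T
# Exploitability of the averaged strategies; it approaches 0 at a Nash equilibrium.
gap = (x_avg @ A).max() - (A @ y_avg).min()
print("exploitability of average strategies:", gap)
```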
Local and adaptive mirror descents in extensive-form games
We study how to learn ε-optimal strategies in zero-sum imperfect
information games (IIG) with trajectory feedback. In this setting, players
update their policies sequentially based on their observations over a fixed
number of episodes, denoted by T. Existing procedures suffer from high
variance due to the use of importance sampling over sequences of actions
(Steinberger et al., 2020; McAleer et al., 2022). To reduce this variance, we
consider a fixed sampling approach, where players still update their policies
over time, but with observations obtained through a given fixed sampling
policy. Our approach is based on an adaptive Online Mirror Descent (OMD)
algorithm that applies OMD locally to each information set, using individually
decreasing learning rates and a regularized loss. We show that this approach
guarantees a convergence rate of Õ(1/√T) with high
probability and has a near-optimal dependence on the game parameters when
applied with the best theoretical choices of learning rates and sampling
policies. To achieve these results, we generalize the notion of OMD
stabilization, allowing for time-varying regularization with convex increments.
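To make the building block concrete, here is a minimal sketch of entropic Online Mirror Descent on a single probability simplex with individually decreasing learning rates η_t = c/√t; the losses are random stand-ins, and the per-information-set structure, regularized loss, and fixed sampling policy described in the abstract are not reproduced.

```python
# A minimal sketch of entropic Online Mirror Descent on the simplex with a
# decreasing learning rate; losses are random placeholders, not game feedback.
import numpy as np

rng = np.random.default_rng(0)
n, T, c = 4, 1000, 1.0
p = np.ones(n) / n                       # current policy on the simplex

for t in range(1, T + 1):
    loss = rng.uniform(0.0, 1.0, size=n) # stand-in loss estimate for this episode
    eta_t = c / np.sqrt(t)               # individually decreasing learning rate
    # Entropic OMD step: argmin_q  eta_t * <loss, q> + KL(q || p)
    p = p * np.exp(-eta_t * loss)
    p /= p.sum()

print("final policy:", np.round(p, 3))
```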
Online Sequential Decision-Making with Unknown Delays
In the field of online sequential decision-making, we address the problem of
delayed feedback within the framework of online convex optimization (OCO), where
the feedback for a decision can arrive with an unknown delay. Unlike previous
research, which is limited to the Euclidean norm and to gradient feedback, we propose
three families of delayed algorithms based on approximate solutions to handle
different types of received feedback. Our proposed algorithms are versatile and
applicable under general norms. Specifically, we introduce a family of Follow
the Delayed Regularized Leader algorithms for feedback with full information on
the loss function, a family of Delayed Mirror Descent algorithms for feedback
with gradient information on the loss function and a family of Simplified
Delayed Mirror Descent algorithms for feedback with the value information of
the loss function's gradients at corresponding decision points. For each type
of algorithm, we provide corresponding regret bounds under cases of general
convexity and relative strong convexity, respectively. We also demonstrate the
efficiency of each algorithm under different norms through concrete examples.
Furthermore, our theoretical results are consistent with the current best
bounds when specialized to the standard, non-delayed setting.
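Below is a minimal sketch of the delayed-feedback setting, assuming a Euclidean mirror map (so the mirror-descent step reduces to plain gradient descent); the quadratic losses, random delays, and step size are illustrative stand-ins and do not follow the paper's algorithms.

```python
# Illustrative sketch of OCO with delayed gradient feedback: the gradient
# computed at round s only becomes available d_s rounds later, and the learner
# applies whatever feedback arrives each round. Losses, delays, and step size
# are made up for the example.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
dim, T, eta = 3, 200, 0.05
x = np.zeros(dim)
arrivals = defaultdict(list)             # round -> gradients that arrive at that round

for t in range(T):
    target = rng.normal(size=dim)        # loss f_t(x) = 0.5 * ||x - target||^2
    grad = x - target                    # gradient at the point actually played
    delay = int(rng.integers(0, 5))      # feedback delay, unknown to the learner in advance
    arrivals[t + delay].append(grad)
    for g in arrivals.pop(t, []):        # apply whatever feedback arrives this round
        x = x - eta * g
```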
Efficient Last-iterate Convergence Algorithms in Solving Games
No-regret algorithms are popular for learning Nash equilibrium (NE) in
two-player zero-sum normal-form games (NFGs) and extensive-form games (EFGs).
Many recent works study no-regret algorithms with last-iterate convergence.
Among them, the two most famous algorithms are Optimistic Gradient Descent
Ascent (OGDA) and Optimistic Multiplicative Weight Update (OMWU). However, OGDA
has high per-iteration complexity. OMWU exhibits a lower per-iteration
complexity but poorer empirical performance, and its convergence holds only
when the NE is unique. Recent works propose a Reward Transformation (RT) framework
for MWU, which removes the uniqueness condition and achieves competitive
performance with OMWU. Unfortunately, RT-based algorithms perform worse than
OGDA under the same number of iterations, and their convergence guarantee is
based on the continuous-time feedback assumption, which does not hold in most
scenarios. To address these issues, we provide a closer analysis of the RT
framework, which holds for both continuous and discrete-time feedback. We
demonstrate that the essence of the RT framework is to transform the problem of
learning NE in the original game into a series of strongly convex-concave
optimization problems (SCCPs). We show that the bottleneck of RT-based
algorithms is the speed of solving SCCPs. To improve their empirical
performance, we design a novel transformation method that enables the SCCPs to be
solved by Regret Matching+ (RM+), a no-regret algorithm with better empirical
performance, resulting in Reward Transformation RM+ (RTRM+). RTRM+ enjoys
last-iterate convergence under the discrete-time feedback setting. Using the
counterfactual regret decomposition framework, we propose Reward Transformation
CFR+ (RTCFR+) to extend RTRM+ to EFGs. Experimental results show that our
algorithms significantly outperform existing last-iterate convergence
algorithms and RM+ (CFR+).
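For context, here is a minimal sketch of plain Regret Matching+ self-play on a small zero-sum matrix game, the normal-form primitive that RTRM+ and RTCFR+ build on; the payoff matrix and iteration count are illustrative, and the reward-transformation step itself is not reproduced.

```python
# Illustrative only: Regret Matching+ (RM+) self-play on a small zero-sum matrix
# game. Average strategies converge to an approximate Nash equilibrium; the
# payoff matrix is a stand-in and the paper's reward transformation is omitted.
import numpy as np

A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])            # matching pennies; the row player minimizes x^T A y
T = 5000
Qx, Qy = np.zeros(2), np.zeros(2)       # nonnegative cumulative regrets (the "+" in RM+)
x_sum, y_sum = np.zeros(2), np.zeros(2)

def normalize(q):
    s = q.sum()
    return q / s if s > 0 else np.ones_like(q) / len(q)

for _ in range(T):
    x, y = normalize(Qx), normalize(Qy)
    value = x @ A @ y
    Qx = np.maximum(Qx + (value - A @ y), 0.0)   # row player: regret of each pure action
    Qy = np.maximum(Qy + (x @ A - value), 0.0)   # column player (the maximizer)
    x_sum += x; y_sum += y

x_avg, y_avg = x_sum / T, y_sum / T     # averaged strategies approximate a Nash equilibrium
print("average strategies:", x_avg, y_avg)
```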