We revisit the domain of off-policy policy optimization in RL from the
perspective of coordinate ascent. One commonly-used approach is to leverage the
off-policy policy gradient to optimize a surrogate objective -- the expected
total discounted return of the target policy with respect to the state
distribution of the behavior policy. However, this approach has been shown to
suffer from the distribution mismatch issue, and significant effort is therefore
needed to correct this mismatch, either via state distribution
correction or a counterfactual method. In this paper, we rethink off-policy
learning via Coordinate Ascent Policy Optimization (CAPO), an off-policy
actor-critic algorithm that decouples policy improvement from the state
distribution of the behavior policy without using the policy gradient. This
design obviates the need for distribution correction or importance sampling in
the policy improvement step of off-policy policy gradient methods. We establish the
global convergence of CAPO with general coordinate selection and then further
quantify the convergence rates of several instances of CAPO with popular
coordinate selection rules, including the cyclic and the randomized variants of
CAPO. We then extend CAPO to neural policies for a more practical
implementation. Through experiments, we demonstrate that CAPO provides a
competitive approach to RL in practice.
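To make the coordinate-ascent view concrete, the following is a minimal, hypothetical Python sketch of a tabular coordinate-style policy update: a single softmax logit theta[s, a] is adjusted at a time using only the sign of an off-policy advantage estimate, with no importance weights or state distribution correction. The sign-based rule, the step size, and the toy advantage estimates are illustrative assumptions, not the exact CAPO update or the coordinate selection analysis from the paper.

```python
import numpy as np

# Hypothetical tabular coordinate-ascent policy update (illustrative only).
# theta holds softmax logits of shape (num_states, num_actions); the policy is
# pi(a|s) = softmax(theta[s]). One "coordinate" is a single (s, a) logit.

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def coordinate_update(theta, s, a, advantage, step_size=0.5):
    """Ascend the single coordinate theta[s, a] using only the sign of an
    off-policy advantage estimate (assumed rule, for illustration)."""
    theta = theta.copy()
    theta[s, a] += step_size * np.sign(advantage)
    return theta

# Usage: sweep over (s, a) coordinates deterministically (a cyclic-style rule),
# updating one coordinate at a time with a stand-in critic estimate.
rng = np.random.default_rng(0)
theta = np.zeros((4, 2))
for s in range(4):
    for a in range(2):
        adv_hat = rng.normal()        # stand-in for a critic's advantage estimate
        theta = coordinate_update(theta, s, a, adv_hat)
print(softmax(theta[0]))              # resulting policy at state 0
```

Cycling deterministically over the (s, a) coordinates corresponds to a cyclic selection rule, while drawing coordinates at random corresponds to a randomized one; these are the kinds of coordinate selection rules for which the abstract states convergence rates.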