In recent years, by leveraging more data, computation, and diverse tasks,
learned optimizers have achieved remarkable success in supervised learning,
outperforming classical hand-designed optimizers. Reinforcement learning (RL)
is fundamentally different from supervised learning, and in practice these
learned optimizers do not work well even in simple RL tasks. We investigate
this phenomenon and identify two issues. First, the agent-gradients are not
independent and identically distributed (non-i.i.d.), which leads to
inefficient meta-training. Second, due to highly stochastic agent-environment
interactions, the agent-gradients exhibit high bias and variance, which
increases the difficulty of learning an optimizer for RL. We propose pipeline training
and a novel optimizer structure with a good inductive bias to address these
issues, making it possible to learn an optimizer for reinforcement learning
from scratch. We show that, although only trained in toy tasks, our learned
optimizer can generalize to unseen complex tasks in Brax.

Published at RLC 2024. For the code release, see
https://github.com/sail-sg/optim4r
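As a rough illustration of what "learning an optimizer" means in this setting, the sketch below shows a minimal per-parameter learned optimizer in JAX: a small MLP maps gradient statistics of the agent's parameters to update steps. The feature set, network sizes, and function names are assumptions made for illustration only; they are not the optimizer structure or pipeline-training scheme proposed in the paper.

    # Minimal sketch of a per-parameter learned optimizer (illustrative only).
    import jax
    import jax.numpy as jnp

    def init_optimizer_params(key, hidden=16):
        # Small MLP mapping per-parameter gradient features -> update step.
        k1, k2 = jax.random.split(key)
        return {
            "w1": jax.random.normal(k1, (2, hidden)) * 0.1,
            "b1": jnp.zeros(hidden),
            "w2": jax.random.normal(k2, (hidden, 1)) * 0.01,
            "b2": jnp.zeros(1),
        }

    def learned_update(opt_params, grad, momentum):
        # Per-parameter features: the raw gradient and a momentum statistic.
        feats = jnp.stack([grad, momentum], axis=-1)            # (..., 2)
        h = jnp.tanh(feats @ opt_params["w1"] + opt_params["b1"])
        delta = (h @ opt_params["w2"] + opt_params["b2"])[..., 0]
        return delta                                            # same shape as grad

    def apply_learned_optimizer(opt_params, agent_params, grads, momenta, beta=0.9):
        # Track an exponential moving average of gradients, then apply the
        # learned per-parameter update to every leaf of the agent's parameters.
        new_momenta = jax.tree_util.tree_map(
            lambda m, g: beta * m + (1 - beta) * g, momenta, grads)
        new_params = jax.tree_util.tree_map(
            lambda p, g, m: p - learned_update(opt_params, g, m),
            agent_params, grads, new_momenta)
        return new_params, new_momenta

In meta-training, the MLP weights (opt_params) would themselves be optimized so that agents updated this way improve quickly; the paper's contribution concerns how to make that meta-training work for RL, which this sketch does not cover.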