In-context learning is a promising approach to online policy learning for
offline reinforcement learning (RL) methods, as learning can occur at inference
time without gradient optimization. However, this approach is hindered by
significant computational costs arising from the need to gather large sets of
training trajectories and to train large Transformer models. We address this
challenge by introducing an In-context Exploration-Exploitation (ICEE)
algorithm, designed to optimize the efficiency of in-context policy learning.
Unlike existing models, ICEE performs an exploration-exploitation trade-off at
inference time within a Transformer model, without the need for explicit
Bayesian inference. Consequently, ICEE can solve Bayesian optimization problems
as efficiently as Gaussian process biased methods do, but in significantly less
time. Through experiments in grid world environments, we demonstrate that ICEE
can learn to solve new RL tasks using only tens of episodes, marking a
substantial improvement over the hundreds of episodes needed by the previous
in-context learning method.