Model-based offline reinforcement learning (RL), which learns a supervised transition model from a logged dataset to avoid costly interactions with the online environment, is a promising approach to offline policy optimization. Because the discrepancy between the logged data and the online environment can cause a distributional shift, many prior works have studied how to conservatively build robust transition models and how to estimate model uncertainty accurately. However, over-conservatism can limit the agent's exploration, and the uncertainty estimates may be unreliable. In
this work, we propose a novel Model-based Offline policy optimization framework
with Adversarial Network (MOAN). The key idea is to use adversarial learning to
build a transition model with better generalization, where an adversary is
introduced to distinguish between in-distribution and out-of-distribution
samples. Moreover, the adversary naturally provides a quantification of the model's uncertainty, with theoretical guarantees. Extensive experiments show that our approach outperforms existing state-of-the-art baselines on widely studied offline RL benchmarks. It can also generate diverse in-distribution
samples and quantify the uncertainty more accurately.
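As a purely illustrative companion to the abstract, the sketch below shows one way such an adversarial transition model could be set up in PyTorch: a transition model is fit to logged transitions while trying to fool an adversary that separates in-distribution from out-of-distribution samples, and the adversary's score is reused as an uncertainty estimate. All names, network sizes, and loss terms here are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn

# Illustrative dimensions only; a real task would use the environment's sizes.
STATE_DIM, ACTION_DIM, HIDDEN = 17, 6, 256

class TransitionModel(nn.Module):
    # Predicts the next state from (state, action); plays the "generator" role.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, STATE_DIM))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class Adversary(nn.Module):
    # Scores (s, a, s') tuples: a high logit means "looks in-distribution".
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * STATE_DIM + ACTION_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 1))

    def forward(self, s, a, s_next):
        return self.net(torch.cat([s, a, s_next], dim=-1))

model, adversary = TransitionModel(), Adversary()
opt_m = torch.optim.Adam(model.parameters(), lr=3e-4)
opt_d = torch.optim.Adam(adversary.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(s, a, s_next):
    # Adversary update: logged transitions labeled 1, model rollouts labeled 0.
    with torch.no_grad():
        s_fake = model(s, a)
    real = adversary(s, a, s_next)
    fake = adversary(s, a, s_fake)
    loss_d = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Model update: fit the data while producing samples the adversary accepts.
    s_pred = model(s, a)
    logits = adversary(s, a, s_pred)
    loss_m = ((s_pred - s_next) ** 2).mean() + bce(logits, torch.ones_like(logits))
    opt_m.zero_grad()
    loss_m.backward()
    opt_m.step()

def uncertainty(s, a, s_pred):
    # The adversary's probability of "out-of-distribution" doubles as an
    # uncertainty estimate for a model-generated transition.
    with torch.no_grad():
        return 1.0 - torch.sigmoid(adversary(s, a, s_pred))

During policy optimization, such an uncertainty score could be used to penalize rewards on model-generated rollouts, in the spirit of the uncertainty quantification described above.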