Many potential applications of reinforcement learning (RL) require guarantees
that the agent will perform well in the face of disturbances to the dynamics or
reward function. In this paper, we prove theoretically that standard maximum
entropy RL is robust to some disturbances in the dynamics and the reward
function. While this capability of MaxEnt RL has been observed empirically in
prior work, to the best of our knowledge, our work provides the first rigorous
proof and theoretical characterization of the MaxEnt RL robust set. While a
number of prior robust RL algorithms have been designed to handle similar
disturbances to the reward function or dynamics, these methods typically
require additional moving parts and hyperparameters on top of a base RL
algorithm. In contrast, our theoretical results suggest that MaxEnt RL by
itself is robust to certain disturbances, without requiring any additional
modifications. While this does not imply that MaxEnt RL is the best available
robust RL method, MaxEnt RL does possess a striking simplicity and appealing
formal guarantees.

Comment: Blog post and videos: https://bair.berkeley.edu/blog/2021/03/10/maxent-robust-rl/. arXiv admin note: text overlap with arXiv:1910.0191
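For context, "standard maximum entropy RL" above refers to RL with an entropy-regularized objective. A common formulation (the notation below is a standard one assumed for illustration, not taken from this abstract) is

\[
\pi^{\ast}_{\mathrm{MaxEnt}} \;=\; \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t} \gamma^{t}\Bigl(r(s_t, a_t) \;+\; \alpha\,\mathcal{H}\bigl[\pi(\cdot \mid s_t)\bigr]\Bigr)\right],
\]

where \(\alpha > 0\) is the entropy temperature and \(\mathcal{H}\) denotes the entropy of the policy at each visited state; the robustness claims above concern policies that optimize this kind of entropy-augmented return.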