In this paper, we study the problem of (finite-horizon tabular) Markov
decision processes (MDPs) with heavy-tailed rewards under the constraint of
differential privacy (DP). Compared with previous studies on private
reinforcement learning, which typically assume that rewards are sampled from
bounded or sub-Gaussian distributions to ensure DP, we consider the setting
where reward distributions have only finite (1+v)-th moments for some
v ∈ (0, 1]. By resorting to robust mean estimators for rewards, we first propose
two frameworks for heavy-tailed MDPs: one for value iteration and the other
for policy optimization. Under each framework, we consider both
joint differential privacy (JDP) and local differential privacy (LDP) models.
Based on our frameworks, we provide regret upper bounds for both JDP and LDP
cases and show that both the moment of the reward distribution and the
privacy budget have significant impacts on the regret. Finally, we establish
a regret lower bound for heavy-tailed MDPs in the JDP model by reducing it to
the instance-independent lower bound for heavy-tailed multi-armed bandits in
the DP model. We also establish a lower bound for the problem in the LDP model
by adopting private minimax methods. Our results reveal fundamental
differences between private RL with sub-Gaussian rewards and private RL with
heavy-tailed rewards.

Comment: ICML 2023.
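As a concrete illustration of the robust-mean-plus-privacy idea summarized above, the following is a minimal sketch (our own illustration, not the paper's algorithm) of a truncated-mean reward estimator with Laplace noise for epsilon-DP; the function name, truncation threshold, and noise calibration are assumptions chosen for exposition.

```python
import numpy as np

def private_truncated_mean(rewards, v, u, epsilon, rng=None):
    """Sketch of a robust, epsilon-DP mean estimate for heavy-tailed rewards.

    Assumes only a finite (1+v)-th moment bound u, i.e. E[|r|^(1+v)] <= u,
    with v in (0, 1]. Not the paper's exact estimator; for illustration only.
    """
    rng = np.random.default_rng() if rng is None else rng
    rewards = np.asarray(rewards, dtype=float)
    n = len(rewards)
    # Truncation threshold trading off bias (from clipping) against variance,
    # calibrated to the (1+v)-th moment bound u.
    tau = (u * n / np.log(2.0 * n)) ** (1.0 / (1.0 + v))
    clipped = np.clip(rewards, -tau, tau)
    # After clipping, each sample lies in [-tau, tau], so the empirical mean
    # has L1-sensitivity 2 * tau / n; Laplace noise then yields epsilon-DP.
    noise = rng.laplace(scale=2.0 * tau / (n * epsilon))
    return clipped.mean() + noise
```

In the frameworks described above, a private robust estimate of this kind would stand in for the empirical reward mean used in value iteration or policy optimization updates; the estimators and noise mechanisms in the paper itself differ between the JDP and LDP settings.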