Deep reinforcement learning (RL) has shown remarkable success in specific
offline decision-making scenarios, yet its theoretical guarantees are still
under development. Existing work on offline RL theory focuses primarily on a few stylized settings, such as linear MDPs or general function approximation under strong assumptions and independent data, which offer little guidance for practical use.
The coupling of deep learning with Bellman residuals makes this problem challenging, and the dependence within the data adds further difficulty. In this paper,
we establish a non-asymptotic estimation error bound for pessimistic offline RL with general neural network approximation and C-mixing data; under mild assumptions, the bound is explicit in the network structure, the dimension of the dataset, and the concentrability of the data coverage. Our result shows that the estimation
error consists of two parts: the first converges to zero at a desired rate in the sample size, with partially controllable concentrability, and the second
becomes negligible if the residual constraint is tight. This result
demonstrates the explicit efficiency of deep adversarial offline RL frameworks.
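As a schematic illustration of this two-part decomposition (the symbols below are placeholders for readability, not the paper's exact notation), the bound takes the form
$$
\mathrm{Err}(\widehat{\pi}) \;\lesssim\; \underbrace{C^{\ast}\, n^{-\alpha}}_{\text{statistical error}} \;+\; \underbrace{\varepsilon_{\mathrm{res}}}_{\text{residual slack}},
$$
where $n$ is the sample size, $C^{\ast}$ is a partially controllable concentrability coefficient, $\alpha > 0$ is a rate determined by the network structure and the data dimension, and $\varepsilon_{\mathrm{res}}$ vanishes as the empirical Bellman residual constraint is tightened.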
To achieve this, we utilize empirical process tools for C-mixing sequences and neural network approximation theory for the Hölder class (recalled below). We also develop methods to bound the Bellman estimation error caused by
function approximation with empirical Bellman constraint perturbations.
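For reference, one standard form of the Hölder class invoked above (stated here as background, not necessarily the paper's exact definition): for smoothness $\beta = s + r$ with $s \in \mathbb{N}_0$ and $r \in (0, 1]$,
$$
\mathcal{H}^{\beta}(B) = \Big\{ f : \max_{|\kappa| \le s} \|\partial^{\kappa} f\|_{\infty} + \max_{|\kappa| = s} \sup_{x \neq y} \frac{|\partial^{\kappa} f(x) - \partial^{\kappa} f(y)|}{\|x - y\|^{r}} \le B \Big\},
$$
where $\kappa$ ranges over multi-indices; smoother targets (larger $\beta$) admit faster neural network approximation rates.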
Additionally, we present a result that lessens the curse of dimensionality when the data have low intrinsic dimension and the function class has low complexity, as sketched below.
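As an illustrative sketch only (the symbols are placeholders, and the paper's exact exponents may differ), such results typically replace the ambient data dimension $d$ in the convergence rate with the intrinsic dimension $d_0 \ll d$; for a $\beta$-Hölder target this takes the familiar nonparametric form
$$
n^{-\frac{\beta}{2\beta + d_0}} \quad \text{in place of} \quad n^{-\frac{\beta}{2\beta + d}},
$$
so the rate is governed by the complexity of the data support rather than its ambient representation.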
Our estimation error bound provides valuable insights into the development of deep offline RL and guidance for algorithm and model design.

Comment: Full version of the paper accepted to the 38th Annual AAAI Conference on Artificial Intelligence (AAAI 2024).