The widespread use of deep reinforcement learning (DRL) policy networks in diverse continuous control tasks has raised questions about performance degradation in expanded state spaces, where the input state norm exceeds that encountered in the training environment. This paper uncovers the underlying factors behind this deterioration through a novel analysis built on state division. In contrast to prior approaches that employ state division merely as a post-hoc explanatory tool, our methodology probes the intrinsic characteristics of DRL policy networks. Specifically, we demonstrate that expanding the state space drives the tanh activation function into saturation, which transforms the state division boundary from nonlinear to linear.
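As an illustrative sketch of this mechanism (our own toy construction with a randomly initialized two-layer tanh network, not the trained policies studied in the paper), one can trace where a 1-D action changes sign over a 2-D state space: as the state radius grows, the sign-flip angles converge, meaning the division boundary straightens into rays from the origin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer tanh policy (hypothetical random weights): 2-D state -> 1-D action.
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def policy(s):
    h = np.tanh(W1 @ s + b1)        # hidden tanh units saturate for large ||s||
    return np.tanh(W2 @ h + b2)[0]  # scalar action in (-1, 1)

# Find the state-division boundary (angles where the action changes sign)
# on circles of increasing radius. As r grows, tanh(W1 @ s + b1) approaches
# sign(W1 @ s), a function of direction only, so the flip angles converge:
# the boundary straightens into rays through the origin.
thetas = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
for r in (0.5, 2.0, 10.0, 100.0):
    acts = np.array([policy(r * np.array([np.cos(t), np.sin(t)])) for t in thetas])
    flips = thetas[:-1][np.sign(acts[:-1]) != np.sign(acts[1:])]
    print(f"r = {r:6.1f}: sign-flip angles ~ {np.round(flips, 3)}")
```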
Our analysis centers on the canonical double-integrator system and reveals that this gradual shift toward linearity produces control behavior reminiscent of bang-bang control. However, the linearity of the division boundary prevents the policy from realizing ideal bang-bang control, introducing unavoidable overshoot.
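To make the overshoot argument concrete, the following sketch (again our own construction, with the linear switching line x + k*v = 0 assumed as a stand-in for the saturated policy's linear division boundary) simulates the double integrator x'' = u under both the time-optimal parabolic switching curve and the linear line; only the linear boundary drives the state past the origin.

```python
import numpy as np

def simulate(switch, x0=5.0, dt=1e-3, T=20.0):
    """Double integrator x'' = u with u = -sign(switch(x, v)), |u| <= 1."""
    x, v = x0, 0.0
    xs = []
    for _ in range(int(T / dt)):
        u = -np.sign(switch(x, v))
        v += u * dt   # semi-implicit Euler step
        x += v * dt
        xs.append(x)
    return np.array(xs)

# Ideal time-optimal bang-bang switches on the parabolic curve x + v|v|/2 = 0
# and brings the state to rest at the origin without overshoot...
ideal = simulate(lambda x, v: x + 0.5 * v * abs(v))
# ...whereas a linear switching line x + k*v = 0 (our stand-in for the
# saturated policy's division boundary) is crossed at high speed, so the
# state shoots past the origin before spiraling back.
linear = simulate(lambda x, v: x + 0.8 * v)

print("overshoot past origin, parabolic boundary:", max(0.0, -ideal.min()))
print("overshoot past origin, linear boundary   :", max(0.0, -linear.min()))
```

With these parameters the linear boundary overshoots by roughly one unit, while the parabolic boundary overshoots only at the integration-error scale.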
Our experimental investigations with diverse RL algorithms establish that this phenomenon stems from inherent attributes of the DRL policy network and remains consistent across optimization algorithms.