In goal-conditioned reinforcement learning (GCRL), sparse rewards present
significant challenges, often hindering efficient learning. Although
multi-step GCRL can improve learning efficiency, it can also introduce off-policy
biases in target values. This paper analyzes these biases in depth, classifying
them into two distinct types: "shooting" and "shifting". Recognizing that
certain behavior policies can hasten policy refinement, we present solutions
designed to capitalize on the positive aspects of these biases while minimizing
their drawbacks, enabling the use of larger step sizes to speed up GCRL. An
empirical study demonstrates that our approach delivers robust
improvement, even in ten-step learning scenarios, yielding superior learning
efficiency and performance that generally surpass the baseline and several
state-of-the-art multi-step GCRL benchmarks.