In recent years, data-driven reinforcement learning (RL), also known as offline RL, has gained significant attention. However, the role of data sampling techniques in offline RL has been largely overlooked, despite their demonstrated potential to enhance performance in online RL. Recent research suggests that applying sampling techniques directly to individual state transitions does not consistently improve performance in offline RL. Therefore, in this study, we propose a memory
technique, (Prioritized) Trajectory Replay (TR/PTR), which extends the sampling
perspective to trajectories for more comprehensive information extraction from
limited data. TR enhances learning efficiency by sampling trajectories backward, which makes better use of information from subsequent states. Building on TR, we construct a weighted critic target that avoids sampling unseen actions during offline training, and Prioritized Trajectory Replay (PTR), which enables more efficient trajectory sampling by prioritizing trajectories according to various trajectory priority metrics. We demonstrate the benefits of integrating TR and PTR with existing offline RL algorithms on the D4RL benchmark. In summary, our research emphasizes the
significance of trajectory-based data sampling techniques in enhancing the
efficiency and performance of offline RL algorithms.
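As a rough illustration of the trajectory-level sampling idea summarized above, the following Python sketch stores whole trajectories, samples one with probability proportional to a trajectory-level priority (here the undiscounted return, one possible metric among several), and replays its transitions in backward order. The class and method names are hypothetical and the snippet is only a minimal sketch of the mechanism under these assumptions, not the paper's implementation.

```python
import numpy as np

class TrajectoryReplay:
    """Illustrative trajectory-level replay buffer.

    Stores complete trajectories rather than independent transitions,
    samples a trajectory with probability proportional to a trajectory-level
    priority, and replays its transitions in reverse (backward) order so that
    each update can reuse freshly updated value estimates of successor states.
    """

    def __init__(self):
        self.trajectories = []  # each entry: list of (s, a, r, s_next, done) tuples
        self.priorities = []    # one scalar priority per trajectory

    def add_trajectory(self, transitions):
        # The undiscounted return serves as the priority metric here;
        # other trajectory metrics could be substituted.
        self.trajectories.append(list(transitions))
        self.priorities.append(sum(t[2] for t in transitions))

    def sample_backward(self, rng=None):
        rng = rng or np.random.default_rng()
        p = np.asarray(self.priorities, dtype=np.float64)
        p = p - p.min() + 1e-6            # shift to strictly positive weights
        idx = rng.choice(len(self.trajectories), p=p / p.sum())
        # Yield transitions from the end of the trajectory back to its start.
        yield from reversed(self.trajectories[idx])


# Usage: update the critic on each transition, last state first.
buffer = TrajectoryReplay()
buffer.add_trajectory([(0, 1, 0.0, 1, False), (1, 0, 1.0, 2, True)])
for s, a, r, s_next, done in buffer.sample_backward():
    pass  # a critic update would go here, using the successor's refreshed value
```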