Prioritized offline Goal-swapping Experience Replay
In goal-conditioned offline reinforcement learning, an agent learns from
previously collected data to reach an arbitrary goal. Since the offline data
contains only a finite number of trajectories, a key challenge is how to
generate more data. Goal-swapping generates additional data by switching
trajectory goals, but in doing so it also produces a large number of invalid
trajectories. To address this issue, we propose prioritized goal-swapping
experience replay (PGSER). PGSER uses a pre-trained Q function to assign higher
priority weights to goal-swapped transitions that allow reaching the goal. In
experiments, PGSER significantly improves over baselines in a wide range of
benchmark tasks, including challenging previously unsuccessful dexterous
in-hand manipulation tasks.
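The core idea of the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the transition format, the `q_fn` interface, and the normalization of priorities into sampling weights are all assumptions made here for clarity.

```python
import random

def goal_swap_with_priority(transitions, q_fn):
    """Illustrative sketch of prioritized goal swapping (hypothetical API).

    transitions: list of (state, action, next_state, goal) tuples.
    q_fn(state, action, goal) -> non-negative float from a pre-trained
    Q function; a higher value suggests the swapped goal is reachable.
    """
    goals = [g for (_, _, _, g) in transitions]
    prioritized = []
    for (s, a, s_next, _) in transitions:
        # Swap in a goal drawn from another trajectory in the dataset.
        new_goal = random.choice(goals)
        # Use the pre-trained Q estimate as the transition's priority:
        # goal-swapped transitions that still allow reaching the goal
        # receive higher weight, invalid ones are down-weighted.
        priority = q_fn(s, a, new_goal)
        prioritized.append(((s, a, s_next, new_goal), priority))
    # Normalize priorities into sampling weights for experience replay.
    total = sum(p for (_, p) in prioritized) or 1.0
    return [(t, p / total) for (t, p) in prioritized]
```

In a replay buffer, these weights would bias sampling toward plausible goal-swapped transitions instead of treating all swapped data as equally valid.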