Our work is a simple extension of the paper "Exploration by Random Network
Distillation". More specifically, we show how to efficiently combine intrinsic
rewards with experience replay in order to achieve more efficient and robust
exploration than PPO/RND, and consequently better results in terms of agent
performance and sample efficiency. We achieve this with a new technique,
Prioritized Oversampled Experience Replay (POER), built upon a definition of
which experience is important to replay. Finally, we evaluate our technique on
the well-known Atari game Montezuma's
Revenge and on other hard-exploration Atari games.

Comment: 8 pages, 6 figures, accepted as a full paper at the IEEE Conference on
Games (CoG) 2019
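
The abstract names POER but does not spell out its mechanics. As a rough,
assumption-laden illustration of the general idea, a replay buffer that
oversamples transitions judged important might look like the sketch below.
The class name, the priority heuristic (magnitude of the RND-style intrinsic
reward), and the sampling-with-replacement scheme are all illustrative
assumptions, not the paper's actual algorithm.

```python
import random
from collections import deque

class PrioritizedOversamplingBuffer:
    """Minimal sketch: replay buffer that oversamples 'important' transitions.

    Assumption: importance is proportional to the intrinsic (novelty) reward
    a transition produced; the paper may define importance differently.
    """

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)       # stored transitions
        self.priorities = deque(maxlen=capacity)   # matching sampling weights

    def add(self, transition, intrinsic_reward):
        # Higher intrinsic reward -> higher replay priority; the small
        # constant keeps every transition sampleable.
        self.buffer.append(transition)
        self.priorities.append(abs(intrinsic_reward) + 1e-6)

    def sample(self, batch_size):
        # Oversampling: draw with replacement, with probability proportional
        # to priority, so novel transitions recur more often in a batch.
        return random.choices(self.buffer, weights=self.priorities,
                              k=batch_size)
```

Under this (assumed) scheme, transitions that surprised the RND predictor are
replayed disproportionately often, which is one plausible way to bias learning
toward the experience most useful for exploration.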