On the Importance of Feature Decorrelation for Unsupervised Representation Learning in Reinforcement Learning
Recently, unsupervised representation learning (URL) has improved the sample
efficiency of Reinforcement Learning (RL) by pretraining a model from a large
unlabeled dataset. The underlying principle of these methods is to learn
temporally predictive representations by predicting future states in the latent
space. However, a key challenge of this approach is representational
collapse, where the latent representations collapse onto a
low-dimensional manifold. To address this issue, we propose a novel URL
framework that causally predicts future states while increasing the dimension
of the latent manifold by decorrelating the features in the latent space.
Through extensive empirical studies, we demonstrate that our framework
effectively learns predictive representations without collapse, which
significantly improves the sample efficiency of state-of-the-art URL methods on
the Atari 100k benchmark. The code is available at
https://github.com/dojeon-ai/SimTPR.
Comment: Accepted to ICML 2023
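The abstract does not spell out the decorrelation objective, but a common way to decorrelate latent features is to penalize the off-diagonal entries of the empirical feature correlation matrix (as in Barlow-Twins-style losses). A minimal NumPy sketch of such a penalty, with illustrative names only and no claim to match the paper's exact loss:

```python
import numpy as np

def decorrelation_loss(z):
    """Penalize off-diagonal entries of the feature correlation matrix.

    z: (batch, dim) array of latent features.
    Returns a scalar that is near zero when features are decorrelated
    and grows as features become redundant (collapsed).
    """
    # Standardize each feature so the diagonal of the correlation matrix is 1.
    z = (z - z.mean(axis=0)) / (z.std(axis=0) + 1e-8)
    n = z.shape[0]
    corr = (z.T @ z) / n                     # empirical correlation matrix
    off_diag = corr - np.diag(np.diag(corr)) # keep only cross-feature terms
    return float((off_diag ** 2).sum())
```

Minimizing this term alongside the future-state prediction loss discourages the features from collapsing into a low-dimensional subspace, since redundant features produce large off-diagonal correlations.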
Pseudorehearsal in value function approximation
Catastrophic forgetting is of special importance in reinforcement learning,
as the data distribution is generally non-stationary over time. We study and
compare several pseudorehearsal approaches for Q-learning with function
approximation in a pole-balancing task. We find that pseudorehearsal
appears to assist learning even in such simple problems, provided the
rehearsal parameters are properly initialized.
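The core idea of pseudorehearsal is to sample random pseudo-states, record the network's current outputs on them, and rehearse those (state, output) pairs alongside real updates so the approximator is anchored to its old behavior. A minimal NumPy sketch for a linear Q-function; all names, hyperparameters, and the linear model are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def q_values(W, s):
    """Linear Q-function: one row of weights per action."""
    return W @ s

def make_pseudoitems(W, n_items, dim, rng):
    """Sample random pseudo-states and store the network's current outputs.

    Rehearsing these (state, output) pairs alongside real updates anchors
    the approximator to its old mapping, mitigating catastrophic forgetting.
    """
    states = rng.uniform(-1.0, 1.0, size=(n_items, dim))
    targets = states @ W.T  # current Q-values on the pseudo-states
    return states, targets

def td_update(W, s, a, td_target, pseudo=None, lr=0.05, rehearsal_weight=0.5):
    """One TD step on (s, a), optionally mixed with rehearsal regression steps."""
    W = W.copy()
    # TD step on the real transition.
    W[a] += lr * (td_target - q_values(W, s)[a]) * s
    if pseudo is not None:
        states, targets = pseudo
        for x, y in zip(states, targets):
            # Pull outputs on pseudo-states back toward the stored targets.
            W += rehearsal_weight * lr * np.outer(y - q_values(W, x), x)
    return W
```

Running repeated TD updates with the rehearsal term keeps the network's outputs on the pseudo-states closer to their stored values than updates without it, which is the forgetting-mitigation effect the abstract describes.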