Learning an effective reinforcement learning model for control tasks from
high-dimensional visual observations is a practical yet challenging problem. A
key to solving this problem is to learn low-dimensional state representations
from the observations, from which an effective policy can be learned. To boost
the learning of state encodings, recent works have focused on capturing
behavioral similarities between state representations or on applying data
augmentation to visual observations. In this paper, we propose a novel
meta-learner-based framework for learning state representations that capture
behavioral similarities for reinforcement learning. Specifically, our framework
encodes the high-dimensional observations into two decomposed embeddings
regarding reward and dynamics in a Markov Decision Process (MDP). A pair of
meta-learners is developed: one quantifies the reward similarity and the other
quantifies the dynamics similarity over the correspondingly decomposed
embeddings. The meta-learners are self-learned to update the state embeddings
by approximating two disjoint terms of the on-policy bisimulation metric. To
incorporate the reward and dynamics terms, we further develop a strategy to
adaptively balance their impacts according to the task or environment. We
empirically demonstrate that our proposed framework outperforms
state-of-the-art baselines on several benchmarks, including the conventional DM
Control Suite, the Distracting DM Control Suite, and a self-driving task in
CARLA.
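The following is a minimal, illustrative sketch (not the authors' released
implementation) of how a pair of meta-learners could be trained to approximate
the reward and dynamics terms of the on-policy bisimulation metric over the two
decomposed embeddings. The PyTorch framing, all names, and the fixed balancing
coefficient `beta` (a stand-in for the paper's adaptive balancing strategy) are
assumptions made for illustration only.

```python
# Sketch under assumed names; beta stands in for the paper's adaptive balancing.
import torch
import torch.nn as nn

class MetaLearner(nn.Module):
    """Small network scoring the similarity of a pair of decomposed embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z_i, z_j):
        return self.net(torch.cat([z_i, z_j], dim=-1)).squeeze(-1)

def bisim_loss(z_r_i, z_r_j, z_d_i, z_d_j, reward_ml, dyn_ml,
               r_i, r_j, w2_dist, gamma=0.99, beta=0.5):
    """Fit the two meta-learners to the two disjoint terms of the on-policy
    bisimulation metric d(s_i, s_j) = |r_i - r_j| + gamma * W2(P(.|s_i), P(.|s_j)).

    z_r_*: reward embeddings, z_d_*: dynamics embeddings,
    w2_dist: Wasserstein-2 distance between predicted next-state distributions
    (computed by the caller), beta: balancing weight between the two terms.
    """
    reward_target = (r_i - r_j).abs()   # reward term of the metric
    dyn_target = gamma * w2_dist        # dynamics term of the metric
    reward_pred = reward_ml(z_r_i, z_r_j)
    dyn_pred = dyn_ml(z_d_i, z_d_j)
    return beta * (reward_pred - reward_target).pow(2).mean() + \
           (1.0 - beta) * (dyn_pred - dyn_target).pow(2).mean()
```

Minimizing this loss with respect to both the meta-learners and the encoder
would drive the decomposed embeddings to reflect reward and dynamics
similarities separately, with the balancing coefficient weighting their
relative influence.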