Bayesian policy reuse (BPR) is a general policy transfer framework for
selecting a source policy from an offline library by inferring the task belief
from observation signals using a pre-trained observation model. In this
paper, we propose an improved BPR method to achieve more efficient policy
transfer in deep reinforcement learning (DRL). First, most BPR algorithms use
the episodic return as the observation signal, which carries limited information
and cannot be obtained until the end of an episode. Instead, we employ the
state transition sample, which is informative and instantaneous, as the
observation signal for faster and more accurate task inference. Second, BPR
algorithms usually require numerous samples to estimate the probability
distributions of a tabular observation model, which can be expensive or even
infeasible to learn and maintain, especially when the state transition sample
serves as the signal. Hence, we propose a scalable observation model
based on fitting state transition functions of source tasks from only a small
number of samples, which can generalize to any signal observed in the target
task. Moreover, we extend the offline-mode BPR to the continual learning
setting by expanding the scalable observation model in a plug-and-play fashion,
which avoids negative transfer when the agent faces new, unknown tasks.
Experimental results show that our method can consistently facilitate faster
and more efficient policy transfer.
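
To make the role of the transition-based observation model concrete, below is a minimal sketch (not the authors' implementation) of BPR-style task inference in Python. The class name TransitionBPR, the Gaussian likelihood with a fixed noise_std, and the add_task expansion hook are illustrative assumptions; the paper's actual model fitting and belief update may differ.

    # Minimal sketch (assumptions noted above): a belief over source tasks is
    # updated from (s, a, s') samples via Bayes' rule, using per-task fitted
    # transition models as a scalable observation model.
    import numpy as np

    class TransitionBPR:
        def __init__(self, transition_models, noise_std=0.1):
            # transition_models: list of callables f_k(s, a) -> predicted next state
            self.models = list(transition_models)
            self.noise_std = noise_std
            self.belief = np.full(len(self.models), 1.0 / len(self.models))

        def _likelihood(self, model, s, a, s_next):
            # Assumed Gaussian likelihood of the observed next state under model k
            err = np.asarray(s_next) - np.asarray(model(s, a))
            return np.exp(-0.5 * np.dot(err, err) / self.noise_std ** 2)

        def update(self, s, a, s_next):
            # Bayes update on every transition: posterior ∝ likelihood × prior
            lik = np.array([self._likelihood(m, s, a, s_next) for m in self.models])
            post = lik * self.belief
            total = post.sum()
            if total > 0:
                self.belief = post / total
            return self.belief

        def select_policy(self):
            # Greedily pick the source policy of the most probable task
            return int(np.argmax(self.belief))

        def add_task(self, new_model):
            # Plug-and-play expansion for a newly identified task: append its
            # fitted transition model and renormalize the belief.
            self.models.append(new_model)
            self.belief = np.append(self.belief, 1.0 / len(self.models))
            self.belief /= self.belief.sum()

Because the belief is refined after every state transition rather than once per episode, this kind of update illustrates why an instantaneous, informative signal can speed up task identification relative to episodic returns.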