Skill Transfer in Deep Reinforcement Learning under Morphological Heterogeneity
Transfer learning methods for reinforcement learning (RL) domains facilitate
the acquisition of new skills using previously acquired knowledge. The vast
majority of existing approaches assume that the agents share the same design,
e.g., the same shape and action spaces. In this paper we address the problem of
transferring previously acquired skills amongst morphologically different
agents (MDAs). For instance, once a bipedal agent has been trained to move
forward, could this skill be transferred to a one-legged hopper to make its
training on the same task more sample-efficient? We frame
this problem as one of subspace learning whereby we aim to infer latent factors
representing the control mechanism that is common between MDAs. We propose a
novel paired variational encoder-decoder model, PVED, that disentangles the
control of MDAs into shared and agent-specific factors. The shared factors are
then leveraged for skill transfer using RL. Theoretically, we derive a theorem
indicating how the performance of PVED depends on the shared factors and agent
morphologies. Experimentally, PVED has been extensively validated on four
MuJoCo environments. We demonstrate its performance compared to a
state-of-the-art approach and several ablations, visualize and interpret the
learned latent factors, and identify avenues for future improvement.
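The abstract does not spell out the architecture, but the core idea of a paired variational encoder-decoder with a latent space split into shared and agent-specific factors can be sketched. The following NumPy toy is an illustration only, not the paper's PVED implementation: the layer sizes, the 17-/11-dimensional observations (loosely echoing MuJoCo's Walker2d and Hopper), and the squared-error alignment penalty tying the shared factors of the two agents together are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Random-weight linear map standing in for a trained network layer.
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    b = np.zeros(out_dim)
    return lambda x: x @ W + b

class VariationalEncoder:
    """Maps an agent's observation to a diagonal Gaussian over a latent
    vector split into shared and agent-specific parts (hypothetical sizes)."""
    def __init__(self, obs_dim, shared_dim, specific_dim):
        latent = shared_dim + specific_dim
        self.shared_dim = shared_dim
        self.mu = linear(obs_dim, latent)
        self.log_var = linear(obs_dim, latent)

    def __call__(self, x):
        mu, log_var = self.mu(x), self.log_var(x)
        # Reparameterization trick: sample z while keeping it differentiable.
        z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)
        return z[..., :self.shared_dim], z[..., self.shared_dim:], mu, log_var

# Two morphologically different agents with their own encoders/decoders.
walker_enc = VariationalEncoder(obs_dim=17, shared_dim=4, specific_dim=3)
hopper_enc = VariationalEncoder(obs_dim=11, shared_dim=4, specific_dim=3)
walker_dec = linear(4 + 3, 17)  # per-agent decoder: latents -> reconstruction
hopper_dec = linear(4 + 3, 11)

x_w = rng.standard_normal((8, 17))  # paired mini-batches of observations
x_h = rng.standard_normal((8, 11))

zs_w, zp_w, mu_w, lv_w = walker_enc(x_w)
zs_h, zp_h, mu_h, lv_h = hopper_enc(x_h)

# Loss terms one might combine during training: per-agent reconstruction,
# KL to a standard-normal prior, and an alignment penalty encouraging the
# two agents' shared factors to occupy a common control subspace.
recon = np.mean((walker_dec(np.concatenate([zs_w, zp_w], -1)) - x_w) ** 2) \
      + np.mean((hopper_dec(np.concatenate([zs_h, zp_h], -1)) - x_h) ** 2)
kl = -0.5 * np.mean(1 + lv_w - mu_w**2 - np.exp(lv_w)) \
     - 0.5 * np.mean(1 + lv_h - mu_h**2 - np.exp(lv_h))
align = np.mean((zs_w - zs_h) ** 2)
loss = recon + kl + align
```

In this picture, only the shared factors (`zs_w`, `zs_h`) would be handed to the downstream RL policy for transfer, while the agent-specific factors absorb morphology-dependent detail.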