Learning from Demonstration (LfD) enables robots to acquire versatile skills
by learning motion policies from human demonstrations. It provides users with an
intuitive interface for transferring new skills to robots without time-consuming
robot programming or inefficient exploration of solutions. During task
execution, the robot's motion is usually constrained by the environment. In
light of this, task-parameterized LfD (TP-LfD) encodes
relevant contextual information into reference frames, enabling better skill
generalization to new situations. However, most TP-LfD algorithms require
multiple demonstrations across various environmental conditions to gather
sufficient statistics for a meaningful model, and it is not trivial for robot
users to create all such situations and demonstrate under each of them.
Therefore, this paper presents a novel algorithm for learning skills from only a
few demonstrations. By leveraging reference frame weights that capture the
importance or relevance of each frame during task execution, our method achieves
excellent skill acquisition performance, which is validated in
real robotic environments.

Comment: Accepted by ISER. For the experiment video, see
https://youtu.be/JpGjk4eKC3
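For context, TP-LfD methods in the style of the task-parameterized GMM typically express the demonstrated motion in each reference frame and fuse the resulting per-frame Gaussians by a product in the world frame; frame weights can then scale each frame's precision so that more relevant frames dominate the fused distribution. The sketch below is illustrative only and assumes this precision-weighted fusion; the function `fuse_frames` and its arguments are hypothetical names, not the paper's actual implementation.

```python
import numpy as np

def fuse_frames(mus, sigmas, frames, weights):
    """Combine per-frame Gaussians into one global Gaussian.

    mus[j], sigmas[j]: mean/covariance of the skill expressed in frame j.
    frames[j]: (A, b) rotation and origin of frame j in the world.
    weights[j]: relevance weight of frame j (higher = more influential).
    """
    dim = mus[0].shape[0]
    precision = np.zeros((dim, dim))
    info = np.zeros(dim)
    for (A, b), mu, sigma, w in zip(frames, mus, sigmas, weights):
        mu_g = A @ mu + b                 # map local mean to the world frame
        sigma_g = A @ sigma @ A.T         # map local covariance to the world frame
        lam = w * np.linalg.inv(sigma_g)  # frame weight scales the precision
        precision += lam
        info += lam @ mu_g
    cov = np.linalg.inv(precision)
    return cov @ info, cov                # fused mean and covariance

# Example: two 2-D frames; frame 0 is deemed twice as relevant (weights assumed).
A = np.eye(2)
mu, cov = fuse_frames(
    mus=[np.array([0.0, 1.0]), np.array([1.0, 0.0])],
    sigmas=[np.eye(2) * 0.1, np.eye(2) * 0.4],
    frames=[(A, np.zeros(2)), (A, np.array([0.5, 0.5]))],
    weights=[2.0, 1.0],
)
```

Under this scheme, a frame whose weight approaches zero contributes almost nothing to the fused Gaussian, which is one plausible mechanism for exploiting frame relevance when only a few demonstrations are available.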