Over the past few years, probabilistic approaches to imitation learning
have earned an important place in the literature. One of their most prominent
features, in addition to extracting a mean trajectory from task demonstrations,
is that they provide a variance estimate. The intuitive meaning of this
variance, however, differs across techniques, indicating either
variability or uncertainty. In this paper we leverage kernelized movement
primitives (KMP) to provide a new perspective on imitation learning by
predicting variability, correlations and uncertainty about robot actions. This
rich set of information is used in combination with optimal controller fusion
to learn actions from data, with two main advantages: i) robots become safe
when uncertain about their actions and ii) they are able to leverage partial
demonstrations, given as elementary sub-tasks, to optimally perform a
higher-level, more complex task. We showcase our approach in a painting task, where a
human user and a KUKA robot collaborate to paint a wooden board. The task is
divided into two sub-tasks, and we show that, with our approach, the robot
becomes compliant (hence safe) outside the training regions and executes the
two sub-tasks with optimal gains.
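
As a rough illustration of the core idea, the sketch below (not the paper's implementation; the kernel, regularizer, and gain limits are assumed for the example) uses kernel regression to predict an action together with a predictive variance, then maps that variance to a stiffness gain so the controller stays stiff inside the demonstrated region and turns compliant outside it.

```python
# Minimal 1-D sketch in the spirit of KMP-based uncertainty-aware control:
# kernel regression over a demonstrated reference trajectory yields a mean
# action and a predictive variance; the variance modulates a stiffness gain.
# All parameter values (lengthscale, lam, k_min/k_max) are illustrative
# assumptions, not taken from the paper.
import numpy as np

def rbf(a, b, lengthscale=0.1):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Reference trajectory distilled from demonstrations: times, means, variances.
t_ref = np.linspace(0.0, 1.0, 20)       # demonstrated time steps
mu_ref = np.sin(2 * np.pi * t_ref)      # stand-in for demonstrated positions
var_ref = 0.01 * np.ones_like(t_ref)    # stand-in for demonstrated variability

lam = 1.0                               # KMP-style regularizer (assumed value)
K = rbf(t_ref, t_ref) + lam * np.diag(var_ref)
K_inv = np.linalg.inv(K)

def predict(t_query):
    """Predict mean action and predictive variance at query times."""
    k_star = rbf(t_query, t_ref)
    mean = k_star @ K_inv @ mu_ref
    # Predictive variance grows away from the demonstrated region.
    var = np.clip(1.0 - np.sum((k_star @ K_inv) * k_star, axis=1), 0.0, None)
    return mean, var

def stiffness(var, k_min=5.0, k_max=500.0):
    """Map predictive variance to a stiffness gain: uncertain -> compliant."""
    return k_min + (k_max - k_min) * np.exp(-10.0 * var)

t_query = np.array([0.5, 2.0])          # inside vs. outside the training region
mean, var = predict(t_query)
print(stiffness(var))                   # high gain at t=0.5, low (compliant) at t=2.0
```

Run as-is, the query inside the demonstrated interval receives a stiff gain, while the out-of-distribution query receives a near-minimal gain, mirroring the compliant-when-uncertain behavior described above.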