In recent years, much progress has been made in learning robotic manipulation
policies that follow natural language instructions. Such methods typically
learn from corpora of robot-language data that was either collected with
specific tasks in mind or expensively re-labeled by humans with rich language
descriptions in hindsight. Recently, large-scale pretrained vision-language
models (VLMs) like CLIP or ViLD have been applied to robotics for learning
representations and scene descriptors. Can these pretrained models serve as
automatic labelers for robot data, effectively importing Internet-scale
knowledge into existing datasets to make them useful even for tasks that are
not reflected in their ground truth annotations? To address this question, we
introduce Data-driven Instruction Augmentation for Language-conditioned control
(DIAL): we leverage the semantic understanding of CLIP to generate
semi-supervised language labels that propagate knowledge onto large datasets of
unlabeled demonstration data, and then train language-conditioned policies on
the augmented datasets, as sketched below. This method acquires useful language
descriptions far more cheaply than human annotation, enabling more efficient
label coverage of large-scale datasets. We apply DIAL to a challenging
real-world robotic manipulation domain where 96.5% of the 80,000 demonstrations
do not contain crowd-sourced language annotations. DIAL enables imitation
learning policies to acquire new capabilities and generalize to 60 novel
instructions unseen in the original dataset.
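
To make the relabeling idea concrete, here is a minimal sketch using an
off-the-shelf CLIP model via the openai/CLIP Python package: it scores a set of
candidate instructions against a demonstration's final frame and keeps the
top-scoring candidates as new language labels. The candidate list, the file
path, the use of only the final frame, and the top-k cutoff are illustrative
assumptions for this sketch, not details taken from the paper.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def relabel_episode(final_frame_path, candidate_instructions, top_k=3):
    """Score candidate instructions against one frame; return the top-k."""
    image = preprocess(Image.open(final_frame_path)).unsqueeze(0).to(device)
    text = clip.tokenize(candidate_instructions).to(device)
    with torch.no_grad():
        image_features = model.encode_image(image)
        text_features = model.encode_text(text)
    # Cosine similarity between the frame and every candidate instruction.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    scores = (image_features @ text_features.T).squeeze(0)
    best = scores.topk(min(top_k, len(candidate_instructions)))
    return [(candidate_instructions[i], scores[i].item())
            for i in best.indices.tolist()]

# Example: propose labels for one unlabeled demonstration.
labels = relabel_episode(
    "episode_0001/last_frame.png",  # hypothetical path
    ["pick up the green can", "open the top drawer",
     "push the red block to the left"],
)
```

In a pipeline along the lines the abstract describes, the resulting
(instruction, episode) pairs would be added to the training set for a
language-conditioned imitation learning policy.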