In this paper, we investigate whether artificial agents can develop a shared
language in an ecological setting where communication relies on a sensory-motor
channel. To this end, we introduce the Graphical Referential Game (GREG) where
a speaker must produce a graphical utterance to name a visual referent object
while a listener has to select the corresponding object among distractor
referents, given the delivered message. The utterances are images of drawings
produced using dynamical motor primitives combined with a sketching library. To
tackle GREG, we present CURVES: a multimodal contrastive deep learning mechanism
that represents the energy (alignment) between named referents and utterances,
and that generates utterances through gradient ascent on the learned energy landscape. We
demonstrate that CURVES not only succeeds at solving GREG but also enables
agents to self-organize a language that generalizes to feature compositions
never seen during training. In addition to evaluating the communication
performance of our approach, we explore the structure of the emerging
language. Specifically, we show that the resulting language forms a coherent
lexicon shared between agents and that basic compositional rules on the
graphical productions could not explain the compositional generalization.
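
To make the mechanism described above concrete, the following is a minimal, hypothetical sketch (in PyTorch) of an energy-based speaker in the spirit of CURVES: an alignment network scores referent/utterance pairs, is trained with an InfoNCE-style contrastive loss over matched pairs, and produces an utterance by gradient ascent on the learned score. All names (`EnergyModel`, `generate_utterance`, `contrastive_loss`), architectures, and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EnergyModel(nn.Module):
    """Scores how well an utterance aligns with a referent (higher = better)."""

    def __init__(self, ref_dim=64, utt_dim=32, hidden=128):
        super().__init__()
        self.ref_enc = nn.Sequential(
            nn.Linear(ref_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.utt_enc = nn.Sequential(
            nn.Linear(utt_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, referent, utterance):
        # Energy as cosine alignment between the two modality embeddings.
        r = self.ref_enc(referent)
        u = self.utt_enc(utterance)
        return torch.cosine_similarity(r, u, dim=-1)


def contrastive_loss(model, referents, utterances, temp=0.1):
    """InfoNCE-style loss: matched (referent_i, utterance_i) pairs are
    positives; all other pairings in the batch act as negatives."""
    r = F.normalize(model.ref_enc(referents), dim=-1)   # (B, H)
    u = F.normalize(model.utt_enc(utterances), dim=-1)  # (B, H)
    logits = r @ u.t() / temp                           # pairwise alignment matrix
    targets = torch.arange(len(r))                      # diagonal = positives
    return F.cross_entropy(logits, targets)


def generate_utterance(model, referent, utt_dim=32, steps=200, lr=0.1):
    """Speaker step: gradient ascent on the learned energy landscape,
    optimizing utterance parameters (e.g. motor-primitive controls)."""
    utterance = torch.randn(1, utt_dim, requires_grad=True)
    opt = torch.optim.Adam([utterance], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        energy = model(referent, utterance).mean()
        (-energy).backward()  # ascend the energy by descending its negative
        opt.step()
    return utterance.detach()
```

In this sketch, the same learned energy supports both roles: the listener can select the distractor-set referent with the highest score for a received utterance, while the speaker inverts the model by optimizing the utterance itself.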