Structural Inductive Biases in Emergent Communication
In order to communicate, humans flatten a complex representation of ideas and
their attributes into a single word or a sentence. We investigate the impact of
representation learning in artificial agents by developing graph referential
games. We empirically show that agents parametrized by graph neural networks
develop a more compositional language compared to bag-of-words and sequence
models, which allows them to systematically generalize to new combinations of
familiar features.
Comment: The first two authors contributed equally. Poster presented at CogSci 202
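To make the referential-game setup concrete, here is a minimal toy sketch of such a game: a speaker names an object described by discrete features and a listener must pick it out among distractors. This is an illustration of the game protocol only, not the paper's GNN-parametrized agents; the feature inventory and the one-symbol-per-feature message code are invented for the example.

```python
import random

# Hypothetical feature inventory for the toy game.
FEATURES = {"color": ["red", "blue", "green"], "shape": ["circle", "square"]}

def sample_object():
    """Sample an object as a dict of feature -> value."""
    return {attr: random.choice(vals) for attr, vals in FEATURES.items()}

def speaker(obj):
    """A perfectly compositional speaker: one symbol per feature value."""
    return tuple(obj[attr] for attr in sorted(FEATURES))

def listener(message, candidates):
    """Pick the candidate whose composed description matches the message."""
    for obj in candidates:
        if speaker(obj) == message:
            return obj
    return None

def play_round(n_distractors=3):
    target = sample_object()
    candidates = [target] + [sample_object() for _ in range(n_distractors)]
    random.shuffle(candidates)
    guess = listener(speaker(target), candidates)
    return guess == target

# A fully compositional code lets the listener resolve any referent,
# including feature combinations never seen together before.
accuracy = sum(play_round() for _ in range(1000)) / 1000
print(f"accuracy with a compositional code: {accuracy:.2f}")
```

Because the toy code is injective over feature combinations, communication succeeds on every round; the interesting question the paper studies is whether learned agents converge to such a code.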
Contrastive Multimodal Learning for Emergence of Graphical Sensory-Motor Communication
In this paper, we investigate whether artificial agents can develop a shared
language in an ecological setting where communication relies on a sensory-motor
channel. To this end, we introduce the Graphical Referential Game (GREG) where
a speaker must produce a graphical utterance to name a visual referent object
while a listener has to select the corresponding object among distractor
referents, given the delivered message. The utterances are drawing images
produced using dynamical motor primitives combined with a sketching library. To
tackle GREG we present CURVES: a multimodal contrastive deep learning mechanism
that represents the energy (alignment) between named referents and utterances
generated through gradient ascent on the learned energy landscape. We
demonstrate that CURVES not only succeeds at solving the GREG but also enables
agents to self-organize a language that generalizes to feature compositions
never seen during training. In addition to evaluating the communication
performance of our approach, we also explore the structure of the emerging
language. Specifically, we show that the resulting language forms a coherent
lexicon shared between agents and that basic compositional rules on the
graphical productions could not explain the compositional generalization
- …
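The abstract above describes generating utterances by gradient ascent on a learned energy (alignment) landscape. A minimal sketch of that generation loop follows, with one loud caveat: the quadratic energy below is hand-crafted for illustration, whereas CURVES learns its energy with a multimodal contrastive objective over drawings and referents.

```python
import numpy as np

def energy(utterance, referent):
    """Toy alignment energy: higher = better aligned. Here, the negative
    squared distance between utterance parameters and a referent embedding."""
    return -np.sum((utterance - referent) ** 2)

def generate_utterance(referent, steps=200, lr=0.1):
    """Produce an utterance by ascending the energy landscape w.r.t.
    the utterance parameters, starting from a blank utterance."""
    u = np.zeros_like(referent)
    for _ in range(steps):
        grad = -2.0 * (u - referent)  # analytic gradient of energy w.r.t. u
        u = u + lr * grad             # gradient ascent step
    return u

referent = np.array([0.8, -0.3, 1.5])  # hypothetical referent embedding
utterance = generate_utterance(referent)
print(np.round(utterance, 3))
```

With this toy energy the utterance converges to the referent embedding; in the paper's setting the optimized parameters instead drive motor primitives that render a drawing.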