Towards Learning to Speak and Hear Through Multi-Agent Communication over a Continuous Acoustic Channel
While multi-agent reinforcement learning has been used as an effective means
to study emergent communication between agents, existing work has focused
almost exclusively on communication with discrete symbols. Human communication
often takes place (and emerged) over a continuous acoustic channel; human
infants acquire language in large part through continuous signalling with their
caregivers. We therefore ask: Are we able to observe emergent language between
agents with a continuous communication channel trained through reinforcement
learning? And if so, what is the impact of channel characteristics on the
emerging language? We propose an environment and training methodology to serve
as a means to carry out an initial exploration of these questions. We use a
simple messaging environment where a "speaker" agent needs to convey a concept
to a "listener". The speaker is equipped with a vocoder that maps symbols to a
continuous waveform; this waveform is passed over a lossy continuous channel,
and the listener must map the received signal back to the concept. Using deep
Q-learning, we show that basic compositionality emerges in the learned language
representations. We find that noise is essential in the communication channel
when conveying unseen concept combinations. We also show that we can ground the
emergent communication by introducing a caregiver predisposed to "hearing" or
"speaking" English. Finally, we describe how our platform serves as a starting
point for future work that uses a combination of deep reinforcement learning
and multi-agent systems to study our questions of continuous signalling in
language learning and emergence.
Comment: 12 pages, 6 figures, 3 tables; under review as a conference paper at
ICLR 202
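The speaker–channel–listener pipeline described above can be sketched minimally. The snippet below assumes an additive-Gaussian model of the lossy channel; the paper's actual channel characteristics and vocoder may differ:

```python
import numpy as np

def lossy_channel(waveform, noise_std=0.1, rng=None):
    """Pass a continuous waveform through a lossy channel, modelled
    here as additive Gaussian noise (a simplifying assumption)."""
    rng = np.random.default_rng(0) if rng is None else rng
    return waveform + rng.normal(0.0, noise_std, size=waveform.shape)

# Toy "vocoder" output: a sine wave standing in for one symbol
t = np.linspace(0.0, 1.0, 200)
wave = np.sin(2 * np.pi * 5 * t)
received = lossy_channel(wave, noise_std=0.1)
```

The abstract's finding that channel noise aids generalisation to unseen concept combinations would, under this model, correspond to sweeping `noise_std` during training.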
Emergent Quantized Communication
The field of emergent communication aims to understand the characteristics of
communication as it emerges from artificial agents solving tasks that require
information exchange. Communication with discrete messages is considered a
desired characteristic, for both scientific and applied reasons. However,
training a multi-agent system with discrete communication is not
straightforward, requiring either reinforcement learning algorithms or relaxing
the discreteness requirement via a continuous approximation such as the
Gumbel-softmax. Both these solutions result in poor performance compared to
fully continuous communication. In this work, we propose an alternative
approach to achieve discrete communication -- quantization of communicated
messages. Using message quantization allows us to train the model end-to-end,
achieving superior performance in multiple setups. Moreover, quantization is a
natural framework that runs the gamut from continuous to discrete
communication. Thus, it lays the groundwork for a broader view of multi-agent
communication in the deep learning era.
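A minimal sketch of the message-quantization idea, assuming messages lie in [0, 1] and are snapped to the nearest of k evenly spaced levels. In end-to-end training, gradients would typically be passed through the rounding step (e.g. with a straight-through estimator); that detail is omitted here:

```python
import numpy as np

def quantize_message(msg, num_levels=4):
    """Snap each continuous message component in [0, 1] to the
    nearest of `num_levels` evenly spaced values."""
    levels = np.linspace(0.0, 1.0, num_levels)
    idx = np.abs(msg[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

msg = np.array([0.12, 0.49, 0.90])
q = quantize_message(msg, num_levels=4)   # levels: 0, 1/3, 2/3, 1
```

Varying `num_levels` illustrates the "gamut" the abstract mentions: two levels give binary (discrete) messages, while many levels approach fully continuous communication.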
Knowledge Distillation from Language-Oriented to Emergent Communication for Multi-Agent Remote Control
In this work, we compare emergent communication (EC) built upon multi-agent
deep reinforcement learning (MADRL) and language-oriented semantic
communication (LSC) empowered by a pre-trained large language model (LLM) using
human language. In a multi-agent remote navigation task, with multimodal input
data comprising location and channel maps, it is shown that EC incurs high
training cost and struggles when using multimodal data, whereas LSC yields high
inference computing cost due to the LLM's large size. To address their
respective bottlenecks, we propose a novel framework of language-guided EC
(LEC) by guiding the EC training using LSC via knowledge distillation (KD).
Simulations corroborate that LEC achieves faster travel time while avoiding
areas with poor channel conditions, as well as speeding up the MADRL training
convergence by up to 61.8% compared to EC.
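The distillation step can be illustrated with the standard temperature-softened KL objective used in knowledge distillation. The logits and the exact loss form here are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    the usual knowledge-distillation objective."""
    p = softmax(np.asarray(teacher_logits) / temperature)
    q = softmax(np.asarray(student_logits) / temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# The EC agent's action logits are nudged toward the LSC teacher's
teacher = [2.0, 0.5, -1.0]   # hypothetical LSC output
student = [1.0, 1.0, 0.0]    # hypothetical EC output
loss = kd_loss(student, teacher)
```

In the proposed LEC framework this term would be added to the MADRL objective, so the EC policy learns faster by imitating the LSC teacher's soft decisions.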