Improved Learning in Evolution Strategies via Sparser Inter-Agent Network Topologies
We draw upon a previously largely untapped literature on human collective
intelligence as a source of inspiration for improving deep learning. Implicit
in many algorithms that attempt to solve Deep Reinforcement Learning (DRL)
tasks is the network of processors along which parameter values are shared. So
far, existing approaches have implicitly utilized fully-connected networks, in
which all processors are connected. However, the scientific literature on human
collective intelligence suggests that complete networks may not always be the
most effective information network structures for distributed search through
complex spaces. Here we show that alternative topologies can improve deep
neural network training: we find that agents communicating over sparser networks reach higher rewards faster, yielding learning improvements at lower communication cost.

Comment: This paper is obsolete.
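Below is a minimal, self-contained sketch of the idea described in the abstract: an Evolution Strategies update in which each worker mixes parameter information only with neighbors in a sparse communication graph, rather than with every other worker as in a fully-connected topology. It is not the authors' implementation; the quadratic `fitness` objective, the ring topology in `ring_neighbors`, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    # Hypothetical stand-in for an RL episode return: higher is better.
    return -np.sum(theta ** 2)

def ring_neighbors(n_workers, degree=2):
    # Sparse topology: each worker listens to `degree` neighbors on a ring.
    # A fully-connected baseline would instead return all other workers.
    return {i: [(i + d) % n_workers for d in range(1, degree + 1)]
            for i in range(n_workers)}

def sparse_es_step(thetas, neighbors, sigma=0.1, lr=0.05, pop=50):
    new_thetas = []
    for i, theta in enumerate(thetas):
        # Standard ES gradient estimate from Gaussian perturbations.
        eps = rng.standard_normal((pop, theta.size))
        rewards = np.array([fitness(theta + sigma * e) for e in eps])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        grad = eps.T @ rewards / (pop * sigma)
        local = theta + lr * grad
        # Communication step: average only with neighbors, not all workers.
        mixed = np.mean([thetas[j] for j in neighbors[i]] + [local], axis=0)
        new_thetas.append(mixed)
    return new_thetas

n_workers, dim = 8, 10
thetas = [rng.standard_normal(dim) for _ in range(n_workers)]
neighbors = ring_neighbors(n_workers)
for step in range(100):
    thetas = sparse_es_step(thetas, neighbors)
print("best fitness:", max(fitness(t) for t in thetas))
```

Swapping `ring_neighbors` for a function that returns every other worker recovers the fully-connected baseline, so the communication graph is the only quantity that changes between the two settings in this sketch.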