Recurrent neural networks (RNNs) in the brain and in silico excel at solving
tasks with intricate temporal dependencies. Long timescales required for
solving such tasks can arise from properties of individual neurons
(single-neuron timescale, τ, e.g., membrane time constant in biological
neurons) or from recurrent interactions among them (network-mediated timescale).
However, the contribution of each mechanism to optimally solving
memory-dependent tasks remains poorly understood. Here, we train RNNs to solve
N-parity and N-delayed match-to-sample tasks whose memory requirements
increase with N, simultaneously optimizing recurrent weights and τs. We find
that, for both tasks, RNNs develop longer timescales with increasing N but,
depending on the learning objective, rely on different mechanisms. Two
distinct curricula define the learning objectives: sequential learning of a
single N (single-head) or simultaneous learning of multiple
Ns (multi-head). Single-head networks increase their τ with N and are
able to solve tasks for large N, but they suffer from catastrophic
forgetting. In contrast, multi-head networks, which are explicitly required to hold
multiple concurrent memories, keep τ constant and develop longer
timescales through recurrent connectivity. Moreover, we show that the
multi-head curriculum speeds up training, improves network stability under
ablations and perturbations, and allows RNNs to generalize better to tasks
beyond their training regime. This curriculum also significantly improves
the training of GRUs and LSTMs on large-N tasks. Our results suggest that adapting
timescales to task requirements via recurrent interactions allows learning more
complex objectives and improves RNN performance.
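
To make the setup concrete, the PyTorch sketch below implements the three ingredients described above: a leaky RNN with trainable single-neuron timescales τ, the N-parity task (taking the label to be the parity of the last N input bits, one common reading of the task), and a multi-head readout trained on several Ns at once. It is a minimal illustration under our own assumptions (log-parametrized τ keeping the leak rate 1/τ in (0, 1), tanh units, a fixed trial length, and one linear readout per N); the names `LeakyRNN` and `parity_targets` are hypothetical and not taken from the paper's code.

```python
import torch
import torch.nn as nn

class LeakyRNN(nn.Module):
    """Leaky-integrator RNN whose per-neuron timescales tau are optimized
    jointly with the recurrent weights (illustrative sketch, not the paper's code)."""

    def __init__(self, n_in: int, n_rec: int, n_heads: int):
        super().__init__()
        self.n_rec = n_rec
        self.w_in = nn.Linear(n_in, n_rec)
        self.w_rec = nn.Linear(n_rec, n_rec, bias=False)
        # Log-parametrization: tau = 1 + exp(log_tau) > 1, so 1/tau stays in (0, 1).
        self.log_tau = nn.Parameter(torch.zeros(n_rec))
        # One readout per N; n_heads = 1 recovers the single-head setup.
        self.heads = nn.ModuleList(nn.Linear(n_rec, 2) for _ in range(n_heads))

    def forward(self, x):                                 # x: (time, batch, n_in)
        alpha = 1.0 / (1.0 + torch.exp(self.log_tau))     # per-neuron leak rate 1/tau
        h = x.new_zeros(x.shape[1], self.n_rec)
        for x_t in x:                                     # leaky-integrator update
            h = (1 - alpha) * h + alpha * torch.tanh(self.w_in(x_t) + self.w_rec(h))
        return [head(h) for head in self.heads]           # one logit pair per head


def parity_targets(x, n):
    """N-parity label: parity of the last n bits of the input stream."""
    return (x[-n:].sum(dim=0).squeeze(-1) % 2).long()


# Multi-head curriculum: one shared input stream, one loss term per N.
Ns = (2, 3, 4)
model = LeakyRNN(n_in=1, n_rec=64, n_heads=len(Ns))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.randint(0, 2, (50, 64, 1)).float()          # (time, batch, 1) bit stream
    loss = sum(nn.functional.cross_entropy(out, parity_targets(x, n))
               for n, out in zip(Ns, model(x)))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Setting `n_heads=1` and retraining on one N at a time, increasing N once performance crosses a threshold, would correspond to the single-head curriculum; the loop above instead optimizes all Ns simultaneously, as in the multi-head curriculum.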