Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks
Recurrent neural networks (RNNs) in the brain and in silico excel at solving tasks with intricate temporal dependencies. Long timescales required for solving such tasks can arise from properties of individual neurons (single-neuron timescale, τ, e.g., membrane time constant in biological neurons) or recurrent interactions among them (network-mediated timescale). However, the contribution of each mechanism for optimally solving memory-dependent tasks remains poorly understood. Here, we train RNNs to solve N-parity and N-delayed match-to-sample tasks with increasing memory requirements controlled by N, by simultaneously optimizing recurrent weights and τs. We find that for both tasks RNNs develop longer timescales with increasing N, but depending on the learning objective, they use different mechanisms. Two distinct curricula define learning objectives: sequential learning of a single N (single-head) or simultaneous learning of multiple Ns (multi-head). Single-head networks increase their τ with N and are able to solve tasks for large N, but they suffer from catastrophic forgetting. However, multi-head networks, which are explicitly required to hold multiple concurrent memories, keep τ constant and develop longer timescales through recurrent connectivity. Moreover, we show that the multi-head curriculum increases training speed and network stability to ablations and perturbations, and allows RNNs to generalize better to tasks beyond their training regime. This curriculum also significantly improves training GRUs and LSTMs for large-N tasks. Our results suggest that adapting timescales to task requirements via recurrent interactions allows learning more complex objectives and improves the RNN's performance.
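A minimal sketch of the kind of setup this abstract describes, assuming a standard leaky-RNN update in which each neuron's state decays with its own trainable timescale τ; the class, network size, head set N ∈ {2, …, 5}, and parity-label helper below are illustrative choices, not the authors' code.

```python
# Sketch: leaky RNN with trainable per-neuron timescales tau, trained on
# N-parity with a multi-head readout (one head per memory length N).
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class LeakyRNN(nn.Module):
    def __init__(self, n_in=1, n_hidden=64, heads=(2, 3, 4, 5)):
        super().__init__()
        self.n_hidden = n_hidden
        self.w_in = nn.Linear(n_in, n_hidden)
        self.w_rec = nn.Linear(n_hidden, n_hidden, bias=False)
        # log-parametrization keeps each timescale tau >= 1 during training
        self.log_tau = nn.Parameter(torch.zeros(n_hidden))
        # multi-head curriculum: one binary readout per memory length N
        self.readouts = nn.ModuleDict(
            {str(N): nn.Linear(n_hidden, 2) for N in heads})

    def forward(self, x):                       # x: (batch, time, n_in)
        tau = 1.0 + torch.exp(self.log_tau)     # per-neuron timescale
        alpha = 1.0 / tau                       # integration rate in (0, 1)
        h = x.new_zeros(x.shape[0], self.n_hidden)
        for t in range(x.shape[1]):
            # leaky integration: neurons with large tau change slowly,
            # implementing a long single-neuron timescale
            h = (1 - alpha) * h + alpha * torch.tanh(
                self.w_in(x[:, t]) + self.w_rec(h))
        return {N: head(h) for N, head in self.readouts.items()}

def parity_labels(x, N):
    """N-parity target: parity of the last N input bits."""
    return x[:, -N:, 0].long().sum(dim=1) % 2

model = LeakyRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randint(0, 2, (128, 50, 1)).float()   # random binary sequences
logits = model(x)
loss = sum(nn.functional.cross_entropy(logits[str(N)], parity_labels(x, N))
           for N in (2, 3, 4, 5))               # all heads trained at once
opt.zero_grad()
loss.backward()
opt.step()
```

Summing the losses of all heads forces the shared recurrent network to hold several parities concurrently, the defining constraint of the multi-head curriculum; a single-head variant would instead train one readout on a single N at a time.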
Intrinsic timescales in the visual cortex change with selective attention and reflect spatial connectivity
Estimation of autocorrelation timescales with Approximate Bayesian Computations
Timescales characterize the pace of change for many dynamic processes in nature: radioactive decay, metabolization of substances, memory decay in neural systems, and epidemic spreads. Measuring timescales from experimental data can reveal underlying mechanisms and constrain theoretical models. Timescales are usually estimated by fitting the autocorrelation of sample time-series with exponential decay functions. We show that this standard procedure often fails to recover the correct timescales, exhibiting large estimation errors due to a statistical bias in autocorrelations of finite data samples. To overcome this bias, we develop a method using adaptive Approximate Bayesian Computations. Our method estimates the timescales by fitting the autocorrelation of sample data with a generative model based on a mixture of Ornstein-Uhlenbeck processes. The method accounts for finite sample size and noise in data and returns a posterior distribution of timescales quantifying the estimation uncertainty. We demonstrate how the posterior distribution can be used for model selection to compare alternative hypotheses about the dynamics of the underlying process. Our method accurately recovers the correct timescales on synthetic data from various processes with known ground truth dynamics. We illustrate its application to electrophysiological recordings from the primate cortex.
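Both the failure mode and the remedy can be illustrated in a few lines. The sketch below is a simplified illustration, not the authors' implementation or released code: the discrete-time Ornstein-Uhlenbeck parametrization, prior range, and plain rejection-ABC variant (in place of their adaptive scheme and OU-mixture generative model) are assumptions. It simulates an OU process with known timescale, shows that the standard exponential fit to the sample autocorrelation is biased, and then recovers an approximate posterior by matching autocorrelations of simulated data instead.

```python
# Illustration of the finite-sample autocorrelation bias and a plain
# rejection-ABC fix. Parameters and prior range are arbitrary assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
tau_true, T, n_trials, max_lag = 20.0, 500, 100, 100

def ou_trials(tau, n):
    """Simulate n discrete-time OU trials: x[t+1] = a*x[t] + noise."""
    a = 1 - 1 / tau
    x = np.empty((n, T))
    x[:, 0] = rng.standard_normal(n) / np.sqrt(1 - a * a)  # stationary start
    for t in range(T - 1):
        x[:, t + 1] = a * x[:, t] + rng.standard_normal(n)
    return x

def mean_autocorr(trials):
    """Trial-averaged sample autocorrelation up to max_lag, computed the
    standard (biased) way: per-trial mean subtraction and normalization."""
    x = trials - trials.mean(axis=1, keepdims=True)
    c = np.array([(x[:, :T - k] * x[:, k:]).sum(axis=1)
                  for k in range(max_lag)])
    return (c / c[0]).mean(axis=1)

ac_data = mean_autocorr(ou_trials(tau_true, n_trials))

# Standard procedure: fit an exponential decay to the autocorrelation.
lags = np.arange(max_lag)
tau_fit = curve_fit(lambda t, tau: np.exp(-t / tau), lags, ac_data,
                    p0=[10.0])[0][0]
print(f"ground truth tau: {tau_true}, exponential fit: {tau_fit:.1f}")

# Rejection-ABC sketch: propose tau from a flat prior, simulate the
# generative model with the same trial length and count as the data, and
# keep the closest-matching proposals. Since the simulations carry the
# same finite-sample bias as the data, the comparison cancels it.
proposals = rng.uniform(1.0, 60.0, 300)
dists = [np.mean((mean_autocorr(ou_trials(tau, n_trials)) - ac_data) ** 2)
         for tau in proposals]
posterior = proposals[np.argsort(dists)[:30]]   # closest 10% of proposals
print(f"ABC posterior tau: {posterior.mean():.1f} +/- {posterior.std():.1f}")
```

The key design point, which the sketch shares with the method described above, is that the generative model is simulated under the same finite sample size as the data, so the estimate is read off from matched (equally biased) autocorrelations rather than from an idealized exponential.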