7 research outputs found
Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks
Recurrent neural networks (RNNs) in the brain and in silico excel at solving tasks with intricate temporal dependencies. Long timescales required for solving such tasks can arise from properties of individual neurons (single-neuron timescale, τ, e.g., membrane time constant in biological neurons) or from recurrent interactions among them (network-mediated timescale). However, the contribution of each mechanism to optimally solving memory-dependent tasks remains poorly understood. Here, we train RNNs to solve N-parity and N-delayed match-to-sample tasks with increasing memory requirements controlled by N, by simultaneously optimizing recurrent weights and τs. We find that for both tasks RNNs develop longer timescales with increasing N but, depending on the learning objective, they use different mechanisms. Two distinct curricula define learning objectives: sequential learning of a single N (single-head) or simultaneous learning of multiple Ns (multi-head). Single-head networks increase their τ with N and are able to solve tasks for large N, but they suffer from catastrophic forgetting. However, multi-head networks, which are explicitly required to hold multiple concurrent memories, keep τ constant and develop longer timescales through recurrent connectivity. Moreover, we show that the multi-head curriculum increases training speed and network stability to ablations and perturbations, and allows RNNs to generalize better to tasks beyond their training regime. This curriculum also significantly improves training GRUs and LSTMs for large-N tasks. Our results suggest that adapting timescales to task requirements via recurrent interactions allows learning more complex objectives and improves the RNN's performance.
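For concreteness, here is a minimal PyTorch sketch of the kind of setup the abstract describes: a leaky RNN whose per-neuron timescales τ are optimized jointly with the recurrent weights, with one readout head per N as in the multi-head curriculum. The Euler discretization, the parametrization of τ, and all names are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch: leaky RNN with trainable per-neuron timescales (tau) and one
# readout head per memory load N (multi-head curriculum). Illustrative
# assumptions throughout; not the paper's exact implementation.
import torch
import torch.nn as nn

class LeakyRNN(nn.Module):
    def __init__(self, n_in, n_rec, Ns):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_rec)
        self.w_rec = nn.Linear(n_rec, n_rec, bias=False)
        self.log_tau = nn.Parameter(torch.zeros(n_rec))   # tau learned jointly with weights
        # one linear readout head per N; all heads are trained simultaneously
        self.heads = nn.ModuleList([nn.Linear(n_rec, 2) for _ in Ns])

    def forward(self, x):                        # x: (time, batch, n_in)
        tau = 1.0 + torch.exp(self.log_tau)      # keep timescales >= 1 step
        alpha = 1.0 / tau
        h = x.new_zeros(x.shape[1], self.w_rec.in_features)
        for x_t in x:                            # Euler step of leaky dynamics
            h = (1 - alpha) * h + alpha * torch.tanh(self.w_in(x_t) + self.w_rec(h))
        return [head(h) for head in self.heads]  # one logit pair per N
```

In this sketch, each head would be trained on the parity of the last N input bits (or the match at delay N); the single-head curriculum would instead use a single readout and move to the next N only after the current task is solved.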
Environmental variability and network structure determine the optimal plasticity mechanisms in embodied agents
The dynamical regime and its importance for evolvability, task performance and generalization
It has long been hypothesized that operating close to the critical state is
beneficial for natural and artificial systems. We test this hypothesis by
evolving foraging agents controlled by neural networks that can change the
system's dynamical regime throughout evolution. Surprisingly, we find that all
populations, regardless of their initial regime, evolve to be subcritical in
simple tasks and even strongly subcritical populations can reach comparable
performance. We hypothesize that the moderately subcritical regime combines the
benefits of generalizability and adaptability brought by closeness to
criticality with the stability of the dynamics characteristic of subcritical systems. Through a resilience analysis, we find that initially critical agents maintain their fitness level even under environmental changes and degrade slowly with increasing perturbation strength. On the other hand, subcritical agents, despite originally evolving to the same fitness, are often rendered utterly inadequate and degrade faster. We conclude that although the subcritical
regime is preferable for a simple task, the optimal deviation from criticality
depends on the task difficulty: for harder tasks, agents evolve closer to
criticality. Furthermore, subcritical populations cannot find the path to
decrease their distance to criticality. In summary, our study suggests that
initializing models near criticality is important to find an optimal and
flexible solution.
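For intuition, a common proxy for the dynamical regime of a recurrent network is the spectral radius of its weight matrix (ρ < 1 subcritical, ρ ≈ 1 critical, ρ > 1 supercritical). The NumPy sketch below computes this proxy and a signed distance to criticality; it illustrates the general idea only and is not the specific regime measure used in this study.

```python
# Sketch: spectral radius of a recurrent weight matrix as a proxy for the
# dynamical regime (rho < 1 subcritical, rho ~ 1 critical, rho > 1
# supercritical). A common heuristic, not the paper's regime measure.
import numpy as np

def spectral_radius(w_rec: np.ndarray) -> float:
    """Largest absolute eigenvalue of the recurrent weight matrix."""
    return float(np.max(np.abs(np.linalg.eigvals(w_rec))))

def distance_to_criticality(w_rec: np.ndarray) -> float:
    """Signed distance from the critical point rho = 1."""
    return spectral_radius(w_rec) - 1.0

rng = np.random.default_rng(0)
n = 200
g = 0.8                                        # gain g < 1 gives a subcritical network
w = rng.normal(0.0, g / np.sqrt(n), (n, n))    # spectral radius ~ g (circular law)
print(distance_to_criticality(w))              # approximately -0.2 for this ensemble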
Computational modelling of the long-term effects of brain stimulation on the local and global structural connectivity of epileptic patients.
Computational studies of how different network parameters influence the dynamic and topological effects of brain stimulation can enhance our understanding of why outcomes differ between individuals. In this study, a brain stimulation session, along with the subsequent post-stimulation brain activity, is simulated for a period of one day using a network of modified Wilson-Cowan oscillators coupled according to diffusion-imaging-based structural connectivity. We use this computational model to examine how differences in inter-region connectivity and in the excitability of the stimulated regions at the time of stimulation affect post-stimulation behaviours. Our findings indicate that the initial inter-region connectivity can heavily affect the changes that stimulation induces in the connectivity of the network. Moreover, differences in the excitability of the stimulated regions seem to lead to different post-stimulation connectivity changes across the model network, including in the internal connectivity of non-stimulated regions.
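The following NumPy sketch shows the general shape of such a model: standard Wilson-Cowan excitatory/inhibitory units coupled through a structural connectivity matrix C and integrated with Euler steps. The parameter values, the sigmoid, and the coupling term are illustrative assumptions; the study uses a modified Wilson-Cowan model with connectivity that changes over the simulated day.

```python
# Sketch: Wilson-Cowan excitatory (E) / inhibitory (I) units coupled through
# a structural connectivity matrix C, integrated with Euler steps.
# Parameters and coupling are illustrative assumptions, not the study's model.
import numpy as np

def sigmoid(x, a=1.0, theta=4.0):
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def simulate(C, T=10.0, dt=1e-3, P=1.25, tau_e=0.01, tau_i=0.02,
             w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0, k=1.0):
    n = C.shape[0]
    E = np.zeros(n)
    I = np.zeros(n)
    for _ in range(int(T / dt)):
        coupling = k * C @ E                       # long-range excitatory input
        dE = (-E + sigmoid(w_ee * E - w_ei * I + coupling + P)) / tau_e
        dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau_i
        E += dt * dE
        I += dt * dI
    return E, I
```

In such a setup, a stimulation session could be modelled as a transient increase in the external drive P to the stimulated regions, with a plasticity rule updating C between integration steps; both are omitted here for brevity.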