Spatiotemporal Computations of an Excitable and Plastic Brain: Neuronal Plasticity Leads to Noise-Robust and Noise-Constructive Computations
It has long been established that neuronal plasticity plays a central role in generating neural function and computation. Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli, even though such stimuli constitute the norm, rather than the exception, of the brain's input. Here, we introduce a geometric theory of learning spatiotemporal computations through neuronal plasticity. To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks. Backed up by computer simulations and numerical analysis, we show that two canonical and widespread forms of neuronal plasticity, namely spike-timing-dependent synaptic plasticity and intrinsic plasticity, are both necessary for creating neural representations such that these computations become realizable. Interestingly, the effects of these forms of plasticity on the emerging neural code relate to properties necessary for both combating and utilizing noise. The neural dynamics also exhibit features of the most likely stimulus in the network's spontaneous activity. These properties of the spatiotemporal neural code resulting from plasticity, being grounded in nature, further consolidate the biological relevance of our findings.
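For readers unfamiliar with the two plasticity rules paired in this abstract, the sketch below illustrates their generic textbook forms, not the authors' specific model: a pair-based STDP update with exponential timing windows, and an intrinsic-plasticity rule that adjusts a neuron's firing threshold toward a homeostatic target rate. All parameter values are illustrative assumptions.

import numpy as np

# Pair-based STDP: the weight change depends on the spike-time
# difference dt = t_post - t_pre (parameters are illustrative).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # timing-window constants (ms)

def stdp_dw(dt_ms):
    """Weight update for a single pre/post spike pair."""
    if dt_ms > 0:   # pre fires before post -> potentiation
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * np.exp(dt_ms / TAU_MINUS)  # post before pre -> depression

def intrinsic_update(threshold, firing_rate, target_rate=5.0, eta=1e-3):
    """Intrinsic plasticity: nudge the firing threshold so that the
    neuron's rate drifts toward a homeostatic target (Hz)."""
    return threshold + eta * (firing_rate - target_rate)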
Learning in clustered spiking networks
Neurons spike on a millisecond time scale, while behaviour typically spans hundreds of milliseconds to seconds and longer. Neurons have to bridge this time gap when computing and learning behaviours of interest. Recent computational work has shown that neural circuits can bridge this gap when connected in specific ways, and that the required connectivity patterns can develop through plasticity rules typically considered biologically plausible. In this thesis, we focus on one type of connectivity, in which excitatory neurons are grouped in clusters. Strong recurrent connectivity within the clusters reverberates the activity and prolongs the time scales in the network. In this way, the clusters of neurons become the basic functional units of the circuit, in line with an increasing number of experimental studies. We study a general architecture in which plastic synapses connect the clustered network to a read-out network. We demonstrate the usefulness of this architecture for two different problems: 1) learning and replaying sequences; 2) learning statistical structure. The time scales in both problems range from hundreds of milliseconds to seconds, and we address the problems through simulation and analysis of spiking networks. We show that the clustered organization circumvents the need for biologically implausible mathematical optimizations and instead allows the use of unsupervised spike-timing-dependent plasticity rules. Additionally, we make qualitative links to experimental findings and predictions for both problems studied. Finally, we speculate about future directions that could extend our findings.
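As a rough illustration of the clustered architecture described above (a sketch under assumed sizes and weights, not the thesis code), an excitatory connectivity matrix can be drawn with within-cluster connections stronger than between-cluster ones:

import numpy as np

rng = np.random.default_rng(0)
n_exc, n_clusters = 400, 8      # illustrative network size
p_conn = 0.2                    # baseline connection probability
w_in, w_out = 2.0, 0.5          # within- vs. between-cluster weight

# Assign each excitatory neuron to a cluster.
labels = np.repeat(np.arange(n_clusters), n_exc // n_clusters)
same_cluster = labels[:, None] == labels[None, :]

# Sparse random connectivity, with stronger weights inside clusters so
# that recurrent excitation reverberates within each cluster.
mask = rng.random((n_exc, n_exc)) < p_conn
W = np.where(same_cluster, w_in, w_out) * mask
np.fill_diagonal(W, 0.0)        # no self-connections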
An adapting auditory-motor feedback loop can contribute to generating vocal repetition
Consecutive repetition of actions is common in behavioral sequences. Although integration of sensory feedback with internal motor programs is important for sequence generation, whether and how feedback contributes to repetitive actions is poorly understood. Here we study how auditory feedback contributes to generating repetitive syllable sequences in songbirds. We propose that auditory signals provide positive feedback to ongoing motor commands, but that this influence decays as feedback weakens through response adaptation during syllable repetitions. Computational models show that this mechanism explains the repeat distributions observed in Bengalese finch song. We experimentally confirmed two predictions of this mechanism in Bengalese finches: removal of auditory feedback by deafening reduces syllable repetitions, and neural responses to auditory playback of repeated syllable sequences gradually adapt in the sensory-motor nucleus HVC. Together, our results implicate a positive auditory-feedback loop with adaptation in generating repetitive vocalizations, and suggest that sensory adaptation is important for feedback control of motor sequences.
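A toy version of the proposed mechanism (with an assumed parameterization for illustration; the authors' fitted models will differ) makes the probability of repeating a syllable depend on an auditory feedback signal that adapts with each repetition:

import numpy as np

rng = np.random.default_rng(1)

def sample_repeat_distribution(f0=2.0, tau_adapt=3.0, bias=-1.0,
                               n_bouts=10_000):
    """Positive feedback with response adaptation: the feedback drive f
    decays with repetition number, so long repeats become rarer."""
    counts = []
    for _ in range(n_bouts):
        n = 1
        while True:
            f = f0 * np.exp(-(n - 1) / tau_adapt)         # adapted feedback
            p_repeat = 1.0 / (1.0 + np.exp(-(bias + f)))  # logistic drive
            if rng.random() >= p_repeat:
                break
            n += 1
        counts.append(n)
    return np.bincount(counts)[1:] / n_bouts              # P(repeat count)

Setting f0 = 0 (no feedback drive) shifts this distribution toward shorter repeats, which is the qualitative effect the deafening experiment tests.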
How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation
This paper addresses two questions in the context of neuronal network dynamics, using methods from dynamical systems theory and statistical physics: (i) how to characterize the statistical properties of sequences of action potentials ("spike trains") produced by neuronal networks; and (ii) what the effects of synaptic plasticity on these statistics are. We introduce a framework in which spike trains are associated with a coding of membrane potential trajectories and, in important explicit examples (the so-called gIF models), actually constitute a symbolic coding. On this basis, we use the thermodynamic formalism from ergodic theory to show that Gibbs distributions are natural probability measures for describing the statistics of spike trains, given the empirical averages of prescribed quantities. As a second result, we show that Gibbs distributions naturally arise when considering "slow" synaptic plasticity rules, where the characteristic time for synapse adaptation is much longer than the characteristic time for neuron dynamics.
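For orientation, the Gibbs form in question can be stated schematically as the maximum-entropy distribution over spike-train blocks \omega consistent with prescribed empirical averages (a standard statement of the formalism, not the paper's precise theorem):

\mu(\omega) \;=\; \frac{1}{Z}\,\exp\!\Big(\sum_{k}\lambda_{k}\,\phi_{k}(\omega)\Big),
\qquad
Z \;=\; \sum_{\omega}\exp\!\Big(\sum_{k}\lambda_{k}\,\phi_{k}(\omega)\Big),

where the observables \phi_{k} are the prescribed quantities (for example firing rates or pairwise spike coincidences) and the Lagrange multipliers \lambda_{k} are chosen so that the model expectations \mathbb{E}_{\mu}[\phi_{k}] match the empirical averages.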
Hierarchical Models in the Brain
This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely dynamic expectation maximisation. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among apparently diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
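Schematically, the hierarchy described here stacks state-space models so that the output of level i furnishes the input to level i-1; in the standard notation for such hierarchical dynamic models (superscripts index levels),

v^{(i-1)} \;=\; g\big(x^{(i)}, v^{(i)}\big) + z^{(i)},
\qquad
\dot{x}^{(i)} \;=\; f\big(x^{(i)}, v^{(i)}\big) + w^{(i)},

where x^{(i)} are hidden states, v^{(i)} are causal states linking the levels, z^{(i)} and w^{(i)} are random fluctuations, and the data enter at the lowest level as y = v^{(0)}. With linear f and g and static states this collapses to the general linear model; nonlinear choices yield the generalised convolution models mentioned above.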
Metastable attractors explain the variable timing of stable behavioral action sequences
Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience
This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the as yet unresolved issues of the detailed relationships among power law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data, and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.
Prediction and memory: A predictive coding account
The hippocampus is crucial for episodic memory, but it is also involved in online prediction. Evidence suggests that a unitary hippocampal code underlies both episodic memory and predictive processing, yet within a predictive coding framework the hippocampal-neocortical interactions that accompany these two phenomena are distinct and opposing. Namely, during episodic recall, the hippocampus is thought to exert an excitatory influence on the neocortex, to reinstate activity patterns across cortical circuits. This contrasts with empirical and theoretical work on predictive processing, where descending predictions suppress prediction errors to ‘explain away’ ascending inputs via cortical inhibition. In this hypothesis piece, we attempt to dissolve this previously overlooked dialectic. We consider how the hippocampus may facilitate both prediction and memory, respectively, by inhibiting neocortical prediction errors or increasing their gain. We propose that these distinct processing modes depend upon the neuromodulatory gain (or precision) ascribed to prediction error units. Within this framework, memory recall is cast as arising from fictive prediction errors that furnish training signals to optimise generative models of the world, in the absence of sensory data.
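The gain mechanism invoked here can be caricatured in a few lines (an illustrative sketch, not the authors' model): the same prediction-error unit either suppresses mismatches during perception or broadcasts amplified, fictive errors during recall, depending on the precision (gain) assigned to it.

def error_unit(prediction, ascending_input, gain):
    """Caricature of a neocortical prediction-error unit: its output is
    the gain-weighted mismatch between input and descending prediction."""
    return gain * (ascending_input - prediction)

hippocampal_prediction = 1.0

# Predictive mode: descending predictions meet ascending input with low
# gain on the error unit, 'explaining away' the input via inhibition.
perceptual_error = error_unit(hippocampal_prediction, 0.8, gain=0.1)

# Mnemonic mode: gain is increased, so even without sensory data a
# fictive prediction error is broadcast as a training signal that
# reinstates cortical activity and optimises the generative model.
recall_error = error_unit(hippocampal_prediction, 0.0, gain=2.0)

print(perceptual_error, recall_error)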