130 research outputs found

    Measuring symmetry, asymmetry and randomness in neural network connectivity

    Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity-dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single-parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution, and that introducing random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists who investigate the symmetry of network connectivity.
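
    The abstract does not spell out the measure itself, so the sketch below is only a hypothetical illustration of a thresholding-free symmetry index: it splits a weight matrix into symmetric and antisymmetric parts and compares the observed index with a Monte Carlo null distribution obtained from random uniform matrices, in the spirit of the confidence measure described above. The function names and all parameter values are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def symmetry_index(W):
    """Hypothetical thresholding-free symmetry index in [0, 1].

    Splits the off-diagonal weights into symmetric and antisymmetric parts;
    1 means W equals its transpose, 0 means a purely antisymmetric pairing.
    """
    W = np.asarray(W, dtype=float)
    off = ~np.eye(W.shape[0], dtype=bool)        # ignore self-connections
    S = 0.5 * (W + W.T)[off]                     # symmetric part
    A = 0.5 * (W - W.T)[off]                     # antisymmetric part
    return np.sum(S**2) / (np.sum(S**2) + np.sum(A**2))

def symmetry_confidence(W, n_null=1000, seed=None):
    """Monte Carlo estimate of how (a)typical the observed index is under a
    random uniform weight matrix of the same size (two-sided tail probability)."""
    rng = np.random.default_rng(seed)
    observed = symmetry_index(W)
    n = W.shape[0]
    null = np.array([symmetry_index(rng.uniform(size=(n, n)))
                     for _ in range(n_null)])
    p = 2 * min((null >= observed).mean(), (null <= observed).mean())
    return observed, p

# Example: a random uniform matrix versus its explicitly symmetrised version.
rng = np.random.default_rng(0)
W = rng.uniform(size=(50, 50))
print(symmetry_confidence(W, seed=1))                # index close to the null mean
print(symmetry_confidence(0.5 * (W + W.T), seed=1))  # index 1.0, p close to 0
```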

    Emergence of Connectivity Motifs in Networks of Model Neurons with Short- and Long-term Plastic Synapses

    Recent evidence in rodent cerebral cortex and olfactory bulb suggests that the short-term dynamics of excitatory synaptic transmission is correlated with stereotypical connectivity motifs. It was observed that neurons with short-term facilitating synapses form predominantly reciprocal pairwise connections, while neurons with short-term depressing synapses form unidirectional pairwise connections. The cause of these structural differences in synaptic microcircuits is unknown. We propose that these connectivity motifs emerge from the interactions between short-term synaptic dynamics (SD) and long-term spike-timing dependent plasticity (STDP). While the impact of STDP on SD was shown in vitro, the mutual interactions between STDP and SD in large networks are still the subject of intense research. We formulate a computational model by combining SD and STDP, which faithfully captures short- and long-term dependence on both spike times and frequency. As a proof of concept, we simulate recurrent networks of spiking neurons with random initial connection efficacies, where synapses are either all short-term facilitating or all depressing. For identical background inputs, and as a direct consequence of internally generated activity, we find that networks with depressing synapses evolve unidirectional connectivity motifs, while networks with facilitating synapses evolve reciprocal connectivity motifs. This holds for heterogeneous networks including both facilitating and depressing synapses. Our study highlights the conditions under which SD-STDP might account for the correlation between facilitation and reciprocal connectivity motifs, as well as between depression and unidirectional motifs. We further suggest experiments for the validation of the proposed mechanism.
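
    As a rough, single-synapse illustration of coupling the two mechanisms, the sketch below combines Tsodyks-Markram-style short-term dynamics (resource and facilitation variables) with a pair-based STDP rule in which the depression term is scaled by the amount of released resources. This is a minimal sketch under stated assumptions, not the network model of the paper; the parameter values and the choice to gate depression by release are illustrative.

```python
import numpy as np

# One plastic synapse combining Tsodyks-Markram short-term dynamics with a
# pair-based STDP rule.  Parameters describe a facilitating-type synapse and
# are illustrative only.
dt = 1e-4                              # integration step (s)
tau_rec, tau_facil, U = 0.2, 0.6, 0.1  # recovery, facilitation, baseline release
tau_pre, tau_post = 20e-3, 20e-3       # STDP trace time constants (s)
A_plus, A_minus = 0.010, 0.012         # STDP amplitudes

x, u = 1.0, U                          # available resources, release probability
pre_trace, post_trace = 0.0, 0.0       # exponentially decaying spike traces
w = 0.5                                # long-term efficacy (dimensionless)

def step(pre_spike, post_spike):
    """Advance all state variables by one time step of length dt."""
    global x, u, pre_trace, post_trace, w
    # short-term dynamics: resources recover, facilitation decays to baseline
    x += dt * (1.0 - x) / tau_rec
    u += dt * (U - u) / tau_facil
    # STDP traces decay exponentially
    pre_trace *= np.exp(-dt / tau_pre)
    post_trace *= np.exp(-dt / tau_post)
    if pre_spike:
        u += U * (1.0 - u)                       # facilitation jump
        release = u * x                          # fraction of resources used
        x -= release                             # depletion
        pre_trace += 1.0
        w -= A_minus * post_trace * release      # depression (pre after post)
    if post_spike:
        post_trace += 1.0
        w += A_plus * pre_trace                  # potentiation (pre before post)
    w = float(np.clip(w, 0.0, 1.0))
```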

    Nonspecific synaptic plasticity improves the recognition of sparse patterns degraded by local noise

    Many forms of synaptic plasticity require the local production of volatile or rapidly diffusing substances such as nitric oxide. The nonspecific plasticity these neuromodulators may induce at neighboring non-active synapses is thought to be detrimental for the specificity of memory storage. We show here that memory retrieval may benefit from this non-specific plasticity when the applied sparse binary input patterns are degraded by local noise. Simulations of a biophysically realistic model of a cerebellar Purkinje cell in a pattern recognition task show that, in the absence of noise, leakage of plasticity to adjacent synapses degrades the recognition of sparse static patterns. However, above a local noise level of 20%, the model with nonspecific plasticity outperforms the standard, specific model. The gain in performance is greatest when the spatial distribution of noise in the input matches the range of diffusion-induced plasticity. Hence non-specific plasticity may offer a benefit in noisy environments or when the pressure to generalize is strong.
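
    A toy version of the core manipulation can be written in a few lines: depression induced at a sparse set of active synapses either stays synapse-specific or, with a Gaussian spatial kernel standing in for diffusion of the plasticity-inducing substance, also depresses neighbouring synapses. The Purkinje-cell biophysics, the recognition task and the numbers are not reproduced here; the LTD-only rule, the function `depress` and the kernel width are assumptions.

```python
import numpy as np

n_syn = 1000
positions = np.arange(n_syn)             # synapse positions along the dendrite
w0 = np.ones(n_syn)                      # initial weights

def depress(w, active, amount=0.1, sigma=0.0):
    """Depress the active synapses; if sigma > 0, spread the depression to
    neighbours with a Gaussian profile of width sigma (in synapse indices)."""
    if sigma > 0:
        d = positions[:, None] - positions[active][None, :]
        profile = np.exp(-0.5 * (d / sigma) ** 2).sum(axis=1)
        induced = amount * profile / profile.max()    # peak depression = amount
    else:
        induced = np.zeros_like(w)
        induced[active] = amount
    return np.clip(w - induced, 0.0, None)

# A sparse stored pattern activates 2% of the synapses.
rng = np.random.default_rng(0)
active = rng.choice(n_syn, size=20, replace=False)
w_specific = depress(w0, active, sigma=0.0)       # standard, synapse-specific LTD
w_nonspecific = depress(w0, active, sigma=3.0)    # diffusion-like spread of LTD
```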

    The emergence of functional microcircuits in visual cortex.

    Sensory processing occurs in neocortical microcircuits in which synaptic connectivity is highly structured and excitatory neurons form subnetworks that process related sensory information. However, the developmental mechanisms underlying the formation of functionally organized connectivity in cortical microcircuits remain unknown. Here we directly relate patterns of excitatory synaptic connectivity to visual response properties of neighbouring layer 2/3 pyramidal neurons in mouse visual cortex at different postnatal ages, using two-photon calcium imaging in vivo and multiple whole-cell recordings in vitro. Although neural responses were already highly selective for visual stimuli at eye opening, neurons responding to similar visual features were not yet preferentially connected, indicating that the emergence of feature selectivity does not depend on the precise arrangement of local synaptic connections. After eye opening, local connectivity reorganized extensively: more connections formed selectively between neurons with similar visual responses and connections were eliminated between visually unresponsive neurons, but the overall connectivity rate did not change. We propose a sequential model of cortical microcircuit development based on activity-dependent mechanisms of plasticity whereby neurons first acquire feature preference by selecting feedforward inputs before the onset of sensory experience--a process that may be facilitated by early electrical coupling between neuronal subsets--and then patterned input drives the formation of functional subnetworks through a redistribution of recurrent synaptic connections.

    Dual coding with STDP in a spiking recurrent neural network model of the hippocampus.

    The firing rate of single neurons in the mammalian hippocampus has been demonstrated to encode a range of spatial and non-spatial stimuli. It has also been demonstrated that the phase of firing, with respect to the theta oscillation that dominates the hippocampal EEG during stereotypical learning behaviour, correlates with an animal's spatial location. These findings have led to the hypothesis that the hippocampus operates using a dual (rate and temporal) coding system. To investigate the phenomenon of dual coding in the hippocampus, we examine a spiking recurrent network model with theta-coded neural dynamics and an STDP rule that mediates rate-coded Hebbian learning when pre- and post-synaptic firing is stochastic. We demonstrate that this plasticity rule can generate both symmetric and asymmetric connections between neurons that fire at concurrent or successive theta phase, respectively, and subsequently produce both pattern completion and sequence prediction from partial cues. This unifies previously disparate auto- and hetero-associative network models of hippocampal function and provides them with a firmer basis in modern neurobiology. Furthermore, the encoding and reactivation of activity in mutually exciting Hebbian cell assemblies demonstrated here is believed to represent a fundamental mechanism of cognitive processing in the brain.
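
    To make the symmetric-versus-asymmetric intuition concrete, the sketch below applies an ordinary potentiation-dominated, asymmetric pair-based STDP kernel to two neurons that each fire once per theta cycle with some jitter: concurrent phases strengthen both directions on average, while successive phases mainly strengthen the leading-to-lagging direction. This is not the paper's plasticity rule (which obtains both motifs from a single rule under stochastic firing); the kernel shape, jitter and parameter values are illustrative assumptions.

```python
import numpy as np

tau_plus, tau_minus = 20e-3, 20e-3     # STDP time constants (s)
A_plus, A_minus = 1.0, 0.5             # potentiation-dominated kernel
theta_period = 125e-3                  # 8 Hz theta cycle

def stdp(delta_t):
    """Weight change for one pre-post spike pair, delta_t = t_post - t_pre."""
    if delta_t >= 0:
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)

def pair_weights(phase_a, phase_b, n_cycles=100, jitter=5e-3, seed=0):
    """Accumulated changes w_ab (a->b) and w_ba for two neurons firing once
    per theta cycle at the given phases (radians), with spike-time jitter."""
    rng = np.random.default_rng(seed)
    w_ab = w_ba = 0.0
    for c in range(n_cycles):
        t_a = (c + phase_a / (2 * np.pi)) * theta_period + rng.normal(0, jitter)
        t_b = (c + phase_b / (2 * np.pi)) * theta_period + rng.normal(0, jitter)
        w_ab += stdp(t_b - t_a)        # pre = a, post = b
        w_ba += stdp(t_a - t_b)        # pre = b, post = a
    return w_ab, w_ba

print(pair_weights(0.0, 0.0))          # concurrent phase: both directions grow
print(pair_weights(0.0, np.pi / 4))    # successive phases: mainly a->b grows
```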

    Soft-bound synaptic plasticity increases storage capacity

    Accurate models of synaptic plasticity are essential to understand the adaptive properties of the nervous system and for realistic models of learning and memory. Experiments have shown that synaptic plasticity depends not only on pre- and post-synaptic activity patterns, but also on the strength of the connection itself. Namely, weaker synapses are more easily strengthened than already strong ones. This so-called soft-bound plasticity automatically constrains the synaptic strengths. It is known that this has important consequences for the dynamics of plasticity and the synaptic weight distribution, but its impact on information storage is unknown. In this modeling study we introduce an information-theoretic framework to analyse memory storage in an online learning setting. We show that soft-bound plasticity increases a variety of performance criteria by about 18% over hard-bound plasticity, and likely maximizes the storage capacity of synapses.
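
    A minimal sketch of the distinction, using the standard textbook formulation: hard-bound updates take fixed-size steps and are clipped at the bounds, whereas soft-bound updates scale each step with the distance to the relevant bound, so weak synapses are strengthened more easily than strong ones. The learning rate, bounds and the toy potentiation loop are illustrative, not the learning rule analysed in the paper.

```python
import numpy as np

w_min, w_max = 0.0, 1.0
eta = 0.05

def update_hard_bound(w, potentiate):
    """Fixed-size steps, clipped at the bounds."""
    dw = eta if potentiate else -eta
    return float(np.clip(w + dw, w_min, w_max))

def update_soft_bound(w, potentiate):
    """Step size shrinks as the weight approaches the relevant bound:
    weak synapses are easy to strengthen, strong ones are not."""
    if potentiate:
        dw = eta * (w_max - w)
    else:
        dw = -eta * (w - w_min)
    return w + dw

# Repeated potentiation saturates quickly with hard bounds but approaches
# w_max only asymptotically with soft bounds.
w_hard = w_soft = 0.2
for _ in range(30):
    w_hard = update_hard_bound(w_hard, potentiate=True)
    w_soft = update_soft_bound(w_soft, potentiate=True)
print(w_hard, w_soft)   # 1.0 versus roughly 0.83 after 30 potentiation events
```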

    Gradient estimation in dendritic reinforcement learning

    We study synaptic plasticity in a complex neuronal cell model where NMDA-spikes can arise in certain dendritic zones. In the context of reinforcement learning, two kinds of plasticity rules are derived, zone reinforcement (ZR) and cell reinforcement (CR), which both optimize the expected reward by stochastic gradient ascent. For ZR, the synaptic plasticity response to the external reward signal is modulated exclusively by quantities which are local to the NMDA-spike initiation zone in which the synapse is situated. CR, in addition, uses nonlocal feedback from the soma of the cell, provided by mechanisms such as the backpropagating action potential. Simulation results show that, compared to ZR, the use of nonlocal feedback in CR can drastically enhance learning performance. We suggest that the availability of nonlocal feedback for learning is a key advantage of complex neurons over networks of simple point neurons, which have previously been found to be largely equivalent with regard to computational capability.
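
    The local-versus-nonlocal distinction can be caricatured with a REINFORCE-style score-function estimator for stochastic NMDA-spike zones. In the sketch below, the 'ZR' update modulates reward only by zone-local terms, while the 'CR' update additionally gates it by a somatic feedback factor; how that feedback enters in the actual cell model is not reproduced here, so the somatic gating, the toy cell and all parameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_zones, n_syn = 5, 20
W = rng.normal(0.0, 0.1, size=(n_zones, n_syn))   # synapses grouped by dendritic zone
eta = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def trial(x):
    """One stochastic trial: NMDA spike per zone, somatic spike, binary reward."""
    p_zone = sigmoid(W @ x)                           # NMDA-spike probability per zone
    zone_spike = (rng.random(n_zones) < p_zone).astype(float)
    p_soma = sigmoid(zone_spike.sum() - n_zones / 2)
    soma_spike = float(rng.random() < p_soma)
    reward = soma_spike                               # toy task: reward somatic spikes
    return p_zone, zone_spike, p_soma, soma_spike, reward

def zr_update(x):
    """Zone reinforcement: reward modulated only by zone-local quantities."""
    p_zone, zone_spike, _, _, reward = trial(x)
    elig = (zone_spike - p_zone)[:, None] * x[None, :]   # local score-function term
    return eta * reward * elig

def cr_update(x):
    """Cell reinforcement: same local term, additionally gated by somatic feedback."""
    p_zone, zone_spike, p_soma, soma_spike, reward = trial(x)
    elig = (zone_spike - p_zone)[:, None] * x[None, :]
    return eta * reward * (soma_spike - p_soma) * elig   # nonlocal somatic factor

x = rng.random(n_syn)          # one input pattern
W += cr_update(x)              # apply one CR step (use zr_update for ZR)
```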

    Storage of Correlated Patterns in Standard and Bistable Purkinje Cell Models

    The cerebellum has long been considered to undergo supervised learning, with climbing fibers acting as a ‘teaching’ or ‘error’ signal. Purkinje cells (PCs), the sole output of the cerebellar cortex, have been considered as analogs of perceptrons storing input/output associations. In support of this hypothesis, a recent study found that the distribution of synaptic weights of a perceptron at maximal capacity is in striking agreement with experimental data in adult rats. However, the calculation was performed using random uncorrelated inputs and outputs. This is a clearly unrealistic assumption since sensory inputs and motor outputs carry a substantial degree of temporal correlations. In this paper, we consider a binary output neuron with a large number of inputs, which is required to store associations between temporally correlated sequences of binary inputs and outputs, modelled as Markov chains. Storage capacity is found to increase with both input and output correlations, and diverges in the limit where both go to unity. We also investigate the capacity of a bistable output unit, since PCs have been shown to be bistable in some experimental conditions. Bistability is shown to enhance storage capacity whenever the output correlation is stronger than the input correlation. The distribution of synaptic weights at maximal capacity is shown to be independent of correlations, and is also unaffected by the presence of bistability.
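
    The storage problem described above lends itself to a small numerical sketch: each input line and the output are generated as two-state Markov chains whose persistence parameter sets the temporal correlation, and the classic perceptron learning rule is run until all associations are stored or an epoch limit is reached. The persistence parameterisation, network size and stopping criterion are illustrative assumptions; the paper's capacity results are analytical and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def markov_sequence(length, stay=0.8):
    """Binary (+/-1) sequence where each symbol repeats the previous one with
    probability `stay`; stay = 0.5 gives an uncorrelated sequence."""
    seq = np.empty(length)
    seq[0] = rng.choice([-1.0, 1.0])
    for t in range(1, length):
        seq[t] = seq[t - 1] if rng.random() < stay else -seq[t - 1]
    return seq

n_inputs, n_patterns = 200, 150
stay_in, stay_out = 0.7, 0.7
X = np.stack([markov_sequence(n_patterns, stay_in) for _ in range(n_inputs)], axis=1)
y = markov_sequence(n_patterns, stay_out)

# Perceptron learning rule: can all input/output associations be stored?
w, b = np.zeros(n_inputs), 0.0
for epoch in range(500):
    errors = 0
    for x_t, y_t in zip(X, y):
        if y_t * (w @ x_t + b) <= 0:     # misclassified pattern
            w += y_t * x_t
            b += y_t
            errors += 1
    if errors == 0:
        break
print("all patterns stored" if errors == 0 else f"{errors} errors remain")
```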

    Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process

    Stochastic leaky integrate-and-fire models are popular due to their simplicity and statistical tractability. They have been widely applied to gain understanding of the underlying mechanisms for spike timing in neurons, and have served as building blocks for more elaborate models. The Ornstein–Uhlenbeck process in particular is popular for describing the stochastic fluctuations in the membrane potential of a neuron, but other models like the square-root model or models with a non-linear drift are also sometimes applied. Data that can be described by such models have to be stationary, and thus the simple models can only be applied over short time windows. However, experimental data show varying time constants, state-dependent noise, a graded firing threshold and time-inhomogeneous input. In the present study we build a jump diffusion model that incorporates these features, and introduce a firing mechanism with a state-dependent intensity. In addition, we suggest statistical methods to estimate all unknown quantities and apply these to analyze turtle motoneuron membrane potentials. Finally, simulated and real data are compared and discussed. We find that a square-root diffusion describes the data much better than an Ornstein–Uhlenbeck process with constant diffusion coefficient. Further, the membrane time constant decreases with increasing depolarization, as expected from the increase in synaptic conductance. The network activity which the neuron is exposed to can be reasonably estimated to be a threshold version of the nerve output from the network. Moreover, the spiking characteristics are well described by a Poisson spike train with an intensity depending exponentially on the membrane potential.
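
    The ingredients listed in the abstract can be illustrated with a short Euler-Maruyama simulation: a square-root (state-dependent) diffusion term, synaptic input arriving as Poisson-distributed jumps, and an escape-rate firing mechanism whose intensity grows exponentially with the membrane potential. The drift, the reset rule and every parameter value below are illustrative assumptions, not the model fitted to the turtle motoneuron data.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-4                        # integration step (s)
T = 2.0                          # total simulated time (s)
V_rest, V_bound = -65.0, -80.0   # resting potential, lower bound for sqrt term (mV)
tau = 20e-3                      # membrane time constant (s)
sigma = 0.6                      # diffusion scale
jump_rate, jump_size = 500.0, 1.0    # Poisson input rate (Hz) and jump amplitude (mV)
lam0, V_half, dV = 2.0, -55.0, 4.0   # firing-intensity parameters

n = int(T / dt)
V = np.empty(n)
V[0] = V_rest
spikes = []
for t in range(1, n):
    drift = -(V[t - 1] - V_rest) / tau
    diffusion = sigma * np.sqrt(max(V[t - 1] - V_bound, 0.0)) * rng.normal(0, np.sqrt(dt))
    n_jumps = rng.poisson(jump_rate * dt)        # excitatory synaptic events
    V[t] = V[t - 1] + drift * dt + diffusion + n_jumps * jump_size
    rate = lam0 * np.exp((V[t] - V_half) / dV)   # escape-rate firing intensity
    if rng.random() < 1.0 - np.exp(-rate * dt):  # spike with this probability
        spikes.append(t * dt)
        V[t] = V_rest                            # reset after a spike
print(f"{len(spikes)} spikes in {T} s")
```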

    Spatio-Temporal Credit Assignment in Neuronal Population Learning

    In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate-and-fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme with that of temporal difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.
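
    At the level of a single synapse, the cascade can be caricatured as a chain of traces with increasing time constants: a fast trace of pre-post coincidences, a slower trace that gates this signal by the behavioural decision, and a weight change that is released only when reinforcement (possibly much later) arrives. The class name, time constants and exact correlation terms below are assumptions for illustration, not the paper's equations.

```python
# Single-synapse caricature of a plasticity cascade for delayed reinforcement.
DT = 1e-3                      # simulation step (s)
TAU1, TAU2 = 50e-3, 2.0        # fast and slow trace time constants (s)
ETA = 0.01                     # learning rate

class SynapseCascade:
    def __init__(self, w=0.5):
        self.e1 = 0.0          # stage 1: pre-post coincidence trace
        self.e2 = 0.0          # stage 2: decision-gated trace
        self.w = w             # synaptic weight

    def step(self, pre, post, decision, reward):
        """pre/post: 0/1 spike indicators; decision: +1/-1 at decision times,
        0 otherwise; reward: scalar reinforcement, possibly delivered late."""
        self.e1 += -DT * self.e1 / TAU1 + pre * post
        self.e2 += -DT * self.e2 / TAU2 + self.e1 * decision
        self.w += ETA * reward * self.e2
```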