
    Optimal learning rules for discrete synapses

    There is evidence that biological synapses have a limited number of discrete weight states. Memory storage with such synapses behaves quite differently from storage with unbounded, continuous weights, as old memories are automatically overwritten by new ones. Consequently, there has been substantial discussion about how this affects learning and storage capacity. In this paper, we calculate the storage capacity of discrete, bounded synapses in terms of Shannon information. We use this to optimize the learning rules and investigate how the maximum information capacity depends on the number of synapses, the number of synaptic states, and the coding sparseness. Below a certain critical number of synapses per neuron (comparable to numbers found in biology), we find that storage is similar to that of unbounded, continuous synapses. Hence, discrete synapses do not necessarily have lower storage capacity.
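    To make the overwriting effect concrete, here is a minimal sketch (not the paper's optimised rule, and with arbitrary, illustrative parameter values) in which binary, bounded synapses are stochastically overwritten by a stream of memories, and the trace of the first memory is tracked as later memories are written on top of it:

        import numpy as np

        rng = np.random.default_rng(0)
        n_syn = 10_000      # number of synapses (illustrative value)
        q = 0.1             # per-synapse update probability (illustrative)
        n_memories = 200

        w = rng.integers(0, 2, n_syn)      # binary weight states {0, 1}
        first = rng.integers(0, 2, n_syn)  # the memory whose trace we follow

        signal = []
        for t in range(n_memories):
            target = first if t == 0 else rng.integers(0, 2, n_syn)
            update = rng.random(n_syn) < q            # only a fraction of synapses switch
            w = np.where(update, target, w)
            signal.append(np.mean(w == first) - 0.5)  # overlap with the first memory

        print("signal after 1, 10 and 100 stored memories:",
              signal[0], signal[9], signal[99])

    The geometric decay of the overlap illustrates why, with bounded discrete states, old memories fade as new ones arrive rather than accumulating interference.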

    The Effect of Different Forms of Synaptic Plasticity on Pattern Recognition in the Cerebellar Cortex

    Many cerebellar learning theories assume that long-term depression (LTD) of synapses between parallel fibres (PFs) and Purkinje cells (PCs) provides the basis for pattern recognition in the cerebellum. Previous work has suggested that PCs can use a novel neural code based on the duration of silent periods. Those simulations used a simplified learning rule in which the synaptic conductance was halved each time a pattern was learned. However, experimental studies in cerebellar slices show that the synaptic conductance saturates and is rarely reduced to less than 50% of its baseline value. Moreover, the previous simulations did not include plasticity of the synapses between inhibitory interneurons and PCs. Here we study the effect of LTD saturation and inhibitory synaptic plasticity on pattern recognition in a complex PC model. We find that the PC model is very sensitive to the value at which LTD saturates, but is unaffected by inhibitory synaptic plasticity.
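    As a hedged illustration of the two update rules contrasted above (the function names and values are invented, not taken from the paper), the simplified rule halves the PF-PC conductance each time a pattern is learned, whereas the saturating rule never lets it fall below 50% of baseline:

        def ltd_halving(g, g_baseline):
            """Simplified rule: conductance halved for every learned pattern."""
            return 0.5 * g

        def ltd_saturating(g, g_baseline, floor=0.5):
            """Saturating rule: LTD cannot reduce g below floor * baseline."""
            return max(0.5 * g, floor * g_baseline)

        g0 = 1.0
        g_simple, g_sat = g0, g0
        for _ in range(5):
            g_simple = ltd_halving(g_simple, g0)
            g_sat = ltd_saturating(g_sat, g0)
        print(g_simple, g_sat)   # 0.03125 vs 0.5 after five learned patterns

    After only a handful of learned patterns the two rules give very different conductances, which is why the model's sensitivity to the saturation value matters.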

    Binary Willshaw learning yields high synaptic capacity for long-term familiarity memory

    We investigate from a computational perspective the efficiency of the Willshaw synaptic update rule in the context of familiarity discrimination, a binary-answer, memory-related task that has been linked through psychophysical experiments with modified neural activity patterns in the prefrontal and perirhinal cortex regions. Our motivation for revisiting this well-known learning prescription is twofold: first, the switch-like nature of the induced synaptic bonds, given evidence that biological synaptic transitions may occur in a discrete, stepwise fashion; second, the possibility that in the mammalian brain unused, silent synapses might be pruned in the long term. Besides the usual pattern and network capacities, we calculate the synaptic capacity of the model, a recently proposed measure in which only the functional subset of synapses is taken into account. We find that in terms of network capacity, Willshaw learning is strongly affected by the pattern coding rates, which have to be kept fixed and very low at all times to achieve a non-zero capacity in the large-network limit. The information carried per functional synapse, however, diverges and is comparable to that of the pattern association case, even for more realistic, moderately low activity levels that are a function of network size.
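    A small sketch of the binary Willshaw rule in a familiarity setting may help; the network size, coding rate and the use of a quadratic "energy" as the familiarity score are illustrative choices, not the paper's exact protocol:

        import numpy as np

        rng = np.random.default_rng(1)
        n, p, k = 200, 30, 10          # neurons, stored patterns, active units per pattern

        def sparse_pattern():
            x = np.zeros(n, dtype=int)
            x[rng.choice(n, k, replace=False)] = 1
            return x

        stored = [sparse_pattern() for _ in range(p)]

        W = np.zeros((n, n), dtype=int)
        for x in stored:
            W |= np.outer(x, x)        # Willshaw rule: clipped Hebbian (OR of outer products)

        def energy(x):
            return x @ W @ x           # high for familiar (stored) patterns

        familiar = np.mean([energy(x) for x in stored])
        novel = np.mean([energy(sparse_pattern()) for _ in range(p)])
        print("mean energy, familiar vs novel:", familiar, novel)

    Because the update only ever switches synapses on, the separation between familiar and novel probes depends strongly on how sparsely the patterns are coded, which is the regime the abstract analyses.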

    Nonspecific synaptic plasticity improves the recognition of sparse patterns degraded by local noise

    Many forms of synaptic plasticity require the local production of volatile or rapidly diffusing substances such as nitric oxide. The nonspecific plasticity these neuromodulators may induce at neighboring, non-active synapses is thought to be detrimental to the specificity of memory storage. We show here that memory retrieval may benefit from this nonspecific plasticity when the applied sparse binary input patterns are degraded by local noise. Simulations of a biophysically realistic model of a cerebellar Purkinje cell in a pattern recognition task show that, in the absence of noise, leakage of plasticity to adjacent synapses degrades the recognition of sparse static patterns. However, above a local noise level of 20%, the model with nonspecific plasticity outperforms the standard, specific model. The gain in performance is greatest when the spatial distribution of noise in the input matches the range of diffusion-induced plasticity. Hence nonspecific plasticity may offer a benefit in noisy environments or when the pressure to generalize is strong.
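    The leakage mechanism can be sketched as follows; the Gaussian profile, kernel width and depression amount are assumptions for illustration, not the parameters of the biophysical Purkinje cell model:

        import numpy as np

        n_syn = 500
        sigma = 3.0                    # assumed spatial range of diffusion-induced plasticity
        positions = np.arange(n_syn)

        def depress_nonspecific(w, active_idx, amount=0.2):
            """Depress each active synapse and, more weakly, its spatial neighbours."""
            for i in active_idx:
                kernel = np.exp(-0.5 * ((positions - i) / sigma) ** 2)
                w = w * (1.0 - amount * kernel)
            return np.clip(w, 0.0, None)

        w = np.ones(n_syn)
        pattern = np.random.default_rng(2).choice(n_syn, 25, replace=False)  # sparse input
        w = depress_nonspecific(w, pattern)
        print("synapses depressed below 90% of baseline:", int(np.sum(w < 0.9)))

    Smearing depression over neighbouring synapses is exactly what hurts recognition of clean sparse patterns but, as the abstract argues, can help when the input itself is corrupted by spatially local noise.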

    Learning flexible sensori-motor mappings in a complex network

    Given the complex structure of the brain, how can synaptic plasticity explain the learning and forgetting of associations when these are continuously changing? We address this question by studying different reinforcement learning rules in a multilayer network in order to reproduce monkey behavior in a visuomotor association task. Our model can reproduce the learning performance of the monkey only if the synaptic modifications depend on the pre- and postsynaptic activity, and if the intrinsic level of stochasticity is low. This favored learning rule is based on reward-modulated Hebbian synaptic plasticity and has the interesting feature that the learning performance does not substantially degrade when layers are added to the network, even for a complex problem.
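    A minimal sketch of a reward-modulated Hebbian rule of the kind favored above; the variable names, learning rate and softmax temperature are assumptions, and the task is reduced to a toy cue-action mapping rather than the multilayer network of the paper:

        import numpy as np

        rng = np.random.default_rng(3)
        n_in, n_out = 20, 4
        W = 0.01 * rng.standard_normal((n_out, n_in))
        eta, temperature, reward_avg = 0.05, 0.5, 0.0   # low temperature = low stochasticity

        for trial in range(2000):
            cue = rng.integers(0, n_out)                 # which visual cue is shown
            pre = np.zeros(n_in)
            pre[cue * 5:(cue + 1) * 5] = 1.0             # population coding of the cue
            logits = W @ pre
            probs = np.exp((logits - logits.max()) / temperature)
            probs /= probs.sum()
            action = rng.choice(n_out, p=probs)
            reward = 1.0 if action == cue else 0.0       # correct association is rewarded
            post = np.zeros(n_out)
            post[action] = 1.0
            # Hebbian term (pre x post) gated by reward relative to its running average
            W += eta * (reward - reward_avg) * np.outer(post, pre)
            reward_avg += 0.05 * (reward - reward_avg)

        print("running estimate of reward rate:", round(reward_avg, 2))

    Raising the temperature (more intrinsic stochasticity) makes action selection noisier and, in line with the abstract's result, degrades how reliably the associations are acquired.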

    How much of the Hippocampus can be Explained by Functional Constraints?

    In the spirit of Marr, we discuss an information-theoretic approach that derives, from the role of the hippocampus in memory, constraints on its anatomical and physiological structure. The observed structure is consistent with such constraints and, further, we relate the quantitative arguments developed in earlier analytical studies to experimental measures extracted from neuronal recordings in the behaving rat.

    The Importance of Forgetting: Limiting Memory Improves Recovery of Topological Characteristics from Neural Data

    We develop a line of work initiated by Curto and Itskov towards understanding the amount of information contained in the spike trains of hippocampal place cells via topological considerations. Previously, it was established that simply knowing which groups of place cells fire together in an animal's hippocampus is sufficient to extract the global topology of the animal's physical environment. We model a system in which collections of place cells group and ungroup according to short-term plasticity rules. In particular, we obtain the surprising result that, in experiments with spurious firing, the accuracy of the extracted topological information decreases with the persistence of the cell groups beyond a certain regime. This suggests that synaptic transience, or forgetting, is a mechanism by which the brain counteracts the effects of spurious place cell activity.
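    The forgetting mechanism can be caricatured in a few lines; the decay constant, threshold and spike statistics are purely illustrative, and the actual topological (homology) computation of the paper is not reproduced here:

        import itertools
        import numpy as np

        rng = np.random.default_rng(4)
        n_cells, n_steps = 12, 2000
        decay, gain, threshold = 0.995, 1.0, 5.0

        strength = np.zeros((n_cells, n_cells))    # pairwise co-firing strengths
        for t in range(n_steps):
            # cells with overlapping place fields fire together as the animal moves...
            centre = (t // 50) % n_cells
            active = {centre, (centre + 1) % n_cells}
            # ...plus an occasional spurious coincident spike
            if rng.random() < 0.02:
                active.add(int(rng.integers(0, n_cells)))
            for i, j in itertools.combinations(sorted(active), 2):
                strength[i, j] += gain
            strength *= decay                      # forgetting: cell groups fade unless reinforced

        groups = [(i, j) for i in range(n_cells) for j in range(i + 1, n_cells)
                  if strength[i, j] > threshold]
        print("surviving cell groups (pairs):", groups)

    Without the decay term, spurious pairs would persist indefinitely and distort the co-firing structure from which the topology is extracted; with it, only cell groups repeatedly reinforced by genuinely overlapping place fields stay above threshold, while one-off spurious groups fade.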

    Structural Plasticity and Associative Memory in Balanced Neural Networks With Spike-Time Dependent Inhibitory Plasticity

    Several homeostatic mechanisms enable the brain to maintain desired levels of neuronal activity. One of these, homeostatic structural plasticity, has been reported to restore activity in networks disrupted by peripheral lesions by altering their neuronal connectivity. While multiple lesion experiments have studied the changes in neurite morphology that underlie modifications of synapses in these networks, the mechanisms that drive these changes and the effects of the altered connectivity on network function are yet to be explained. Experimental evidence suggests that neuronal activity modulates neurite morphology and may stimulate neurites to selectively sprout or retract to restore network activity levels. In this study, a new spiking network model was developed to investigate these activity-dependent growth regimes of neurites. Simulations of the model accurately reproduce network rewiring after peripheral lesions as reported in experiments. To ensure that these simulations closely resembled the behaviour of networks in the brain, a biologically realistic network model exhibiting the low-frequency asynchronous irregular (AI) activity observed in cerebral cortex was deafferented. Furthermore, to study the functional effects of peripheral lesioning and subsequent network repair by homeostatic structural plasticity, associative memories were stored in the network and their recall performance was compared before deafferentation and afterwards, during the repair process. The simulation results indicate that re-establishing activity in neurons both within and outside the deprived region, the Lesion Projection Zone (LPZ), requires opposite activity-dependent growth rules for excitatory and inhibitory postsynaptic elements. Analysis of these growth regimes indicates that they also contribute to the maintenance of activity levels in individual neurons. In this model, the directional formation of synapses observed in experiments requires that presynaptic excitatory and inhibitory elements also follow opposite growth rules. Furthermore, both the proposed model of homeostatic structural plasticity and the inhibitory synaptic plasticity mechanism that balances the AI network are necessary for successful rewiring. Next, even though average activity was restored to deprived neurons, these neurons did not regain their AI firing characteristics after repair. Finally, the recall performance of associative memories, which deteriorated after deafferentation, was not restored after network reorganisation.
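    As a hedged sketch of the kind of activity-dependent growth rule described above (the linear form, rates and set point are assumptions for illustration, not the study's actual growth curves), excitatory and inhibitory postsynaptic elements respond to the same activity deviation with opposite signs:

        def growth_rate(activity, target=5.0, nu=0.1, element="excitatory_post"):
            """Return dz/dt for one type of synaptic element on one neuron."""
            deviation = target - activity
            if element == "excitatory_post":
                # too little activity -> grow excitatory postsynaptic elements
                return nu * deviation
            if element == "inhibitory_post":
                # opposite rule: too little activity -> retract inhibitory elements
                return -nu * deviation
            raise ValueError(element)

        # A deprived neuron (1 Hz, below the 5 Hz set point) sprouts excitatory
        # elements and retracts inhibitory ones, pushing its activity back up.
        print(growth_rate(1.0, element="excitatory_post"),   #  0.4
              growth_rate(1.0, element="inhibitory_post"))   # -0.4

    The abstract's model additionally applies opposite rules to presynaptic excitatory and inhibitory elements; this one-neuron sketch only shows why the two postsynaptic element types need opposite signs for activity in the deprived region to recover.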