5 research outputs found

    Probabilistic versus incremental presynaptic learning in biologically plausible synapses

    In this paper, the presynaptic rule, a classical rule for Hebbian learning, is revisited. It is shown that the presynaptic rule exhibits relevant synaptic properties such as synaptic directionality and LTP metaplasticity (metaplasticity of the long-term potentiation threshold). With slight modifications, the presynaptic model also exhibits metaplasticity of the long-term depression threshold, making it consistent with Artola, Brocher and Singer’s (ABS) influential model. Two asymptotically equivalent versions of the presynaptic rule were adopted for this analysis: the first uses an incremental equation, while the second uses conditional probabilities. Despite their simplicity, both types of presynaptic rules exhibit sophisticated biological properties, especially the probabilistic version.
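    As a rough illustration of the two formulations mentioned in the abstract, the sketch below contrasts an incremental presynaptic update, which changes the weight only when the presynaptic unit is active, with a direct conditional-probability estimate; both converge to roughly the same value. The learning rate, the activity statistics, and the exact form of the update are assumptions for illustration, not the paper's equations.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical binary pre/post activity with fixed statistics:
        # P(pre = 1) = 0.5 and P(post = 1 | pre = 1) = 0.8 (illustrative values).
        pre = rng.random(20000) < 0.5
        post = np.where(pre, rng.random(pre.size) < 0.8, rng.random(pre.size) < 0.2)

        # Incremental version: the weight moves only when the presynaptic
        # unit is active, relaxing toward the postsynaptic value.
        w, lr = 0.0, 0.01
        for x, y in zip(pre, post):
            if x:
                w += lr * (y - w)

        # Probabilistic version: estimate the conditional probability directly.
        w_prob = post[pre].mean()          # ~ P(post = 1 | pre = 1)

        print(f"incremental w = {w:.3f}, probabilistic w = {w_prob:.3f}")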

    Benchmarking Hebbian learning rules for associative memory

    Associative memory, or content-addressable memory, is an important component function in computer science and information processing and is a key concept in cognitive and computational brain science. Many different neural network architectures and learning rules have been proposed to model the associative memory of the brain while investigating key functions like pattern completion and rivalry, noise reduction, and storage capacity. A less investigated but important function is prototype extraction, where the training set comprises pattern instances generated by distorting prototype patterns and the task of the trained network is to recall the correct prototype pattern given a new instance. In this paper we characterize these different aspects of associative memory performance and benchmark six different learning rules on storage capacity and prototype extraction. We consider only models with Hebbian plasticity that operate on sparse distributed representations with unit activities in the interval [0, 1]. We evaluate both non-modular and modular network architectures and compare performance when trained and tested on different kinds of sparse random binary pattern sets, including correlated ones. We show that covariance learning has a robust but low storage capacity under these conditions and that the Bayesian Confidence Propagation learning rule (BCPNN) is superior by a good margin in all cases except one, reaching a three times higher composite score than the second-best learning rule tested. Comment: 24 pages, 9 figures.
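    For readers unfamiliar with the kind of setup being benchmarked, the following minimal sketch shows one plausible version of Hebbian covariance learning for pattern completion in a recurrent associative memory with sparse binary patterns. The network size, activity level, k-winners-take-all recall scheme, and all parameter values are assumptions made for illustration; they are not the paper's benchmark protocol, which compares six rules including BCPNN.

        import numpy as np

        rng = np.random.default_rng(1)
        N, M, a = 200, 10, 0.1                  # units, stored patterns, activity level

        # Sparse random binary patterns (roughly a fraction `a` of units active).
        patterns = (rng.random((M, N)) < a).astype(float)

        # Covariance rule: W_ij += (xi_i - a) * (xi_j - a), no self-connections.
        W = np.zeros((N, N))
        for xi in patterns:
            W += np.outer(xi - a, xi - a)
        np.fill_diagonal(W, 0.0)

        def recall(cue, steps=5, k=int(round(a * N))):
            """Iterative recall keeping the k most strongly driven units active."""
            s = cue.copy()
            for _ in range(steps):
                h = W @ s
                s = np.zeros(N)
                s[np.argsort(h)[-k:]] = 1.0    # simple k-winners-take-all threshold
            return s

        # Pattern completion: cue the network with half of the active units removed.
        target = patterns[0]
        cue = target.copy()
        active = np.flatnonzero(target)
        cue[active[: len(active) // 2]] = 0.0

        overlap = (recall(cue) * target).sum() / target.sum()
        print(f"fraction of the target pattern recovered: {overlap:.2f}")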

    Covariance Learning of Correlated Patterns in Competitive Networks

    (To appear in Neural Computation) Covariance learning is an especially powerful type of Hebbian learning, allowing both potentiation and depression of synaptic strength. It is used for associative memory in feed-forward and recurrent neural network paradigms. This letter describes a variant of covariance learning which works particularly well for correlated stimuli in feed-forward networks with competitive K-of-N firing. The rule, which is nonlinear, has an intuitive mathematical interpretation, and simulations presented in this letter demonstrate its utility.
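    The letter's specific nonlinear variant is not reproduced here, but a plain covariance-style update combined with competitive K-of-N firing in a feed-forward layer might look roughly like the sketch below; the running-mean activity estimates, learning rate, and input statistics are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        n_in, n_out, K = 50, 20, 3              # input units, output units, winners

        def k_of_n(h, k):
            """Competitive K-of-N firing: only the k most strongly driven units fire."""
            y = np.zeros_like(h)
            y[np.argsort(h)[-k:]] = 1.0
            return y

        W = rng.normal(0.0, 0.01, size=(n_out, n_in))
        x_mean, y_mean = np.zeros(n_in), np.zeros(n_out)
        lr, tau = 0.01, 0.01

        for _ in range(1000):
            x = (rng.random(n_in) < 0.2).astype(float)   # sparse input pattern
            y = k_of_n(W @ x, K)
            # Running activity estimates, as covariance rules require.
            x_mean += tau * (x - x_mean)
            y_mean += tau * (y - y_mean)
            # Covariance update: potentiation when pre and post deviations agree
            # in sign, depression otherwise.
            W += lr * np.outer(y - y_mean, x - x_mean)

        print("preferred input per output unit:", np.argmax(W, axis=1))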