
    Memristor-based Synaptic Networks and Logical Operations Using In-Situ Computing

    Get PDF
    We present new computational building blocks based on memristive devices. These blocks can be used to implement either supervised or unsupervised learning modules. This is achieved using a crosspoint architecture, an efficient array implementation for nanoscale two-terminal memristive devices. Based on these blocks and an experimentally verified SPICE macromodel of the memristor, we demonstrate, firstly, that Spike-Timing-Dependent Plasticity (STDP) can be implemented by a single memristor device and, secondly, memristor-based competitive Hebbian learning through STDP in a 1×1000 synaptic network. This is achieved by adjusting the memristor's conductance values (weights) as a function of the timing difference between presynaptic and postsynaptic spikes. These implementations have a number of shortcomings due to the memristor's characteristics, such as memory decay, highly nonlinear switching behaviour as a function of applied voltage/current, and lack of functional uniformity. These shortcomings can be addressed by utilising mixed gates that can be used in conjunction with the analogue behaviour for biomimetic computation. The digital implementations in this paper use the in-situ computational capability of the memristor. Comment: 18 pages, 7 figures, 2 tables.
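    A minimal sketch of the kind of pair-based STDP update the abstract describes, where a weight (here standing in for a memristor conductance) changes as a function of the timing difference between presynaptic and postsynaptic spikes. The time constants, amplitudes, and the clipping range are illustrative assumptions, not values from the paper.

```python
import numpy as np

def stdp_delta_w(delta_t, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: weight change as a function of the timing
    difference delta_t = t_post - t_pre (in ms).

    Positive delta_t (pre before post) potentiates, negative delta_t
    depresses. All constants are illustrative, not from the paper.
    """
    if delta_t >= 0:
        return a_plus * np.exp(-delta_t / tau_plus)   # potentiation
    return -a_minus * np.exp(delta_t / tau_minus)     # depression

# Sweep timing differences and clip the resulting weight to a
# physically plausible conductance range, as a real device would be.
w = 0.5
for dt in [-40.0, -10.0, 5.0, 25.0]:
    w = float(np.clip(w + stdp_delta_w(dt), 0.0, 1.0))
    print(f"delta_t = {dt:+6.1f} ms -> w = {w:.4f}")
```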

    Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

    Get PDF
    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines, a class of neural network models that use synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling and a regularizer during learning, akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. Synaptic sampling machines perform equally well using discrete-time artificial units (as in Hopfield networks) or continuous-time leaky integrate-and-fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and to synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking-neuron-based synaptic sampling machines outperform existing spike-based unsupervised learners while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.
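    A minimal sketch of the "random mask over the connections" idea: each synapse transmits independently with some probability on every pass, so repeated presentations of the same input sample different effective networks (the DropConnect-like mechanism the abstract names). The transmission probability and weight statistics here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_synapse_pass(x, W, p_transmit=0.5):
    """One forward pass with unreliable synapses: each connection
    transmits independently with probability p_transmit, i.e. a fresh
    Bernoulli mask over W. The probability is an illustrative
    assumption, not a value from the paper."""
    mask = rng.random(W.shape) < p_transmit
    return (W * mask) @ x

# Repeated passes with the same input sample different effective
# networks, which is what makes the mask usable for Monte Carlo
# sampling as well as for regularization.
W = rng.normal(scale=0.3, size=(4, 8))
x = rng.random(8)
samples = np.stack([stochastic_synapse_pass(x, W) for _ in range(1000)])
print("mean activation:", samples.mean(axis=0))
print("std  activation:", samples.std(axis=0))
```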

    Nonlinear Hebbian learning as a unifying principle in receptive field formation

    Get PDF
    The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that this variety of approaches can all be unified under a single common principle, namely Nonlinear Hebbian Learning. When Nonlinear Hebbian Learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
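    A minimal sketch of a nonlinear Hebbian update, dw ∝ f(w·x)·x with the weight renormalized each step, on a toy input distribution. The cubic nonlinearity, learning rate, and toy statistics are illustrative choices; the paper's point is precisely that the learned receptive field depends only weakly on which nonlinearity f is used.

```python
import numpy as np

rng = np.random.default_rng(1)

def nonlinear_hebbian(X, f=lambda y: y**3, eta=1e-3, epochs=50):
    """Nonlinear Hebbian learning: dw = eta * f(w.x) * x, with the
    weight renormalized after each update to keep it bounded.
    Constants and the choice f(y) = y**3 are illustrative."""
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * f(y) * x
            w /= np.linalg.norm(w)   # keep |w| = 1
    return w

# Toy input statistics: one heavy-tailed source direction hidden in
# Gaussian noise; nonlinear Hebbian learning picks out that direction.
n, d = 2000, 10
S = rng.laplace(size=n)                          # heavy-tailed source
X = np.outer(S, np.eye(d)[0]) + 0.1 * rng.normal(size=(n, d))
w = nonlinear_hebbian(X)
print("learned weight (aligns with the first axis, up to sign):")
print(np.round(w, 2))
```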

    A new stochastic STDP Rule in a neural Network Model

    Full text link
    Thought to be responsible for memory, synaptic plasticity has been widely studied in the past few decades. One example of a plasticity model is the popular Spike-Timing-Dependent Plasticity (STDP). The large literature on STDP models is mainly based on deterministic rules, whereas the biological mechanisms involved are mainly stochastic. Moreover, there exist only a few mathematical studies of plasticity that take into account precise spike timings. In this article, we propose a new stochastic STDP rule with discrete synaptic weights which allows a mathematical analysis of the full network dynamics under the hypothesis of a separation of timescales. This model attempts to answer the need for understanding the interplay between the weight dynamics and the neuron dynamics.
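    The abstract does not give the rule's exact form; the sketch below shows one way a stochastic STDP rule with discrete weights could look, purely for illustration: on each pre/post pairing the weight jumps one discrete level up or down, with a jump probability that decays with the timing difference. The functional form, level count, and constants are all assumptions, not the paper's rule.

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_discrete_stdp(w, delta_t, n_levels=8,
                             p_max=0.2, tau=20.0):
    """One illustrative stochastic STDP event for a discrete weight
    w in {0, ..., n_levels - 1}. The jump direction follows the sign
    of delta_t = t_post - t_pre (ms) and the jump *probability*
    decays with |delta_t|. This form is an assumption, not the
    paper's rule."""
    p = p_max * np.exp(-abs(delta_t) / tau)
    if rng.random() < p:
        w += 1 if delta_t >= 0 else -1
    return int(np.clip(w, 0, n_levels - 1))

# Drive a synapse with mostly causal (pre-before-post) pairings:
# the weight performs a biased random walk toward its upper level.
w = 4
for _ in range(200):
    dt = rng.normal(loc=5.0, scale=15.0)   # mostly positive timing
    w = stochastic_discrete_stdp(w, dt)
print("final discrete weight:", w)
```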

    Phenomenological models of synaptic plasticity based on spike timing

    Get PDF
    Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing-dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire-type neuron models, where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can depend only on spike timing and, potentially, on the membrane potential, the value of the synaptic weight, or low-pass-filtered (temporally averaged) versions of these variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulation.
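    A minimal sketch of how such phenomenological rules are typically implemented alongside integrate-and-fire neurons: each neuron carries a low-pass-filtered spike trace, and the weight is updated from the other neuron's trace at each spike (the standard trace-based form of pair STDP). The time constants and amplitudes are illustrative assumptions.

```python
import numpy as np

def run_trace_stdp(pre_spikes, post_spikes, T, dt=1.0,
                   a_plus=0.01, a_minus=0.012,
                   tau_plus=20.0, tau_minus=20.0, w0=0.5):
    """Trace-based pair STDP: low-pass-filtered spike traces x_pre
    and x_post decay exponentially; a post spike potentiates by
    a_plus * x_pre, a pre spike depresses by a_minus * x_post.
    Constants are illustrative."""
    x_pre, x_post, w = 0.0, 0.0, w0
    pre, post = set(pre_spikes), set(post_spikes)
    for t in np.arange(0.0, T, dt):
        x_pre  *= np.exp(-dt / tau_plus)    # decay the traces
        x_post *= np.exp(-dt / tau_minus)
        if t in pre:
            x_pre += 1.0
            w -= a_minus * x_post           # depression on pre spike
        if t in post:
            x_post += 1.0
            w += a_plus * x_pre             # potentiation on post spike
        w = min(max(w, 0.0), 1.0)
    return w

# Pre leads post by 5 ms on every pairing -> net potentiation.
print(run_trace_stdp(pre_spikes=range(0, 200, 50),
                     post_spikes=range(5, 200, 50), T=200))
```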

    Burst-Time-Dependent Plasticity Robustly Guides ON/OFF Segregation in the Lateral Geniculate Nucleus

    Get PDF
    Spontaneous retinal activity (known as “waves”) remodels synaptic connectivity to the lateral geniculate nucleus (LGN) during development. Analysis of retinal waves recorded with multielectrode arrays in mouse suggested that a cue for the segregation of functionally distinct (ON and OFF) retinal ganglion cells (RGCs) in the LGN may be a desynchronization in their firing, where ON cells precede OFF cells by one second. Using the recorded retinal waves as input, we explore timing-based plasticity rules for the evolution of synaptic weights with two different modeling approaches, to identify key features underlying ON/OFF segregation. First, we analytically derive a linear model for the evolution of ON and OFF weights, to understand how synaptic plasticity rules extract input firing properties to guide segregation. Second, we simulate postsynaptic activity with a nonlinear integrate-and-fire model to compare findings with the linear model. We find that spike-time-dependent plasticity, which modifies synaptic weights based on millisecond-long timing and the order of pre- and postsynaptic spikes, fails to segregate ON and OFF retinal inputs in the absence of normalization. Implementing homeostatic mechanisms results in segregation, but only with carefully tuned parameters. Furthermore, extending spike-integration timescales to match the second-long input correlation timescales always leads to ON segregation, because ON cells fire before OFF cells. We show that burst-time-dependent plasticity can robustly guide ON/OFF segregation in the LGN without normalization, by integrating pre- and postsynaptic bursts irrespective of their firing order and over second-long timescales. We predict that an LGN neuron will become ON- or OFF-responsive based on a local competition among the firing patterns of the neighboring RGCs connecting to it. Finally, we demonstrate consistency with ON/OFF segregation in ferret, despite differences in the firing properties of retinal waves. Our model suggests that the diverse input statistics of retinal waves can be robustly interpreted by a burst-based rule, which underlies retinogeniculate plasticity across different species.
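    A minimal sketch of the burst-based idea the abstract describes: the weight change depends only on the separation between pre- and postsynaptic burst onsets, not on their order, and decays on a second-long timescale. The exponential form and all constants are assumptions for illustration, not the paper's fitted rule.

```python
import numpy as np

def btdp_delta_w(t_pre_burst, t_post_burst, a_plus=0.05,
                 a_minus=0.01, tau_burst=1000.0):
    """Illustrative burst-time-dependent plasticity: the weight
    change depends only on |t_post - t_pre| between burst onsets
    (in ms), irrespective of order, and decays on a second-long
    timescale; bursts far apart in time lead to net depression.
    Functional form and constants are assumptions."""
    dt = abs(t_post_burst - t_pre_burst)
    return a_plus * np.exp(-dt / tau_burst) - a_minus

# Order does not matter, only the burst-onset separation does:
for t_pre, t_post in [(0.0, 200.0), (200.0, 0.0), (0.0, 3000.0)]:
    print(f"pre at {t_pre:6.0f} ms, post at {t_post:6.0f} ms -> "
          f"dw = {btdp_delta_w(t_pre, t_post):+.4f}")
```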