1,794 research outputs found

    Neural Network Models of Learning and Memory: Leading Questions and an Emerging Framework

    Office of Naval Research and the Defense Advanced Research Projects Agency (N00014-95-1-0409, N00014-1-95-0657); National Institutes of Health (NIH 20-316-4304-5)

    Learning arbitrary functions with spike-timing dependent plasticity learning rule

    A neural network model based on a spike-timing-dependent plasticity (STDP) learning rule, in which afferent neurons excite both the target neuron and interneurons that in turn project to the target neuron, is applied to the tasks of learning the AND and XOR functions. Without inhibitory plasticity, the network can learn both the AND and XOR functions. Introducing inhibitory plasticity improves performance on the XOR function. Maintaining a training pattern set provides feedback on network performance and consistently improves it. © 2005 IEEE
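
    As a concrete illustration of the pair-based STDP rule this abstract builds on, the following Python sketch computes the weight change for a single pre/post spike pair. The learning rates and time constants are illustrative assumptions, not values from the paper.

        import numpy as np

        # Pair-based STDP: potentiate when the presynaptic spike precedes the
        # postsynaptic spike, depress otherwise. All parameters are assumed.
        A_PLUS, A_MINUS = 0.01, 0.012      # learning rates (assumption)
        TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms (assumption)

        def stdp_dw(t_pre, t_post):
            """Weight change for one pre/post spike pair (times in ms)."""
            dt = t_post - t_pre
            if dt > 0:   # pre before post -> potentiation
                return A_PLUS * np.exp(-dt / TAU_PLUS)
            return -A_MINUS * np.exp(dt / TAU_MINUS)  # post first -> depression

        # Example: pre at 10 ms, post at 15 ms gives a small potentiation.
        print(stdp_dw(10.0, 15.0))

    The abstract does not specify the exact inhibitory rule; a similar update (possibly sign-flipped) applied to the interneuron-to-target synapses would be one way to realise the inhibitory plasticity it mentions.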

    Improving Associative Memory in a Network of Spiking Neurons

    In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to operate as a recurrent auto-associative neural network because of its neurophysiological characteristics, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells.

    Associative memory models have been developed using various configurations of mathematical artificial neural networks, first developed over 40 years ago. In these models, information is stored via changes in the strength of connections between simplified two-state model neurons, and memories can be recalled when a noisy or partial cue is instantiated upon the net. The type of information such models can store is quite limited, owing to the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold.

    We build a much more biologically plausible model, with complex spiking cell models and realistic synaptic properties between cells, based upon some of the many details now known of the neuronal circuitry of the CA3 region. We implemented the model in NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work.

    The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type is the cortical pyramidal cell, the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, proportionally far fewer in number, which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry, we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effects of cellular modification due to neuronal activity, and of incorporating spatially dependent connectivity, on the network during recall of previously stored information.

    In particular, we implement the spiking neural network proposed by Sommer and Wennekers (2001) and consider methods for improving associative memory recall, inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve its recall quality. The networks tested contain either 100 or 1000 pyramidal cells, with 10% connectivity applied, a partial cue instantiated, and global pseudo-inhibition. We investigate three methods. First, we apply localised disynaptic inhibition, which scales the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should reduce the variability in signal propagation between cells and provide further inhibition to help synchronise network activity. Second, we add a persistent sodium channel to the cell body, which non-linearises the activation threshold: beyond a given membrane potential, the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely high units) over the firing threshold. Finally, we implement spatial characteristics of the dendritic tree, which allow a greater probability that a modified synapse exists after 10% random connectivity has been applied throughout the network. We apply these spatial characteristics by scaling the conductance weights of excitatory synapses, simulating the loss of potential at synapses in the outer dendritic regions due to increased resistance.

    To further increase the biological plausibility of the network, we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations for a global inhibitory circuit: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, with 10 pyramidal cells connecting to each basket cell; and 100% basket cells providing feedback inhibition. These configurations are compared and contrasted for their effect on recall quality and network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggest that inhibition and cellular dynamics play a pivotal role in learning and memory.
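
    The storage and recall scheme this thesis builds on can be illustrated with a minimal binary auto-associative (Willshaw-style) network in Python. The network size, pattern statistics, and winners-take-all threshold below are illustrative assumptions, not the thesis's spiking implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        N, M, K = 100, 5, 10   # cells, stored patterns, active units per pattern (assumed)

        # Generate M sparse binary patterns.
        patterns = np.zeros((M, N), dtype=int)
        for p in patterns:
            p[rng.choice(N, K, replace=False)] = 1

        # Hebbian storage: clip a connection to 1 wherever two cells are coactive.
        # Self-connections are kept for simplicity.
        W = np.clip(patterns.T @ patterns, 0, 1)

        # Recall from a partial cue: keep only half of pattern 0's active units.
        cue = patterns[0].copy()
        cue[np.flatnonzero(cue)[K // 2:]] = 0

        dendritic_sum = W @ cue
        recalled = (dendritic_sum >= dendritic_sum.max()).astype(int)
        print("overlap with stored pattern:", recalled @ patterns[0])

    The Graham and Willshaw (1995) transforms that the thesis adapts roughly amount to replacing the simple threshold above with rules that also use each unit's connectivity and input statistics.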

    Biologically plausible attractor networks

    Attractor networks have shown much promise as a neural network architecture that can describe many aspects of brain function. Much of the field of study around these networks has coalesced around pioneering work done by John Hopfield, and therefore many approaches have been strongly linked to the field of statistical physics. In this thesis I use existing theoretical and statistical notions of attractor networks, and introduce several biologically inspired extensions to an attractor network for which a mean-field solution has previously been derived. This attractor network is a computational neuroscience model that accounts for decision-making between two competing stimuli. By basing our simulation studies on such a network, we can start from situations where mean-field solutions have been derived and extend them with large-scale integrate-and-fire attractor network simulations. The simulations are large enough to provide evidence that the results apply to networks of the size found in the brain.

    One factor that previous research has highlighted as very important to brain function is noise. Spiking-related noise influences processes such as decision-making, signal detection, short-term memory, and memory recall, even in the quite large networks found in the cerebral cortex, and this thesis aims to measure the effects of noise on biologically plausible attractor networks. Our results are obtained using a spiking neural network made up of integrate-and-fire neurons, and we focus on the stochastic transitions that this network undergoes. We examine two biologically relevant cases for which no mean-field solutions yet exist: graded firing rates and diluted connectivity. Representations in the cortex are often graded, and we find that noise in such networks may be larger than with binary representations. Further investigation showed that diluted connectivity reduces the effects of noise when the number of synapses onto each neuron is held constant.

    We also use the same attractor network framework to investigate the Communication through Coherence hypothesis, which states that synchronous oscillations, especially in the gamma range, can facilitate communication between neural systems. It is shown that information transfer from one network to a second occurs at a much lower strength of synaptic coupling between the networks than is required to produce coherence; information transmission can thus occur before any coherence is produced, indicating that coherence is not needed for information transmission between coupled networks. This raises a major question about the Communication through Coherence hypothesis. Overall, the results provide substantial contributions towards understanding the operation of attractor neuronal networks in the brain.
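
    The decision-making setup described here can be caricatured with two competing pools that have self-excitation, mutual inhibition, and noise. The Python sketch below is a rate-based toy, not the thesis's integrate-and-fire network, and every parameter is an assumption chosen for illustration; it shows how noise can tip the network into one of two attractor states.

        import numpy as np

        rng = np.random.default_rng(1)
        dt, tau = 1.0, 20.0              # time step and time constant in ms (assumed)
        w_self, w_inh = 0.9, 1.2         # self-excitation and cross-inhibition (assumed)
        stim = np.array([0.52, 0.50])    # slightly biased inputs to the two pools
        r = np.zeros(2)                  # pool firing rates

        for _ in range(2000):
            noise = 0.05 * rng.standard_normal(2)
            drive = stim + w_self * r - w_inh * r[::-1] + noise
            r += dt / tau * (-r + np.maximum(drive, 0.0))

        print("winning pool:", int(np.argmax(r)), "rates:", np.round(r, 3))

    Rerunning with different random seeds occasionally selects the weaker stimulus; this is the kind of noise-driven stochastic transition the thesis quantifies in full spiking simulations.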

    Excitatory, Inhibitory, and Structural Plasticity Produce Correlated Connectivity in Random Networks Trained to Solve Paired-Stimulus Tasks

    The pattern of connections among cortical excitatory cells with overlapping arbors is non-random. In particular, correlations among connections produce clustering: cells in cliques connect to each other with high probability, but with lower probability to cells in other, spatially intertwined cliques. In this study, we model initially randomly connected sparse recurrent networks of spiking neurons with random, overlapping inputs to investigate which functional and structural synaptic plasticity mechanisms sculpt network connections into the patterns measured in vitro. Our Hebbian implementation of structural plasticity removes connections between uncorrelated excitatory cells and randomly replaces them. To model a biconditional discrimination task, we stimulate the network via pairs (A + B, C + D, A + D, and C + B) of four inputs (A, B, C, and D). We find that networks producing neurons most responsive to specific paired inputs, a building block of computation and an essential role of cortex, contain the excessive clustering of excitatory synaptic connections observed in cortical slices. The same networks produce the best performance in a behavioral readout of the networks' ability to complete the task. A plasticity mechanism operating on inhibitory connections, long-term potentiation of inhibition, when combined with structural plasticity, indirectly enhances the clustering of excitatory cells via excitatory connections. A rate-dependent (triplet) form of spike-timing-dependent plasticity (STDP) between excitatory cells is less effective, and basic STDP is detrimental. Clustering also arises in networks stimulated with single stimuli, and in networks undergoing raised levels of spontaneous activity, when structural plasticity is combined with functional plasticity. In conclusion, spatially intertwined clusters or cliques of connected excitatory cells can arise via a Hebbian form of structural plasticity operating in initially randomly connected networks.
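
    A minimal sketch of the structural-plasticity step described above, in Python: connections between weakly correlated excitatory cells are pruned and replaced at random, holding the total connection count fixed. The correlation matrix and threshold here are stand-ins; the study derives correlations from network activity.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 50
        conn = rng.random((N, N)) < 0.1   # sparse random connectivity (assumed density)
        np.fill_diagonal(conn, False)
        corr = rng.random((N, N))         # stand-in for measured activity correlations

        def structural_step(conn, corr, threshold=0.2):
            # Prune connections whose pre/post correlation falls below threshold.
            prune = conn & (corr < threshold)
            n_removed = int(prune.sum())
            conn = conn & ~prune
            # Replace each pruned connection at a random empty, non-diagonal site.
            empty = np.argwhere(~conn)
            empty = empty[empty[:, 0] != empty[:, 1]]
            new = empty[rng.choice(len(empty), n_removed, replace=False)]
            conn[new[:, 0], new[:, 1]] = True
            return conn

        conn = structural_step(conn, corr)
        print("connections after one step:", int(conn.sum()))

    Iterating this step while the correlations are shaped by the paired stimuli is what, in the study, lets cliques of co-activated cells accumulate connections to one another.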