97 research outputs found

    Branch-specific plasticity enables self-organization of nonlinear computation in single neurons

    It has been conjectured that nonlinear processing in dendritic branches endows individual neurons with the capability to perform complex computational operations that are needed, for example, to solve the binding problem. However, it is not clear how single neurons could acquire such functionality in a self-organized manner, since most theoretical studies of synaptic plasticity and learning concentrate on neuron models without nonlinear dendritic properties. In the meantime, a complex picture of information processing with dendritic spikes and a variety of plasticity mechanisms in single neurons has emerged from experiments. In particular, new experimental data on dendritic branch strength potentiation in rat hippocampus have not yet been incorporated into such models. In this article, we investigate how experimentally observed plasticity mechanisms, such as depolarization-dependent STDP and branch-strength potentiation, could be integrated to self-organize nonlinear neural computations with dendritic spikes. We provide a mathematical proof that, in a simplified setup, these plasticity mechanisms induce a competition between dendritic branches, a novel concept in the analysis of single-neuron adaptivity. We show via computer simulations that such dendritic competition enables a single neuron to become a member of several neuronal ensembles and to acquire nonlinear computational capabilities, such as the capability to bind multiple input features. Hence our results suggest that nonlinear neural computation may self-organize in single neurons through the interaction of local synaptic and dendritic plasticity mechanisms.
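
    To make the ingredients of such a model concrete, here is a minimal rate-based sketch (not the authors' spiking model): two dendritic branches with a sigmoidal "dendritic spike" nonlinearity, synaptic plasticity gated by the local dendritic spike, and branch-strength potentiation with divisive normalization as the competitive element. A slight initial branch preference is seeded so the outcome is easy to see; all names, functional forms, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_syn = 20                                   # synapses per branch
A, B = slice(0, 10), slice(10, 20)           # two input ensembles

# Two dendritic branches; a slight initial preference is seeded so the outcome is
# easy to see (in the paper the symmetry breaking emerges from the dynamics).
W = np.full((2, n_syn), 0.18)
W[0, A] += 0.04                              # branch 0 slightly prefers ensemble A
W[1, B] += 0.04                              # branch 1 slightly prefers ensemble B
strength = np.ones(2)                        # branch-to-soma coupling ("branch strength")

def dendritic_spike(drive, theta=1.5, gain=4.0):
    """Sigmoidal stand-in for the size/probability of a local dendritic spike."""
    return 1.0 / (1.0 + np.exp(-gain * (drive - theta)))

eta_w, eta_b = 0.05, 0.02                    # illustrative learning rates
for trial in range(2000):
    x = np.zeros(n_syn)
    x[A if trial % 2 == 0 else B] = rng.random(10) < 0.8   # one ensemble per trial

    d = dendritic_spike(W @ x)               # local dendritic spike per branch
    soma = float(strength @ d)               # somatic depolarization (toy)

    # Synaptic plasticity gated by the local dendritic spike: active synapses on a
    # strongly spiking branch potentiate, on a weakly spiking branch they depress.
    W += eta_w * (d - 0.5)[:, None] * x[None, :]
    W = np.clip(W, 0.0, 1.0)

    # Branch-strength potentiation with divisive normalization (the competitive element).
    strength += eta_b * strength * d * soma
    strength *= 2.0 / strength.sum()

print("branch strengths         :", np.round(strength, 2))
print("branch 0 mean weights A/B:", np.round(W[0, A].mean(), 2), np.round(W[0, B].mean(), 2))
print("branch 1 mean weights A/B:", np.round(W[1, A].mean(), 2), np.round(W[1, B].mean(), 2))
```

    With these toy settings, each branch ends up devoted to one of the two input ensembles while the neuron as a whole responds to both, a cartoon of a single neuron joining several ensembles through its branches.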

    Improving Associative Memory in a Network of Spiking Neurons

    In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to operate as a recurrent auto-associative neural network because of its neurophysiological characteristics, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, first developed over 40 years ago. In these models, information is stored via changes in the strength of connections between simplified two-state model neurons, and memories are recalled when a noisy or partial cue is instantiated on the net. The type of information they can store is quite limited by the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold.
    We build a much more biologically plausible model with complex spiking cell models and realistic synaptic properties between cells, based on many of the details now known about the neuronal circuitry of the CA3 region. We implemented the model in NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type is the cortical pyramidal cell, the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, which are proportionally far fewer in number and which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry, we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity, and of incorporating spatially dependent connectivity, on the network during recall of previously stored information. In particular, we implement the spiking neural network proposed by Sommer and Wennekers (2001) and consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve recall quality. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity, a partial cue instantiated, and a global pseudo-inhibition.
    We investigate three methods. First, applying localised disynaptic inhibition, which proportionally scales the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should reduce the variability in signal propagation between cells and provide further inhibition to help synchronise the network activity. Second, adding a persistent sodium channel to the cell body, which non-linearises the activation threshold: beyond a given membrane potential the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely high units) over the firing threshold. Finally, incorporating spatial characteristics of the dendritic tree, which allows a greater probability of a modified synapse existing after 10% random connectivity has been applied throughout the network. We apply these spatial characteristics by scaling the conductance weights of excitatory synapses to simulate the loss of potential at synapses in the outer dendritic regions due to increased resistance.
    To further increase the biological plausibility of the network, we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations of a global inhibitory circuit: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, with 10 pyramidal cells connecting to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted for recall quality and for their effect on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests that the role of inhibition and cellular dynamics is pivotal in learning and memory.
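
    As background for the storage and recall experiments described above, the following is a minimal clipped-Hebbian (Willshaw-style) binary auto-associative net with 10% random connectivity and recall from a partial cue, in the spirit of the models the thesis builds on. It abstracts away the spiking NEURON implementation entirely; the cell counts, pattern sizes, and winners-take-all threshold rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_cells, n_patterns, n_active = 1000, 20, 50   # cells, stored patterns, active cells/pattern
patterns = np.zeros((n_patterns, n_cells), dtype=int)
for p in patterns:
    p[rng.choice(n_cells, n_active, replace=False)] = 1

# Storage: clipped Hebbian learning on binary recurrent weights, masked by a sparse
# 10% physical connectivity as in the thesis networks.
connectivity = (rng.random((n_cells, n_cells)) < 0.10).astype(int)
W = np.zeros((n_cells, n_cells), dtype=int)
for p in patterns:
    W |= np.outer(p, p)
W &= connectivity
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    """Iterative threshold recall: keep the n_active most strongly driven cells."""
    state = cue.copy()
    for _ in range(steps):
        drive = W @ state
        winners = np.argsort(drive)[-n_active:]
        state = np.zeros(n_cells, dtype=int)
        state[winners] = 1
    return state

# Partial cue: half of the active cells of the first stored pattern.
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[n_active // 2:]] = 0
overlap = (recall(cue) & patterns[0]).sum() / n_active
print(f"overlap of recalled activity with stored pattern: {overlap:.2f}")
```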

    Maturation of GABAergic Inhibition Promotes Strengthening of Temporally Coherent Inputs among Convergent Pathways

    Spike-timing-dependent plasticity (STDP), a form of Hebbian plasticity, is inherently stabilizing. Whether and how GABAergic inhibition influences STDP is not well understood. Using a model neuron driven by converging inputs modifiable by STDP, we determined that a sufficient level of inhibition was critical to ensure that the temporal coherence (correlation among presynaptic spike times) of synaptic inputs, rather than the initial strength or number of inputs within a pathway, controlled postsynaptic spike timing. Inhibition exerted this effect by preferentially reducing the synaptic efficacy (the ability of inputs to evoke postsynaptic action potentials) of the less coherent inputs. In visual cortical slices, inhibition potently reduced synaptic efficacy at ages during, but not before, the critical period of ocular dominance (OD) plasticity. Whole-cell recordings revealed that the amplitude of unitary IPSCs from parvalbumin-positive (Pv+) interneurons onto pyramidal neurons increased during the critical period, while the synaptic decay time constant decreased. In addition, the intrinsic properties of Pv+ interneurons matured, resulting in an increase in instantaneous firing rate. Our results suggest that the maturation of inhibition in visual cortex ensures that temporally coherent inputs (e.g. those from the open eye during monocular deprivation) control the postsynaptic spike times of binocular neurons, a prerequisite for Hebbian mechanisms to induce OD plasticity.
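
    The following is a small sketch of a pair-based STDP rule, not the model used in the paper: once postsynaptic spike times are locked to a coherent pathway (the role the paper attributes to sufficient inhibition), that pathway accumulates potentiation while an incoherent pathway tends toward weak depression. The kernel parameters, the nearest-spike pairing, and the slightly dominant depression window are common modelling assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pair-based additive STDP kernel (illustrative parameters; depression window made
# slightly dominant, a common modelling assumption).
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 40.0             # ms

def stdp(dt):
    """Weight change for one pre/post pair; dt = t_post - t_pre in ms."""
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

T, n_post = 10_000.0, 200                    # simulated time (ms), postsynaptic spikes
post = np.sort(rng.uniform(0, T, n_post))

# 'Coherent' pathway: presynaptic spikes a few ms before each postsynaptic spike.
pre_coherent = post - rng.exponential(5.0, n_post)
# 'Incoherent' pathway: presynaptic spikes unrelated to postsynaptic timing.
pre_incoherent = np.sort(rng.uniform(0, T, n_post))

def total_dw(pre_times):
    """Total weight change, pairing each presynaptic spike with its nearest post spike."""
    idx = np.searchsorted(post, pre_times)
    prev_post = post[np.clip(idx - 1, 0, n_post - 1)]
    next_post = post[np.clip(idx, 0, n_post - 1)]
    nearest = np.where(np.abs(pre_times - prev_post) < np.abs(next_post - pre_times),
                       prev_post, next_post)
    return stdp(nearest - pre_times).sum()

print("coherent pathway total dW  :", round(total_dw(pre_coherent), 3))
print("incoherent pathway total dW:", round(total_dw(pre_incoherent), 3))
```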

    Inventing episodic memory: a theory of dorsal and ventral hippocampus

    Role of an A-type K+ conductance in the back-propagation of action potentials in the dendrites of hippocampal pyramidal neurons

    Action potentials elicited in the axon actively back-propagate into the dendritic tree. During this process their amplitudes can be modulated by internal and external factors. We used a compartmental model of a hippocampal CA1 pyramidal neuron to illustrate how this modulation could depend on (1) the properties of an A-type K+ conductance that is expressed at high density in hippocampal dendrites and (2) the relative timing of synaptic activation. The simulations suggest that the time relationship between pre- and postsynaptic activity could help regulate the amplitude of back-propagating action potentials, especially in the distal portion of the dendritic tree.
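
    A phenomenological toy of the mechanism described here, not the compartmental model from the paper: back-propagating action potential (bAP) amplitude falls off along the dendrite more steeply where the A-type conductance is denser, and local inactivation of that conductance by a preceding synaptic depolarization boosts the distal amplitude. The functional forms, constants, and the assumed linear distance gradient of the A-conductance are illustrative assumptions.

```python
import numpy as np

n_comp = 20                                  # dendritic compartments, soma -> distal tip
distance = np.linspace(0, 350, n_comp)       # um from soma

# A-type K+ conductance density rises roughly linearly with distance from the soma
# (illustrative slope and units).
gA = 0.5 + 0.02 * distance

def bap_amplitude(gA_effective, v0=100.0, passive_loss=0.02, k=0.015):
    """bAP amplitude at each compartment for a given effective A-conductance profile."""
    amp = np.empty(n_comp)
    v = v0
    for i in range(n_comp):
        # Each compartment attenuates the bAP by a passive term plus an A-current term.
        v *= (1.0 - passive_loss) / (1.0 + k * gA_effective[i])
        amp[i] = v
    return amp

# Case 1: no preceding synaptic input -> A-current fully available.
control = bap_amplitude(gA)

# Case 2: synaptic depolarization shortly before the bAP partially inactivates the
# A-current in the activated region (compartments 10-15 here), boosting the bAP.
inactivation = np.ones(n_comp)
inactivation[10:16] = 0.4                     # 60% of gA inactivated locally
boosted = bap_amplitude(gA * inactivation)

print("distal bAP amplitude, control:", round(control[-1], 1), "(arbitrary units)")
print("distal bAP amplitude, boosted:", round(boosted[-1], 1), "(arbitrary units)")
```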

    Constraining the function of CA1 in associative memory models of the hippocampus

    CA1 is the main source of afferents from the hippocampus, but the function of CA1 and its perforant path (PP) input remains unclear. In this thesis, Marr's model of the hippocampus is used to investigate previously hypothesized functions, and also to investigate some of Marr's unexplored theoretical ideas. The last part of the thesis explains the excitatory responses to PP activity in vivo, despite inhibitory responses in vitro.
    Quantitative support for the idea of CA1 as a relay of information from CA3 to the neocortex and subiculum is provided by constraining Marr's model with experimental data. Using the same approach, the much smaller capacity of the PP input by comparison implies that it is not a one-shot learning network. In turn, it is argued that the entorhinal-CA1 connections cannot operate as a short-term memory network through reverberating activity. The PP input to CA1 has been hypothesized to control the activity of CA1 pyramidal cells. Marr suggested an algorithm for self-organising the output activity during pattern storage. Analytic calculations show a greater capacity for self-organised patterns than for random patterns at low connectivities and high loads, confirmed in simulations over a broader parameter range. This superior performance is maintained in the absence of the complex thresholding mechanisms normally required to maintain performance levels in sparsely connected networks. These results provide computational motivation for CA3 to establish patterns of CA1 activity without involvement from the PP input.
    The recent report of CA1 place cell activity with CA3 lesioned (Brun et al., 2002, Science 296(5576):2243-6) is investigated using an integrate-and-fire neuron model of the entorhinal-CA1 network. CA1 place field activity is learnt despite a completely inhibitory response to the stimulation of entorhinal afferents. In the model, this is achieved using N-methyl-D-aspartate receptors to mediate a significant proportion of the excitatory response. Place field learning occurs over a broad parameter space. It is proposed that differences between similar contexts are slowly learnt in the PP and, as a result, are amplified in CA1. This would provide improved spatial memory in similar but different contexts.
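
    To illustrate the style of capacity argument used when constraining such models with data, here is a back-of-the-envelope estimate for a binary (Willshaw-type) associative net. The cell count, activity level, connectivity, and threshold rule are illustrative placeholders, not the values or derivations used in the thesis.

```python
# Back-of-the-envelope capacity estimate for a clipped-Hebbian binary associative net.

def p_on(R, n, m):
    """Probability that a given synapse has been switched on after storing R random
    patterns, each with m of n units active (auto-associative clipped Hebbian storage)."""
    return 1.0 - (1.0 - (m / n) ** 2) ** R

def expected_spurious(R, n, m, c):
    """Expected number of cells outside the stored pattern that reach the firing
    threshold during recall with a full cue, taking the threshold to be the expected
    input count c*m of a genuine pattern cell."""
    return (n - m) * p_on(R, n, m) ** (c * m)

n, m, c = 100_000, 200, 0.04          # cells, active cells per pattern, connectivity (assumed)
for R in (10_000, 50_000, 100_000, 500_000):
    print(f"{R:>7d} stored patterns -> ~{expected_spurious(R, n, m, c):.2g} spurious cells")
```

    With these made-up numbers, spurious activity stays negligible up to somewhere around 10^5 stored patterns and then grows rapidly; comparing such estimates across pathways is the style of argument used to contrast the CA3 recurrent input with the much lower-capacity PP input.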

    A feedback model of perceptual learning and categorisation

    Top-down feedback influences are known to have significant effects on visual information processing. Such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised, and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise.
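
    A minimal illustration of the biasing idea, not the cortical feedback model used in the article: a two-category linear classifier whose category units receive an additive top-down expectation signal, which shifts how an ambiguous stimulus is categorised. The weights, stimulus, and feedback values are made up.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Bottom-up weights for two categories over a 4-feature stimulus (illustrative).
W = np.array([[1.0, 1.0, 0.0, 0.0],     # category A prefers features 0 and 1
              [0.0, 0.0, 1.0, 1.0]])    # category B prefers features 2 and 3

ambiguous = np.array([0.6, 0.5, 0.55, 0.6])   # stimulus with mixed evidence

def categorise(x, feedback=(0.0, 0.0), gain=4.0):
    """Feedforward evidence plus an additive top-down bias on each category unit."""
    return softmax(gain * (W @ x + np.asarray(feedback)))

print("no feedback         :", np.round(categorise(ambiguous), 2))
print("expecting category A:", np.round(categorise(ambiguous, feedback=(0.3, 0.0)), 2))
```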

    Computation in the high-conductance state
