
    Quantification of the density of cooperative neighboring synapses required to evoke endocannabinoid signaling

    The spatial pattern of synapse activation may affect synaptic plasticity. This applies to the synaptically-evoked, endocannabinoid-mediated short-term depression at the parallel fiber (PF) to Purkinje cell synapse, the occurrence of which requires close proximity between the activated synapses. Here, we quantitatively determine this required proximity, aided by the geometrical organization of the cerebellar molecular layer. Transgenic mice expressing a calcium indicator selectively in granule cells enabled the imaging of action potential-evoked presynaptic calcium rise in isolated, single PFs. This measurement was used to derive the number of PFs activated within a beam of PFs stimulated in the molecular layer, from which the density of activated PFs (input density) was calculated. This density was on average 2.8 μm⁻² in sagittal slices and twice as high in transverse slices. The synaptically-evoked, endocannabinoid-mediated suppression of excitation (SSE) evoked by ten stimuli at 200 Hz was determined by monitoring either postsynaptic responses or the presynaptic calcium rise. The SSE was significantly larger when recorded in transverse slices, where the input density is larger. An exponential description of the SSE as a function of the input density suggests that the SSE is halved when the input density decreases from 6 to 2 μm⁻². We conclude that, although all PFs are truncated in an acute sagittal slice, half of them remain responsive to stimulation, and that activated synapses need to be closer than 1.5 μm to synergize in endocannabinoid signaling. © 2013 The authors
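    The quoted half-reduction pins down the single length scale of such a fit. As a minimal sketch, assuming the "exponential description" is a saturating exponential SSE(ρ) = S_max·(1 − e^(−ρ/ρ₀)) (the functional form and the symbols S_max, ρ₀ are our assumptions, not the authors' published fit), the condition SSE(2) = SSE(6)/2 fixes ρ₀ in closed form:

```python
import numpy as np

# A minimal sketch, assuming the abstract's "exponential description" is a
# saturating exponential SSE(rho) = S_max * (1 - exp(-rho / rho0)); the
# functional form, S_max, and rho0 are assumptions, not the paper's fit.

def sse(rho, rho0, s_max=1.0):
    return s_max * (1.0 - np.exp(-rho / rho0))

# Condition quoted in the abstract: SSE at rho = 2 is half the SSE at rho = 6.
# With x = exp(-2 / rho0) this reads 1 - x = (1 - x**3) / 2, i.e.
# x**3 - 2*x + 1 = 0, whose root in (0, 1) is x = (sqrt(5) - 1) / 2.
x = (np.sqrt(5.0) - 1.0) / 2.0
rho0 = -2.0 / np.log(x)
print(f"rho0 = {rho0:.2f}")           # ~4.16, in the same units as rho
print(sse(2, rho0) / sse(6, rho0))    # ~0.5, consistency check
```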

    Detecting and Estimating Signals over Noisy and Unreliable Synapses: Information-Theoretic Analysis

    The temporal precision with which neurons respond to synaptic inputs has a direct bearing on the nature of the neural code. A characterization of the neuronal noise sources associated with different sub-cellular components (synapse, dendrite, soma, axon, and so on) is needed to understand the relationship between noise and information transfer. Here we study the effect of the unreliable, probabilistic nature of synaptic transmission on information transfer in the absence of interaction among presynaptic inputs. We derive theoretical lower bounds on the capacity of a simple model of a cortical synapse under two different paradigms. In signal estimation, the signal is assumed to be encoded in the mean firing rate of the presynaptic neuron, and the objective is to estimate the continuous input signal from the postsynaptic voltage. In signal detection, the input is binary, and the presence or absence of a presynaptic action potential is to be detected from the postsynaptic voltage. The efficacy of information transfer in synaptic transmission is characterized by deriving optimal strategies under these two paradigms. On the basis of parameter values derived from neocortex, we find that single cortical synapses cannot transmit information reliably, but that redundancy obtained by using a small number of synapses in parallel leads to a significant improvement in the information capacity of synaptic transmission.
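    A toy version of the signal-detection paradigm can be phrased as a channel-capacity computation. The sketch below is our illustration, not the paper's model: a synapse is treated as a Z-channel in which a presynaptic spike releases a vesicle with probability p, spontaneous release is ignored, and n redundant synapses are pooled by detecting "any release"; the values of p and n are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hedged sketch: a synapse as a Z-channel for spike detection. X = 1 (spike)
# yields a release with probability p; X = 0 never does. p, n illustrative.

def hb(x):
    """Binary entropy in bits, numerically safe at 0 and 1."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def z_capacity(p):
    """Capacity (bits/use) of a Z-channel with success probability p."""
    mi = lambda q: -(hb(q * p) - q * hb(p))   # negated mutual information
    res = minimize_scalar(mi, bounds=(1e-6, 1 - 1e-6), method="bounded")
    return -res.fun

p = 0.3                                 # single-synapse release probability
for n in (1, 2, 4, 8):                  # n redundant synapses, detect any release
    p_eff = 1 - (1 - p) ** n
    print(n, round(z_capacity(p_eff), 3))
```

    Even this toy channel shows the abstract's qualitative conclusion: a single unreliable synapse carries little information per spike, while a handful of redundant synapses recovers most of a bit.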

    Determining the neurotransmitter concentration profile at active synapses

    Establishing the temporal and concentration profiles of neurotransmitters during synaptic release is an essential step towards understanding the basic properties of inter-neuronal communication in the central nervous system. A variety of ingenious attempts have been made to gain insights into this process, but the general inaccessibility of central synapses, intrinsic limitations of the techniques used, and the natural variety of synaptic environments have hindered a comprehensive description of this fundamental phenomenon. Here, we describe a number of experimental and theoretical findings that have been instrumental for advancing our knowledge of various features of neurotransmitter release, as well as newly developed tools that could overcome some limits of traditional pharmacological approaches and bring new impetus to the description of the complex mechanisms of synaptic transmission.
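    As a concrete point of reference for such concentration profiles, one classical idealization (our choice of example, not a result from this review) treats transmitter from a single vesicle as an instantaneous point source diffusing in unbounded space, C(r, t) = N/(4πDt)^(3/2) · exp(−r²/(4Dt)):

```python
import numpy as np

# Hedged sketch of a classical idealization: transmitter from one vesicle as
# an instantaneous point source in unbounded 3-D space. N, D, and r are
# illustrative textbook-scale values, not measurements from the review.

N = 4000                              # transmitter molecules per vesicle (assumed)
D = 0.3                               # effective diffusion coefficient, um^2/ms (assumed)
MM_PER_UM3 = 1e15 / 6.022e23 * 1e3    # molecules/um^3 -> mM

def conc_mM(r_um, t_ms):
    """Concentration (mM) at distance r_um (um), time t_ms (ms) after release."""
    c = N / (4 * np.pi * D * t_ms) ** 1.5 * np.exp(-r_um**2 / (4 * D * t_ms))
    return c * MM_PER_UM3

for t in (0.01, 0.1, 1.0):            # ms
    print(f"t = {t:5.2f} ms: C = {conc_mM(0.02, t):7.3f} mM at r = 20 nm")
```

    Real clefts bound and tortuously obstruct diffusion, which is precisely why the review's newer tools matter; this free-diffusion formula only sets the millimolar-and-submillisecond scale of the problem.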

    The effect of synchronized inputs at the single neuron level

    It is commonly assumed that temporal synchronization of excitatory synaptic inputs onto a single neuron increases its firing rate. We investigate here the role of synaptic synchronization for the leaky integrate-and-fire neuron as well as for a biophysically and anatomically detailed compartmental model of a cortical pyramidal cell. We find that if the number of excitatory inputs, N, is on the same order as the number of fully synchronized inputs necessary to trigger a single action potential, N_t, synchronization always increases the firing rate (for both constant and Poisson-distributed input). However, for large values of N compared to N_t, "overcrowding" occurs and temporal synchronization is detrimental to firing frequency. This behavior is caused by the conflicting influence of the low-pass nature of the passive dendritic membrane on the one hand and the refractory period on the other. If both the temporal synchronization and the fraction of synchronized inputs (Murthy and Fetz 1993) are varied, synchronization is only advantageous if either N or the average input frequency, f_in, is small enough.
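    The overcrowding regime is easy to reproduce in a toy model. The sketch below is a minimal leaky integrate-and-fire simulation with assumed parameters (not the paper's detailed compartmental model): with w = 0.05 and threshold 1, N_t = 20, so N = 100 inputs puts us well above N_t and synchronization caps the output at the volley rate while asynchronous drive fires the cell faster.

```python
import numpy as np

# Hedged sketch: LIF neuron driven by N excitatory inputs, either independent
# Poisson trains or perfectly synchronized volleys. Parameters illustrative.

rng = np.random.default_rng(0)
dt, T = 0.1e-3, 2.0                 # time step (s), duration (s)
tau, v_th, v_reset = 20e-3, 1.0, 0.0
w = 0.05                            # EPSP jump per input spike (dimensionless)
N, f_in = 100, 20.0                 # number of inputs, mean rate per input (Hz)
# N_t = v_th / w = 20 fully synchronized inputs suffice for a spike, N >> N_t

def firing_rate(synchronized):
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        if synchronized:
            n_in = N if rng.random() < f_in * dt else 0   # shared volleys
        else:
            n_in = rng.binomial(N, f_in * dt)             # independent Poisson
        v += -v * dt / tau + w * n_in
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes / T

print("async:", firing_rate(False), "Hz")   # ~70 Hz: steady drive
print("sync :", firing_rate(True), "Hz")    # ~20 Hz: one spike per volley
```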

    Reinforcement learning in populations of spiking neurons

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral responses in the face of neuronal variability. But in standard reinforcement learning a flip-side becomes apparent: learning slows down with increasing population size, since the global reinforcement becomes less and less related to the performance of any single neuron. We show that, in contrast, learning speeds up with increasing population size if feedback about the population response modulates synaptic plasticity in addition to global reinforcement. The two feedback signals (reinforcement and population-response signal) can be encoded by ambient neurotransmitter concentrations which vary slowly, yielding a fully online plasticity rule where the learning of a stimulus is interleaved with the processing of the subsequent one. The assumption of a single additional feedback mechanism therefore reconciles biological plausibility with efficient learning.
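    A hedged sketch of the idea (our stand-in rule, not the authors' equations): REINFORCE-style updates for a population of stochastic neurons, with the weight change gated both by the global reward R and by a population-response signal P, here crudely summarized as how strongly the population committed to its decision.

```python
import numpy as np

# Hedged sketch: stochastic binary neurons vote on a binary decision.
# Plasticity is modulated by global reward R and population response P.
# The gating term |P - 0.5| is our stand-in for the population feedback.

rng = np.random.default_rng(1)
n_pop, n_in, eta = 50, 20, 0.1
W = rng.normal(0, 0.1, (n_pop, n_in))

def trial(x, target):
    global W
    p = 1 / (1 + np.exp(-W @ x))                 # per-neuron spike probability
    s = (rng.random(n_pop) < p).astype(float)    # stochastic spikes
    P = s.mean()                                 # population response in [0, 1]
    R = 1.0 if (P > 0.5) == target else -1.0     # global reward
    # REINFORCE term (s - p) x, gated by reward and population commitment
    W += eta * R * np.outer((s - p) * np.abs(P - 0.5) * 2, x)
    return R

xs, targets = rng.normal(size=(2, n_in)), [True, False]
for epoch in range(200):
    rewards = [trial(x, t) for x, t in zip(xs, targets)]
print("final rewards:", rewards)
```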

    Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective

    On metrics of density and power efficiency, neuromorphic technologies have the potential to surpass mainstream computing technologies in tasks where real-time functionality, adaptability, and autonomy are essential. While algorithmic advances in neuromorphic computing are proceeding successfully, the potential of memristors to improve neuromorphic computing has not yet borne fruit, primarily because they are often used as a drop-in replacement for conventional memory. However, interdisciplinary approaches anchored in machine learning theory suggest that multifactor plasticity rules matching neural and synaptic dynamics to the device capabilities can take better advantage of memristor dynamics and their stochasticity. Furthermore, such plasticity rules generally show much higher performance than classical Spike-Timing-Dependent Plasticity (STDP) rules. This chapter reviews recent developments in learning with spiking neural network models and their possible implementation with memristor-based hardware.
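    The contrast between classical STDP and a multifactor rule can be made concrete with a generic three-factor sketch (our illustration with assumed constants, not a rule from the chapter): the pair-based STDP increment is accumulated into an eligibility trace, and the weight only changes when a third, modulatory factor (e.g. reward or error) arrives.

```python
import numpy as np

# Hedged sketch of a generic three-factor plasticity rule: pair-based STDP
# feeds an eligibility trace; a sparse modulatory signal gates the actual
# weight update. All time constants and amplitudes are illustrative.

tau_pre, tau_post, tau_e = 20e-3, 20e-3, 200e-3
a_plus, a_minus, eta, dt = 0.01, 0.012, 0.5, 1e-3

x_pre = x_post = elig = w = 0.0
rng = np.random.default_rng(2)

for step in range(2000):
    pre = rng.random() < 0.02             # presynaptic spike this step?
    post = rng.random() < 0.02            # postsynaptic spike this step?
    x_pre += -x_pre * dt / tau_pre + pre
    x_post += -x_post * dt / tau_post + post
    # classical STDP increment goes into the eligibility trace, not the weight
    stdp = a_plus * x_pre * post - a_minus * x_post * pre
    elig += -elig * dt / tau_e + stdp
    m = 1.0 if step % 500 == 0 else 0.0   # sparse modulatory third factor
    w += eta * m * elig                   # weight moves only when gated
print("final weight:", w)
```

    The gating is what makes such rules attractive for memristive hardware: expensive device writes happen only on modulatory events, and their stochasticity can be absorbed by the slow eligibility dynamics.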

    Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites

    This paper presents a spike-based model which employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model utilizes sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple, hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems and its performance is compared against that achieved using Support Vector Machine (SVM) and Extreme Learning Machine (ELM) techniques. Our proposed method attains comparable performance while utilizing 10 to 50% fewer computational resources than the other reported techniques.
    Comment: Accepted for publication in Neural Computation
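    The core architecture fits in a few lines. The sketch below is our illustration (the paper's exact subunit nonlinearity and learning rule differ): binary inputs are routed to dendritic branches through sparse binary connections, each branch applies a nonlinearity, and the soma sums the branch outputs. Structural learning would then modify the integer connection table `conn` (which input wires to which synapse slot) rather than analog weights.

```python
import numpy as np

# Hedged sketch of a two-layer dendritic neuron: sparse binary connectivity,
# a nonlinear subunit per branch, linear somatic summation. The squaring
# nonlinearity and all shapes are illustrative assumptions.

rng = np.random.default_rng(3)
n_in, n_branches, syn_per_branch = 100, 10, 8

# sparse binary connectivity: which input feeds each synapse slot
conn = rng.integers(0, n_in, size=(n_branches, syn_per_branch))

def neuron_output(x, conn):
    """x: binary input pattern of length n_in; returns somatic activation."""
    branch_drive = x[conn].sum(axis=1)   # linear sum within each branch
    branch_out = branch_drive ** 2       # nonlinear dendritic subunit
    return branch_out.sum()              # linear integration at the soma

x = (rng.random(n_in) < 0.2).astype(float)
print(neuron_output(x, conn))
```

    Because the branch nonlinearity rewards co-located coincident inputs, swapping slots in `conn` so that correlated inputs share a branch raises the response margin, which is the intuition behind the structural learning rule.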

    Decorrelation of neural-network activity by inhibitory feedback

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network and systems where the statistics of the feedback channel are perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully explained by a linear theory and is already apparent at the macroscopic level of the population-averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: in purely inhibitory networks, they are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
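    The macroscopic part of the linear theory fits in a few lines. In the sketch below (our toy linear rate model with assumed gains, not the paper's LIF simulations), two units share one noise source; negative feedback of the population average suppresses the shared fluctuations, driving the output correlation from the naive shared-input value of 0.5 toward zero and, at strong feedback, negative:

```python
import numpy as np

# Hedged sketch of the linear-theory intuition: two linear rate units with a
# shared noise input, private noise, and negative feedback of the population
# average. Gains and noise levels are illustrative.

rng = np.random.default_rng(4)
T = 200_000
shared = rng.normal(size=T)
priv = rng.normal(size=(2, T))

def output_corr(g_fb):
    # x_i = shared + priv_i - g_fb * mean(x); solving self-consistently gives
    # mean(x) = (shared + mean(priv)) / (1 + g_fb)
    pop = (shared + priv.mean(axis=0)) / (1 + g_fb)
    x = shared + priv - g_fb * pop
    return np.corrcoef(x)[0, 1]

for g in (0.0, 1.0, 5.0, 20.0):
    print(f"feedback gain {g:5.1f} -> output correlation {output_corr(g):.3f}")
```

    This mirrors the paper's microscopic account: the feedback does not remove the shared input, it generates compensating negative output correlations that cancel it.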