
    Consequences of converting graded to action potentials upon neural information coding and energy efficiency

    Information is encoded in neural circuits using both graded and action potentials, with conversion between the two occurring within single neurons and across successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na+ and K+ channels, with generator potential and graded potential models lacking voltage-gated Na+ channels. We identify three causes of information loss in the generator potential that are by-products of action potential generation: (1) the voltage-gated Na+ channels necessary for action potential generation increase intrinsic noise, (2) they introduce non-linearities, and (3) the finite duration of the action potential creates a ‘footprint’ in the generator potential that obscures incoming signals. These three processes reduce information rates in generator potentials by ~50%, to ~3 times those of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of their lower information rates, generator potentials are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains, owing to the higher energy costs and low information content of spikes. This emphasizes that converting analogue to digital carries a two-fold cost: information loss and cost inflation.
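    The efficiency comparison above reduces to a simple ratio: bits transmitted per unit of energy consumed. A minimal sketch in Python, using placeholder numbers chosen only to match the qualitative ratios in the abstract, not values reported in the paper:

```python
# Illustrative comparison of energy efficiency = information rate / energy
# consumption. All numbers are placeholders, not the paper's measurements.
signals = {
    # name: (information rate [bits/s], energy consumption [arb. units/s])
    "spike train":         (200.0, 10.0),
    "generator potential": (600.0, 1.0),   # ~3x the spike-train rate
    "graded potential":    (1200.0, 1.0),  # generator rate is ~50% of this
}

for name, (rate, power) in signals.items():
    print(f"{name:>19}: {rate / power:7.1f} bits per unit energy")
```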

    Learning intrinsic excitability in medium spiny neurons

    We present an unsupervised, local, activation-dependent learning rule for intrinsic plasticity (IP) that affects the composition of ion channel conductances in single neurons in a use-dependent way. We use a single-compartment conductance-based model of medium spiny striatal neurons to show the effects of the parametrization of individual ion channels on the neuronal activation function. We show that parameter changes within physiological ranges are sufficient to create an ensemble of neurons with significantly different activation functions. We emphasize that the effects of intrinsic neuronal variability on spiking behavior require a distributed mode of synaptic input and can be eliminated by strongly correlated input. We show how variability and adaptivity in ion channel conductances can be utilized to store patterns without an additional contribution from synaptic plasticity (SP). The adaptation of the spike response may result in either "positive" or "negative" pattern learning. However, read-out of the stored information depends on a distributed pattern of synaptic activity that lets intrinsic variability determine the spike response. We briefly discuss the implications of this conditional memory for learning and addiction.
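    The abstract does not give the rule's exact form; the sketch below only illustrates the general shape of a local, activity-dependent intrinsic plasticity update, in which each maximal conductance is nudged to move the neuron's firing rate toward a target. The sensitivity vector, learning rate and bounds are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Hypothetical local, activity-dependent update for a vector of maximal
# conductances g (one entry per ion channel type). "sensitivity" encodes
# whether raising each conductance raises or lowers the firing rate; eta
# is a learning rate. All values are illustrative, not from the paper.
def ip_update(g, rate, target_rate, sensitivity, eta=1e-3,
              g_min=None, g_max=None):
    # Move each conductance so as to push the rate toward the target.
    g = g + eta * (target_rate - rate) * sensitivity
    if g_min is not None and g_max is not None:
        g = np.clip(g, g_min, g_max)   # stay within physiological ranges
    return g

g = np.array([1.0, 0.5, 2.0])          # e.g. three channel conductances
sens = np.array([-1.0, -0.5, 1.0])     # sign of each channel's rate effect
for _ in range(100):                   # neuron persistently below target
    g = ip_update(g, rate=5.0, target_rate=8.0, sensitivity=sens,
                  g_min=np.zeros(3), g_max=np.array([2.0, 1.0, 4.0]))
print(g)                               # conductances drift toward the bounds
```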

    Thermal Impact on Spiking Properties in Hodgkin-Huxley Neuron with Synaptic Stimulus

    The effect of environmental temperature on neuronal spiking behavior is investigated by numerically simulating the temperature dependence of the spiking threshold of a Hodgkin-Huxley neuron subject to synaptic stimulus. We find that the spiking threshold exhibits a global minimum in a "comfortable temperature" range in which spike initiation requires the weakest synaptic strength, indicating the occurrence of optimal use of synaptic transmission in the nervous system. We further explore the biophysical origin of this phenomenon in ion channel gating kinetics and discuss its possible biological relevance to information processing in neural systems.
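    A minimal way to reproduce this kind of threshold-versus-temperature scan is to scale the Hodgkin-Huxley gating kinetics by a Q10 temperature factor and bisect on the stimulus strength needed to evoke a spike. The sketch below uses a step current rather than the paper's synaptic stimulus, and Q10 = 3 is an assumed value:

```python
from math import exp

def phi(T, q10=3.0, T_ref=6.3):
    """Temperature scaling factor for channel gating rates."""
    return q10 ** ((T - T_ref) / 10.0)

def spikes(I_amp, T, t_end=50.0, dt=0.005):
    """True if a current step of I_amp (uA/cm^2) evokes a spike at T (C)."""
    gna, gk, gl = 120.0, 36.0, 0.3          # standard HH conductances
    Ena, Ek, El, Cm = 50.0, -77.0, -54.4, 1.0
    V, m, h, n = -65.0, 0.05, 0.6, 0.32     # resting state
    k = phi(T)                              # all gating rates scaled by k
    for _ in range(int(t_end / dt)):
        am = 0.1 * (V + 40.0) / (1.0 - exp(-(V + 40.0) / 10.0))
        bm = 4.0 * exp(-(V + 65.0) / 18.0)
        ah = 0.07 * exp(-(V + 65.0) / 20.0)
        bh = 1.0 / (1.0 + exp(-(V + 35.0) / 10.0))
        an = 0.01 * (V + 55.0) / (1.0 - exp(-(V + 55.0) / 10.0))
        bn = 0.125 * exp(-(V + 65.0) / 80.0)
        m += dt * k * (am * (1.0 - m) - bm * m)
        h += dt * k * (ah * (1.0 - h) - bh * h)
        n += dt * k * (an * (1.0 - n) - bn * n)
        I_ion = (gna * m**3 * h * (V - Ena) + gk * n**4 * (V - Ek)
                 + gl * (V - El))
        V += dt * (I_amp - I_ion) / Cm
        if V > 0.0:
            return True
    return False

for T in (0.0, 6.3, 15.0, 25.0):
    lo, hi = 0.0, 50.0                      # bisect on the step amplitude
    for _ in range(20):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if spikes(mid, T) else (mid, hi)
    print(f"T = {T:4.1f} C: threshold ~ {hi:.2f} uA/cm^2")
```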

    Automated optimization of a reduced layer 5 pyramidal cell model based on experimental data

    The construction of compartmental models of neurons involves tuning a set of parameters to make the model neuron behave as realistically as possible. While the parameter space of single-compartment models or other simple models can be searched exhaustively, the introduction of dendritic geometry causes the number of parameters to balloon. As parameter tuning is a daunting and time-consuming task when performed manually, reliable methods for automatically optimizing compartmental models are desperately needed, as only optimized models can capture the behavior of real neurons. Here we present a three-step strategy to automatically build reduced models of layer 5 pyramidal neurons that closely reproduce experimental data. First, we reduce the pattern of dendritic branches of a detailed model to a set of equivalent primary dendrites. Second, the ion channel densities are estimated using a multi-objective optimization strategy to fit the voltage traces recorded under two conditions: with and without the apical dendrite occluded by pinching. Finally, we tune dendritic calcium channel parameters to model the initiation of dendritic calcium spikes and the coupling between soma and dendrite. More generally, this new method can be applied to construct families of models of different neuron types, with applications ranging from the study of information processing in single neurons to realistic simulations of large-scale network dynamics.
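    The second step amounts to searching for channel densities that simultaneously minimize the trace error under both recording conditions. The sketch below illustrates such a multi-objective fit on a toy problem, substituting synthetic quadratic objectives for the real simulation error and a simple evolutionary search with a Pareto-dominance filter for the paper's optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the voltage-trace errors under the two recording
# conditions (apical dendrite intact vs. pinched). In the real workflow
# these come from simulating the reduced model against experimental
# traces; here they are synthetic quadratic objectives.
target = np.array([120.0, 36.0, 0.3])              # "true" channel densities

def objectives(p):
    e_intact = np.sum((p - target) ** 2)           # error, intact dendrite
    e_pinched = np.sum((p[:2] - target[:2]) ** 2)  # error, pinched dendrite
    return np.array([e_intact, e_pinched])

def dominates(a, b):
    """True if score vector a is at least as good as b on every objective
    and strictly better on at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

# Simple (mu + lambda) evolutionary search keeping non-dominated points.
pop = rng.uniform(0.0, 200.0, size=(32, 3))
for gen in range(100):
    children = pop + rng.normal(scale=5.0, size=pop.shape)
    allp = np.vstack([pop, children])
    scores = [objectives(p) for p in allp]
    keep = [i for i in range(len(allp))
            if not any(dominates(scores[j], scores[i])
                       for j in range(len(allp)) if j != i)]
    pop = allp[rng.choice(keep, size=32)]

best = min(pop, key=lambda p: objectives(p).sum())
print("fitted densities ~", np.round(best, 2))
```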

    Deep Neural Networks - A Brief History

    An introduction to deep neural networks and their history.

    Comparison of Langevin and Markov channel noise models for neuronal signal generation

    The stochastic opening and closing of voltage-gated ion channels produces noise in neurons. The effect of this noise on neuronal performance has been modelled using either an approximate (Langevin) model based on stochastic differential equations, or an exact model based on a Markov process description of channel gating. Yet whether the Langevin model accurately reproduces the channel noise produced by the Markov model remains unclear. Here we present a comparison between Langevin and Markov models of channel noise in neurons, using single-compartment Hodgkin-Huxley models containing either Na+ and K+ voltage-gated ion channels, or only K+ channels. The performance of the Langevin and Markov models was quantified over a range of stimulus statistics, membrane areas and channel numbers. We find that, in comparison to the Markov model, the Langevin model underestimates the noise contributed by voltage-gated ion channels and consequently overestimates information rates, for both spiking and non-spiking membranes. The difference between the two models persists even as the number of channels increases. This suggests that the Langevin model may not be suitable for accurately simulating channel noise in neurons, even in simulations with large numbers of ion channels.
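    The core of such a comparison can be illustrated on a single voltage-clamped K+ gating variable: simulate N two-state gates exactly with binomial transitions (Markov), and as a stochastic differential equation with state-dependent noise (Langevin), then compare the stationary fluctuations. For this simple two-state case the two agree closely; the discrepancies the paper reports arise in full multi-gate, voltage-dependent simulations. The rate functions below are the standard Hodgkin-Huxley n-gate rates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stationary fluctuations of the K+ activation gate at a clamped voltage:
# exact Markov simulation of N two-state gates (binomial transitions)
# versus the Langevin diffusion approximation.
V, N, dt, steps = -50.0, 1000, 0.01, 100_000

alpha = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))  # open rate (1/ms)
beta = 0.125 * np.exp(-(V + 65.0) / 80.0)                       # close rate (1/ms)

# --- exact Markov: integer count of open gates ---
n_open, markov = N // 2, np.empty(steps)
p_open, p_close = 1.0 - np.exp(-alpha * dt), 1.0 - np.exp(-beta * dt)
for i in range(steps):
    n_open += rng.binomial(N - n_open, p_open) - rng.binomial(n_open, p_close)
    markov[i] = n_open / N

# --- Langevin: SDE for the open fraction with state-dependent noise ---
n, langevin = 0.5, np.empty(steps)
for i in range(steps):
    drift = alpha * (1.0 - n) - beta * n
    sigma = np.sqrt((alpha * (1.0 - n) + beta * n) / N)
    n = min(max(n + drift * dt + sigma * np.sqrt(dt) * rng.normal(), 0.0), 1.0)
    langevin[i] = n

burn = steps // 2   # discard the transient before measuring fluctuations
print(f"Markov   variance: {markov[burn:].var():.3e}")
print(f"Langevin variance: {langevin[burn:].var():.3e}")
```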

    Synthesis of neural networks for spatio-temporal spike pattern recognition and processing

    The advent of large-scale neural computational platforms has highlighted the lack of algorithms for synthesizing neural structures to perform predefined cognitive tasks. The Neural Engineering Framework offers one such synthesis, but it is most effective for a spike-rate representation of neural information, and it requires a large number of neurons to implement simple functions. We describe a neural network synthesis method that generates synaptic connectivity for neurons which process time-encoded neural signals, and which makes very sparse use of neurons. The method allows the user to specify arbitrary neuronal characteristics, such as axonal and dendritic delays and synaptic transfer functions, and then solves for the optimal input-output relationship using computed dendritic weights. The method may be used for batch or online learning and has an extremely fast optimization process. We demonstrate its use in generating a network to recognize speech that is sparsely encoded as spike times.
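    Solving for dendritic weights given fixed delays and synaptic transfer functions is a linear problem, so it can be done in one shot by least squares. The toy sketch below shows the idea; the alpha-function synaptic kernels, random spike trains, random integer delays and sine-wave target are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each candidate input is a spike train passed through a synaptic kernel
# with its own dendritic delay; the dendritic weights are then solved in
# one shot so the weighted sum matches a desired somatic signal.
T, dt, n_inputs = 500, 1.0, 20                 # time steps (ms), inputs
t = np.arange(T) * dt

def alpha_kernel(tau):
    k = (t / tau) * np.exp(1.0 - t / tau)      # alpha-function EPSP shape
    return k / k.max()

X = np.zeros((T, n_inputs))
for j in range(n_inputs):
    spikes = rng.random(T) < 0.02              # Poisson-like input spikes
    delay = rng.integers(0, 10)                # per-synapse dendritic delay
    psp = np.convolve(spikes.astype(float), alpha_kernel(5.0))[:T]
    X[delay:, j] = psp[:T - delay]

target = np.sin(2 * np.pi * t / 100.0)         # desired somatic waveform

w, *_ = np.linalg.lstsq(X, target, rcond=None) # one-shot optimal weights
print("residual RMS:", np.sqrt(np.mean((X @ w - target) ** 2)))
```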