
    Data Assimilation using a GPU Accelerated Path Integral Monte Carlo Approach

    The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a Graphics Processing Unit (GPU). We demonstrate the application of the method to an example in which the transmembrane voltage time series of a simulated neuron serves as input to a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300 compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases. (Comment: 5 figures; submitted to Journal of Computational Physics)
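    To make the idea concrete, here is a minimal, hedged sketch of path-space Metropolis sampling for data assimilation. The quadratic action, proposal width, and chain count are all hypothetical, and NumPy vectorization over independent chains stands in for the GPU thread parallelism described in the abstract (on real hardware one would use, e.g., CUDA or CuPy).

```python
# Toy path-space Metropolis sampler: the posterior over paths x is taken
# proportional to exp(-action(x)). All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, n_chains = 50, 1024                       # path length, parallel chains
y = np.sin(0.2 * np.arange(T)) + 0.1 * rng.standard_normal(T)  # synthetic data

def action(x):
    # Action = model-error term (smoothness prior) + measurement-error term.
    model_err = np.sum((x[:, 1:] - x[:, :-1]) ** 2, axis=1) / (2 * 0.05 ** 2)
    meas_err = np.sum((x - y) ** 2, axis=1) / (2 * 0.1 ** 2)
    return model_err + meas_err

x = np.tile(y, (n_chains, 1))                # start every chain at the data
a = action(x)
for _ in range(2000):
    prop = x + 0.02 * rng.standard_normal(x.shape)      # random-walk proposal
    a_prop = action(prop)
    accept = rng.random(n_chains) < np.exp(a - a_prop)  # Metropolis rule
    x[accept], a[accept] = prop[accept], a_prop[accept]

mean_path = x.mean(axis=0)                   # posterior-mean path estimate
```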

    A unified approach to linking experimental, statistical and computational analysis of spike train data

    A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that were consistent with observed spiking data (and included the model that generated the data), but have yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow, intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach – linking statistical, computational, and experimental neuroscience – provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data. (Published version)
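    As a hedged illustration of the estimation machinery (not the authors' algorithm or biophysical models), the sketch below runs a bootstrap particle filter on a toy state-space spiking model: a latent current with assumed AR(1) dynamics drives Bernoulli spike observations, and the filter tracks the latent state from the spikes alone. All dynamics and constants are hypothetical.

```python
# Bootstrap particle filter for a toy spiking state-space model.
import numpy as np

rng = np.random.default_rng(1)
T, N = 200, 500                              # time steps, particles

# Synthetic data: latent current g_t and Bernoulli spikes s_t.
g_true = np.zeros(T)
spikes = np.zeros(T, dtype=int)
for t in range(1, T):
    g_true[t] = 0.95 * g_true[t - 1] + 0.1 * rng.standard_normal()
    p = 1 / (1 + np.exp(-(g_true[t] - 0.5)))          # spiking probability
    spikes[t] = rng.random() < p

particles = np.zeros(N)
estimate = np.zeros(T)
for t in range(1, T):
    particles = 0.95 * particles + 0.1 * rng.standard_normal(N)  # propagate
    p = 1 / (1 + np.exp(-(particles - 0.5)))
    w = p if spikes[t] else (1 - p)                   # observation likelihood
    w = w / w.sum()
    particles = particles[rng.choice(N, size=N, p=w)] # resample
    estimate[t] = particles.mean()

print(np.corrcoef(estimate, g_true)[0, 1])            # tracking quality
```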

    Computing with the Integrate and Fire Neuron: Weber's Law, Multiplication and Phase Detection

    The integrate and fire model (Stein, 1967) provides an analytically tractable formalism of neuronal firing rate in terms of a neuron's membrane time constant, threshold and refractory period. Integrate and fire (IAF) neurons have mainly been used to model physiologically realistic spike trains, but little application of the IAF model appears to have been made in an explicitly computational context. In this paper we show that the transfer function of an IAF neuron provides, over a wide parameter range, a compressive nonlinearity sufficiently close to that of the logarithm so that IAF neurons can be used to multiply neural signals by mere addition of their outputs. Thus, although the IAF transfer function is not explicitly logarithmic, its compressive parameter regime supports a simple, single neuron model for multiplication. A simulation of the IAF multiplier shows that under a wide choice of parameters, the IAF neuron can multiply its inputs to within a 5% relative error. We also show that an IAF neuron under a different, yet biologically reasonable, parameter regime can have a quasi-linear transfer function, acting as an adder or a gain node. We then show an application in which the compressive transfer function of the IAF model provides a simple mechanism for phase-detection: multiplication of 40Hz phasic inputs followed by low-pass filtering yields an output that is a quasi-linear function of the relative phase of the inputs. This is a neural version of the heterodyne phase detection principle. Finally, we briefly discuss the precision and dynamic range of an IAF multiplier that is restricted to reasonable firing rates (in the range of 10-300 Hz) and reasonable computation time (in the range of 25-200 milliseconds). (National Institute of Mental Health 5R01MH45969-04; Office of Naval Research N00014-95-1-0409)
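    The compressive transfer function the abstract refers to can be written down in closed form for the leaky IAF neuron with constant input: f(I) = 1/(t_ref + τ ln(IR/(IR − v_th))) for IR > v_th. The sketch below evaluates it with illustrative parameters (not the paper's values) and sums the rates of two such neurons, which in the compressive regime behaves approximately like a scaled logarithm of the product of the inputs.

```python
# Leaky integrate-and-fire transfer function and log-like multiplication.
# Membrane constants are illustrative, not the paper's values.
import numpy as np

tau, t_ref, v_th, R = 0.02, 0.002, 1.0, 1.0   # 20 ms, 2 ms refractory, a.u.

def rate(I):
    # Firing rate for constant input current I with I*R > v_th:
    #   f(I) = 1 / (t_ref + tau * ln(I*R / (I*R - v_th)))
    return 1.0 / (t_ref + tau * np.log(I * R / (I * R - v_th)))

I1, I2 = 3.0, 5.0
print(rate(I1), rate(I2))          # ~99 Hz and ~155 Hz: compressive in I
summed = rate(I1) + rate(I2)       # adding the outputs of two IAF neurons
# In the compressive regime rate(I) is close to a scaled logarithm, so the
# summed rate varies approximately with log(I1 * I2), i.e. with the product.
print(summed)
```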

    Detecting and Estimating Signals in Noisy Cable Structures, I: Neuronal Noise Sources

    In recent theoretical approaches addressing the problem of neural coding, tools from statistical estimation and information theory have been applied to quantify the ability of neurons to transmit information through their spike outputs. These techniques, though fairly general, ignore the specific nature of neuronal processing in terms of its known biophysical properties. However, a systematic study of processing at various stages in a biophysically faithful model of a single neuron can identify the role of each stage in information transfer. Toward this end, we carry out a theoretical analysis of the information loss of a synaptic signal propagating along a linear, one-dimensional, weakly active cable due to neuronal noise sources along the way, using both a signal reconstruction and a signal detection paradigm. Here we begin such an analysis by quantitatively characterizing three sources of membrane noise: (1) thermal noise due to the passive membrane resistance, (2) noise due to stochastic openings and closings of voltage-gated membrane channels (Na^+ and K^+), and (3) noise due to random, background synaptic activity. Using analytical expressions for the power spectral densities of these noise sources, we compare their magnitudes in the case of a patch of membrane from a cortical pyramidal cell and explore their dependence on different biophysical parameters.
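    As a numerical companion to the analytical comparison described above, the sketch below evaluates the flat thermal (Johnson) current-noise PSD, S_I = 4kT/R, against a Lorentzian spectrum of the kind commonly used for stochastic channel gating. The resistance, Lorentzian amplitude, and corner frequency are illustrative values, not the paper's.

```python
# Thermal (Johnson) current noise of a passive membrane resistance versus a
# Lorentzian channel-noise spectrum. All parameter values are illustrative.
import numpy as np

k, T = 1.38e-23, 300.0            # Boltzmann constant (J/K), temperature (K)
R = 1e9                           # hypothetical patch resistance (ohm)
f = np.logspace(0, 4, 200)        # frequencies from 1 Hz to 10 kHz

S_thermal = np.full_like(f, 4 * k * T / R)   # flat PSD, ~1.7e-29 A^2/Hz
S0, f_c = 1e-26, 100.0                       # Lorentzian amplitude, corner freq
S_channel = S0 / (1 + (f / f_c) ** 2)        # channel-gating noise PSD

# Compare magnitudes across frequency, as the paper does analytically:
print(S_thermal[0], S_channel[0], S_channel[-1])
```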

    Adaptive Neural Coding Dependent on the Time-Varying Statistics of the Somatic Input Current

    It is generally assumed that nerve cells optimize their performance to reflect the statistics of their input. Electronic circuit analogs of neurons require similar methods of self-optimization for stable and autonomous operation. We here describe and demonstrate a biologically plausible adaptive algorithm that enables a neuron to adapt the current threshold and the slope (or gain) of its current-frequency relationship to match the mean (or dc offset) and variance (or dynamic range or contrast) of the time-varying somatic input current. The adaptation algorithm estimates the somatic current signal from the spike train by way of the intracellular somatic calcium concentration, thereby continuously adjusting the neuron's firing dynamics. This principle is shown to work in an analog VLSI-designed silicon neuron.
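    A loose sketch of the adaptation loop follows. It is a hypothetical rate-based caricature, not the paper's algorithm or its VLSI circuit: a calcium-like low-pass of the firing rate supplies running estimates of the output mean and variance, and two slow updates move the current threshold and the f-I gain toward a target operating point.

```python
# Caricature of threshold/gain adaptation driven by a calcium-like signal.
# Update rules, targets, and rates are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
theta, gain = 0.0, 1.0               # adaptable current threshold and f-I slope
ca = 0.0                             # calcium-like low-pass of the firing rate
eta, tau_ca = 1e-3, 0.05             # adaptation rate, calcium smoothing
r_target, var_target = 20.0, 100.0   # desired mean output and output variance
r_mean, r_var = 0.0, 0.0             # running output statistics

for step in range(200_000):
    I = 2.0 + 0.5 * rng.standard_normal()          # somatic input current (a.u.)
    r = max(gain * (I - theta), 0.0)               # rectified-linear f-I curve
    ca += tau_ca * (r - ca)                        # calcium integrates spiking
    r_mean += eta * (ca - r_mean)                  # running mean (dc offset)
    r_var += eta * ((ca - r_mean) ** 2 - r_var)    # running variance (contrast)
    theta += eta * (r_mean - r_target) / max(gain, 1e-6)  # center on the mean
    gain *= 1.0 + 0.1 * eta * np.sign(var_target - r_var) # match the contrast

print(theta, gain, r_mean, r_var)    # settles near the target operating point
```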

    Fast recursive filters for simulating nonlinear dynamic systems

    A fast and accurate computational scheme for simulating nonlinear dynamic systems is presented. The scheme assumes that the system can be represented by a combination of components of only two different types: first-order low-pass filters and static nonlinearities. The parameters of these filters and nonlinearities may depend on system variables, and the topology of the system may be complex, including feedback. Several examples taken from neuroscience are given: phototransduction, photopigment bleaching, and spike generation according to the Hodgkin-Huxley equations. The scheme uses two slightly different forms of autoregressive filters, with an implicit delay of zero for feedforward control and an implicit delay of half a sample distance for feedback control. On a fairly complex model of the macaque retinal horizontal cell it computes, for a given level of accuracy, 1-2 orders of magnitude faster than 4th-order Runge-Kutta. The computational scheme has minimal memory requirements, and is also suited for computation on a stream processor, such as a GPU (Graphics Processing Unit). (Comment: 20 pages, 8 figures, 1 table. A comparison with 4th-order Runge-Kutta integration shows that the new algorithm is 1-2 orders of magnitude faster. The paper is in press at Neural Computation.)
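    The scheme's basic building block is easy to state concretely: a first-order low-pass filter implemented as a one-tap autoregressive (recursive) filter, which is exact when the input is held constant across each sample. A minimal sketch with illustrative constants (not the paper's retina model) is given below; letting tau depend on system variables at each step is what admits nonlinear models such as Hodgkin-Huxley gating.

```python
# One-tap recursive low-pass filter, exact for zero-order-hold input.
import numpy as np

def lowpass(x, tau, dt):
    # Recursive update for y' = (x - y) / tau:
    #   y[n] = a * y[n-1] + (1 - a) * x[n],   a = exp(-dt / tau)
    a = np.exp(-dt / tau)
    y = np.empty_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = a * y[n - 1] + (1 - a) * x[n]
    return y

dt = 1e-4                                    # 0.1 ms sampling
t = np.arange(0.0, 0.1, dt)
x = (t > 0.02).astype(float)                 # step input at t = 20 ms
y = lowpass(x, tau=5e-3, dt=dt)              # 5 ms membrane-like time constant
```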

    Estimation of synaptic conductances using control theory

    Unveiling the input a neuron receives and distinguishing between excitation and inhibition provides information on local connectivity and brain operating conditions. However, experimental techniques do not allow direct recordings of synaptic inputs, and thus inverse methods are sought to retrieve synaptic currents/conductances from cell voltage. We propose a new method to estimate synaptic currents in neuron models using control theory (equivalent control and robust exact differentiation) and disentangle excitatory from inhibitory synaptic conductances.
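    To fix ideas, the sketch below solves a toy version of this inverse problem with a plain finite-difference derivative standing in for the robust exact differentiator, and without the excitatory/inhibitory disentangling step. The membrane parameters and synaptic input are hypothetical, in arbitrary consistent units.

```python
# Recover the total synaptic current from a voltage trace by inverting the
# passive membrane equation  C dV/dt = -g_L (V - E_L) + I_syn(t).
import numpy as np

dt, C, g_L, E_L = 1e-4, 1.0, 0.1, -65.0         # illustrative constants
t = np.arange(0.0, 1.0, dt)
I_syn = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))   # "unknown" synaptic current

# Forward-simulate the passive membrane to generate the measured voltage.
V = np.empty_like(t)
V[0] = E_L
for n in range(1, len(t)):
    V[n] = V[n - 1] + dt * (-g_L * (V[n - 1] - E_L) + I_syn[n - 1]) / C

# Inverse step: differentiate V and solve the membrane equation for I_syn.
dVdt = np.gradient(V, dt)                       # stand-in differentiator
I_est = C * dVdt + g_L * (V - E_L)

print(np.max(np.abs(I_est - I_syn)))            # small reconstruction error
```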