
    Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data.

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements an interior-point line search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine-ionic-channel conductance model to obtain complete models, which we then used to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large-scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models automatically, treating all data as equal quantities and requiring minimal additional insight.
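
    A minimal sketch of the assimilation idea, assuming a toy one-conductance leak model and SciPy's interior-point-style 'trust-constr' solver in place of the paper's nine-channel model and optimization code; all names and parameter values below are illustrative.

```python
# Minimal sketch: fit a toy leak-conductance model to a noisy voltage
# trace by constrained nonlinear optimization. Illustrative stand-in
# for the paper's nine-channel HVC model, not its actual method.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

def simulate(params, t, i_inj):
    g_leak, e_leak, c_m = params
    def dvdt(v, t_):
        i = np.interp(t_, t, i_inj)        # injected current protocol
        return (-g_leak * (v - e_leak) + i) / c_m
    return odeint(dvdt, e_leak, t)[:, 0]

def objective(params, t, i_inj, v_data):
    # least-squares misfit over the assimilation window
    return np.sum((simulate(params, t, i_inj) - v_data) ** 2)

t = np.linspace(0.0, 100.0, 1000)          # ms
i_inj = 0.5 * (t > 20) * (t < 80)          # step-current protocol
v_data = simulate([0.1, -65.0, 1.0], t, i_inj) + 0.5 * np.random.randn(t.size)

res = minimize(objective, x0=[0.05, -60.0, 0.8], args=(t, i_inj, v_data),
               method='trust-constr',      # interior-point-style solver
               bounds=[(1e-3, 1.0), (-90.0, -40.0), (0.1, 10.0)])
print(res.x)                               # recovered (g_leak, e_leak, c_m)
```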

    Detecting and Estimating Signals in Noisy Cable Structures, II: Information Theoretical Analysis

    This is the second in a series of articles that seek to recast classical single-neuron biophysics in information-theoretical terms. Classical cable theory focuses on analyzing the voltage or current attenuation of a synaptic signal as it propagates from its dendritic input location to the spike initiation zone. On the other hand, we are interested in analyzing the amount of information lost about the signal in this process due to the presence of various noise sources distributed throughout the neuronal membrane. We use a stochastic version of the linear one-dimensional cable equation to derive closed-form expressions for the second-order moments of the fluctuations of the membrane potential associated with different membrane current noise sources: thermal noise, noise due to the random opening and closing of sodium and potassium channels, and noise due to the presence of “spontaneous” synaptic input. We consider two different scenarios. In the signal estimation paradigm, the time course of the membrane potential at a location on the cable is used to reconstruct the detailed time course of a random, band-limited current injected some distance away. Estimation performance is characterized in terms of the coding fraction and the mutual information. In the signal detection paradigm, the membrane potential is used to determine whether a distant synaptic event occurred within a given observation interval. In the light of our analytical results, we speculate that the length of weakly active apical dendrites might be limited by the information loss due to the accumulated noise between distal synaptic input sites and the soma, and that the presence of dendritic nonlinearities probably serves to increase dendritic information transfer.
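
    The two estimation metrics named here can be sketched under a standard Gaussian-channel assumption; the signal and noise spectra below are hypothetical stand-ins, not the paper's closed-form cable results.

```python
# Sketch of the estimation paradigm's metrics: a flat band-limited
# signal seen through additive noise with an assumed Lorentzian PSD.
import numpy as np

f = np.linspace(0.1, 1000.0, 10000)              # Hz
s_signal = np.where(f < 100.0, 1e-2, 0.0)        # band-limited signal PSD
s_noise = 1e-3 / (1.0 + (f / 50.0) ** 2)         # assumed noise PSD

snr = s_signal / s_noise
# Shannon bound on the information rate (bits/s) of a Gaussian channel
info_rate = np.trapz(np.log2(1.0 + snr), f)

# Coding fraction: fraction of the signal captured by the optimal
# (Wiener) reconstruction; the error spectrum is S_signal / (1 + SNR)
var_signal = np.trapz(s_signal, f)
var_error = np.trapz(s_signal / (1.0 + snr), f)
coding_fraction = 1.0 - np.sqrt(var_error / var_signal)

print(f"{info_rate:.1f} bits/s, coding fraction {coding_fraction:.2f}")
```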

    Detecting and Estimating Signals in Noisy Cable Structures, I: Neuronal Noise Sources

    In recent theoretical approaches addressing the problem of neural coding, tools from statistical estimation and information theory have been applied to quantify the ability of neurons to transmit information through their spike outputs. These techniques, though fairly general, ignore the specific nature of neuronal processing in terms of its known biophysical properties. However, a systematic study of processing at various stages in a biophysically faithful model of a single neuron can identify the role of each stage in information transfer. Toward this end, we carry out a theoretical analysis of the information loss of a synaptic signal propagating along a linear, one-dimensional, weakly active cable due to neuronal noise sources along the way, using both a signal reconstruction and a signal detection paradigm. Here we begin such an analysis by quantitatively characterizing three sources of membrane noise: (1) thermal noise due to the passive membrane resistance, (2) noise due to stochastic openings and closings of voltage-gated membrane channels (Na^+ and K^+), and (3) noise due to random, background synaptic activity. Using analytical expressions for the power spectral densities of these noise sources, we compare their magnitudes in the case of a patch of membrane from a cortical pyramidal cell and explore their dependence on different biophysical parameters.
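
    As a rough illustration of how such spectra might be compared, the sketch below evaluates a flat thermal (Johnson) spectrum, an assumed Lorentzian channel-noise spectrum, and an assumed double-Lorentzian synaptic spectrum; the parameter values are illustrative, not those of the paper.

```python
# Sketch comparing three membrane noise PSDs for a patch of membrane;
# shapes and magnitudes here are assumptions for illustration only.
import numpy as np

k_B = 1.38e-23                  # J/K
T = 310.0                       # K
R_m = 1e8                       # ohm, assumed patch membrane resistance

f = np.logspace(0, 4, 200)      # 1 Hz .. 10 kHz

# (1) Thermal (Johnson) voltage noise across the membrane resistance
S_thermal = 4.0 * k_B * T * R_m * np.ones_like(f)      # V^2/Hz

# (2) Channel noise: random gating, assumed Lorentzian spectrum
S0_ch, f_c = 1e-12, 100.0
S_channel = S0_ch / (1.0 + (f / f_c) ** 2)

# (3) Background synaptic noise: Poisson arrivals filtered by a
# synaptic kernel give a double-Lorentzian shape (Campbell's theorem)
S0_syn, f_s = 1e-11, 30.0
S_synaptic = S0_syn / (1.0 + (f / f_s) ** 2) ** 2

for name, s in [("thermal", S_thermal), ("channel", S_channel),
                ("synaptic", S_synaptic)]:
    print(f"{name:9s} PSD at 10 Hz: {np.interp(10.0, f, s):.2e} V^2/Hz")
```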

    Control and Synchronization of Neuron Ensembles

    Synchronization of oscillations is a phenomenon prevalent in natural, social, and engineering systems. Controlling synchronization of oscillating systems is motivated by a wide range of applications, from neurological treatment of Parkinson's disease to the design of neurocomputers. In this article, we study the control of an ensemble of uncoupled neuron oscillators described by phase models. We examine controllability of such a neuron ensemble for various phase models and, furthermore, study the related optimal control problems. In particular, by employing Pontryagin's maximum principle, we analytically derive optimal controls for spiking single- and two-neuron systems, and analyze the applicability of the latter to an ensemble system. Finally, we present a robust computational method for optimal control of spiking neurons based on pseudospectral approximations. The methodology developed here applies broadly to the control of general nonlinear phase oscillators.
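
    The phase-model setup can be sketched as uncoupled oscillators dθ/dt = ω + Z(θ)u(t) driven by a single common control; the sinusoidal phase response curve and weak sinusoidal control below are illustrative choices, not the optimal controls derived in the article.

```python
# Sketch of an ensemble of uncoupled phase oscillators under one
# common control input u(t); PRC and control are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, dt, t_end = 50, 1e-3, 5.0
omega = 2 * np.pi * rng.uniform(0.9, 1.1, n)   # heterogeneous frequencies
theta = rng.uniform(0.0, 2 * np.pi, n)

def Z(th):                                     # phase response curve
    return np.sin(th)

for step in range(int(t_end / dt)):
    u = 0.5 * np.cos(2 * np.pi * step * dt)    # weak common control
    theta += (omega + Z(theta) * u) * dt       # Euler step of phase model

# order parameter |r| measures how synchronized the ensemble ended up
print(abs(np.mean(np.exp(1j * theta))))
```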

    Approximate Methods for State-Space Models

    State-space models provide an important body of techniques for analyzing time-series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation histories, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This {\em Laplace-Gaussian filter} (LGF) gives fast, recursive, deterministic state estimates, with an error which is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
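
    A minimal sketch of the filtering idea, assuming a scalar AR(1) state observed through Poisson spike counts (a common stand-in for neural decoding): each update finds the posterior mode by Newton's method and takes the variance from the curvature there, as Laplace's method prescribes at first order.

```python
# First-order Laplace-Gaussian filter sketch for an AR(1) state with
# Poisson observations; model and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
a, q = 0.95, 0.1                 # state transition and process variance
T = 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.normal()
y = rng.poisson(np.exp(x))       # spike counts with log-rate = state

m, v = 0.0, 1.0                  # filter mean and variance
est = np.empty(T)
for t in range(T):
    m_pred, v_pred = a * m, a * a * v + q              # predict
    m = m_pred
    for _ in range(20):                                # Newton steps to
        grad = y[t] - np.exp(m) - (m - m_pred) / v_pred  # the posterior mode
        hess = -np.exp(m) - 1.0 / v_pred
        m -= grad / hess
    v = -1.0 / hess              # Laplace: variance from the curvature
    est[t] = m

print(f"RMSE = {np.sqrt(np.mean((est - x) ** 2)):.3f}")
```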

    Synaptic Plasticity Can Produce and Enhance Direction Selectivity

    The discrimination of the direction of movement of sensory images is critical to the control of many animal behaviors. We propose a parsimonious model of motion processing that generates direction selective responses using short-term synaptic depression and can reproduce salient features of direction selectivity found in a population of neurons in the midbrain of the weakly electric fish Eigenmannia virescens. The model achieves direction selectivity with an elementary Reichardt motion detector: information from spatially separated receptive fields converges onto a neuron via dynamically different pathways. In the model, these differences arise from convergence of information through distinct synapses that either exhibit or do not exhibit short-term synaptic depression—short-term depression produces phase-advances relative to nondepressing synapses. Short-term depression is modeled using two state variables, a fast process with a time constant on the order of tens to hundreds of milliseconds, and a slow process with a time constant on the order of seconds to tens of seconds. These processes correspond to naturally occurring time constants observed at synapses that exhibit short-term depression. Inclusion of the fast process is sufficient for the generation of temporal disparities that are necessary for direction selectivity in the elementary Reichardt circuit. The addition of the slow process can enhance direction selectivity over time for stimuli that are sustained for periods of seconds or more. Transient (i.e., short-duration) stimuli do not evoke the slow process and therefore do not elicit enhanced direction selectivity. The addition of a sustained global, synchronous oscillation in the gamma frequency range can, however, drive the slow process and enhance direction selectivity to transient stimuli. This enhancement effect does not, however, occur for all combinations of model parameters. The ratio of depressing and nondepressing synapses determines the effects of the addition of the global synchronous oscillation on direction selectivity. These ingredients, short-term depression, spatial convergence, and gamma-band oscillations, are ubiquitous in sensory systems and may be used in Reichardt-style circuits for the generation and enhancement of a variety of biologically relevant spatiotemporal computations.
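
    A compact sketch of the core mechanism, assuming a sinusoidal moving stimulus and illustrative depression parameters: one arm of a Reichardt pair passes through a two-variable depressing synapse whose phase advance substitutes for an explicit delay line.

```python
# Sketch of a Reichardt pair with one depressing pathway; stimulus,
# time constants, and release fractions are illustrative.
import numpy as np

dt = 1e-3
t = np.arange(0.0, 5.0, dt)
# the same moving stimulus seen at two receptive fields, 50 ms apart
left = 1.0 + np.sin(2 * np.pi * 2.0 * t)
right = 1.0 + np.sin(2 * np.pi * 2.0 * (t - 0.05))

def depress(rate, tau_fast=0.2, tau_slow=10.0, u_fast=0.3, u_slow=0.02):
    # output = input rate scaled by fast and slow depression variables
    d_f, d_s, out = 1.0, 1.0, np.empty_like(rate)
    for i, r in enumerate(rate):
        d_f += ((1.0 - d_f) / tau_fast - u_fast * r * d_f) * dt
        d_s += ((1.0 - d_s) / tau_slow - u_slow * r * d_s) * dt
        out[i] = r * d_f * d_s
    return out

# opponent outputs: the depressed path from one field multiplies the
# nondepressed signal from the other, and vice versa
fwd = np.mean(depress(left) * right)
bwd = np.mean(depress(right) * left)
print(f"direction selectivity index = {(fwd - bwd) / (fwd + bwd):.3f}")
```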

    Phase resetting of collective rhythm in ensembles of oscillators

    Phase resetting curves characterize the way a system with a collective periodic behavior responds to perturbations. We consider globally coupled ensembles of Sakaguchi-Kuramoto oscillators, and use the Ott-Antonsen theory of ensemble evolution to derive the analytical phase resetting equations. We show the final phase reset value to be composed of two parts: an immediate phase reset directly caused by the perturbation, and the dynamical phase reset resulting from the relaxation of the perturbed system back to its dynamical equilibrium. Analytical, semi-analytical and numerical approximations of the final phase resetting curve are constructed. We support our findings with extensive numerical evidence involving identical and non-identical oscillators. The validity of our theory is discussed in the context of large ensembles approximating the thermodynamic limit.
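
    The setup can be sketched with the Ott-Antonsen equation for the Kuramoto order parameter (the Sakaguchi phase lag is set to zero for brevity); a pulsed perturbation of the mean field exhibits the decomposition into an immediate and a dynamical phase reset. Parameters are illustrative.

```python
# Sketch: Ott-Antonsen order-parameter dynamics, perturbed once, with
# immediate and final phase resets read off; values are illustrative.
import numpy as np

delta, w0, K = 0.1, 1.0, 0.5     # frequency spread, mean freq, coupling
dt, n_steps = 1e-3, 100000

def step(z):
    # Ott-Antonsen equation for a Lorentzian frequency distribution
    dz = (-delta + 1j * w0) * z + 0.5 * K * (z - np.conj(z) * z ** 2)
    return z + dz * dt

z = 0.5 + 0j                     # relax onto the collective limit cycle
for _ in range(n_steps):
    z = step(z)

z_ref, z_pert = z, z + 0.2       # pulsed perturbation of the mean field
immediate = np.angle(z_pert / z_ref)
for _ in range(n_steps):         # relax back to dynamical equilibrium
    z_ref, z_pert = step(z_ref), step(z_pert)

final = np.angle(z_pert / z_ref)
print(f"immediate reset = {immediate:.3f} rad, final reset = {final:.3f} rad")
```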