
    Characterizing synaptic conductance fluctuations in cortical neurons and their influence on spike generation

    Cortical neurons are subject to sustained and irregular synaptic activity, which causes substantial fluctuations of the membrane potential (Vm). We review here different methods to characterize this activity and its impact on spike generation. The simplified, fluctuating point-conductance model of synaptic activity provides the starting point for a variety of methods for the analysis of intracellular Vm recordings. In this model, the synaptic excitatory and inhibitory conductances are described by Gaussian-distributed stochastic variables, or colored conductance noise. Matching experimentally recorded Vm distributions to an invertible theoretical expression derived from the model allows the extraction of parameters characterizing the synaptic conductance distributions. This analysis can be complemented by matching experimental Vm power spectral densities (PSDs) to a theoretical template, although the unexpected scaling properties of experimental PSDs limit the precision of this latter approach. Building on this stochastic characterization of synaptic activity, we also propose methods to qualitatively and quantitatively evaluate spike-triggered averages of synaptic time courses preceding spikes. This analysis points to an essential role for synaptic conductance variance in determining spike times. The presented methods are evaluated using controlled conductance injection in cortical neurons in vitro with the dynamic-clamp technique. We review their applications to the analysis of in vivo intracellular recordings in cat association cortex, which suggest a predominant role for inhibition in determining both sub- and suprathreshold dynamics of cortical neurons embedded in active networks. Comment: 9 figures, Journal of Neuroscience Methods (in press, 2008).
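
    The point-conductance model described above is straightforward to simulate: each synaptic conductance is an Ornstein-Uhlenbeck ("colored noise") process with a mean, a standard deviation, and a correlation time. A minimal sketch in Python; the parameter values are of the order reported for cortical neurons in vivo and are used purely as an illustration, not taken from this paper:

```python
import numpy as np

def ou_conductance(g0, sigma, tau, dt, n_steps, rng):
    """Ornstein-Uhlenbeck ("colored") conductance noise, exact update rule."""
    g = np.empty(n_steps)
    g[0] = g0
    decay = np.exp(-dt / tau)              # per-step autocorrelation
    sd = sigma * np.sqrt(1.0 - decay**2)   # keeps the stationary s.d. at sigma
    for i in range(1, n_steps):
        g[i] = g0 + (g[i - 1] - g0) * decay + sd * rng.standard_normal()
    return g

# Illustrative excitatory and inhibitory conductance traces (uS, s)
rng = np.random.default_rng(0)
dt, n = 1e-4, 100_000
ge = ou_conductance(g0=0.012, sigma=0.003, tau=2.7e-3, dt=dt, n_steps=n, rng=rng)
gi = ou_conductance(g0=0.057, sigma=0.0066, tau=10.5e-3, dt=dt, n_steps=n, rng=rng)
```

    In a dynamic-clamp experiment such traces would be injected as conductances; in simulation they can drive a passive membrane equation to reproduce the fluctuating Vm whose distribution the analysis methods then invert.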

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field (a set of linear filters) followed by a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of the background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive expressions relating the change of gain with respect to both mean and variance to the receptive fields obtained by reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain-modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. Comment: 24 pages, 4 figures, 1 supporting information.
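
    The background-noise gain control discussed here can be illustrated with a toy model far simpler than the conductance-based neurons of the paper: in a leaky integrate-and-fire neuron with a subthreshold mean drive, the firing rate is carried by the input variance, so raising the noise level lifts and flattens the f-I curve. A sketch in dimensionless units (all parameter values are illustrative assumptions):

```python
import numpy as np

def lif_rate(mu, sigma, tau=0.02, v_th=1.0, v_reset=0.0,
             dt=1e-4, t_sim=20.0, seed=1):
    """Mean firing rate (Hz) of a leaky integrate-and-fire neuron driven by
    mean input mu plus Gaussian white noise of strength sigma."""
    rng = np.random.default_rng(seed)
    n = int(t_sim / dt)
    kick = sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal(n)
    v, spikes = v_reset, 0
    for i in range(n):
        v += (mu - v) * dt / tau + kick[i]   # Euler-Maruyama step
        if v >= v_th:
            v, spikes = v_reset, spikes + 1
    return spikes / t_sim

# With a subthreshold mean (mu < v_th) the rate is noise-driven: raising
# sigma lifts the foot of the f-I curve, which is the gain change that the
# linear/nonlinear analysis relates to the reverse-correlation filters.
r_low = lif_rate(mu=0.8, sigma=0.1)
r_high = lif_rate(mu=0.8, sigma=0.4)
```

    Sweeping mu at several fixed sigma values reproduces the family of f-I curves whose variance-dependent slopes the paper analyzes.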

    A numerical solver for a nonlinear Fokker-Planck equation representation of neuronal network dynamics

    To describe the collective behavior of large ensembles of neurons in a neuronal network, a kinetic-theory description was developed in [13, 12], where a macroscopic representation of the network dynamics was derived directly from the microscopic dynamics of individual neurons, modeled as conductance-based, linear, integrate-and-fire point neurons. A diffusion approximation then led to a nonlinear Fokker-Planck equation for the probability density function of neuronal membrane potentials and synaptic conductances. In this work, we propose a deterministic numerical scheme for a Fokker-Planck model of an excitatory-only network. Our numerical solver allows us to obtain the time evolution of the probability density function, and thus the evolution of all macroscopic quantities given by suitable moments of that density. We show that this deterministic scheme is capable of capturing the bistability of stationary states observed in Monte Carlo simulations. Moreover, the transient behavior of the firing rates computed from the Fokker-Planck equation is analyzed in this bistable situation, where a bifurcation scenario (asynchronous convergence towards stationary states, periodic synchronous solutions, or damped oscillatory convergence towards stationary states) can be uncovered by increasing the strength of the excitatory coupling. Finally, the computation of moments of the probability distribution allows us to validate the applicability of the moment-closure assumption used in [13] to further simplify the kinetic theory.
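
    A minimal version of such a deterministic solver, restricted to a single uncoupled population (fixed drive, so no recurrent feedback and no bistability), can be written as an explicit conservative finite-difference scheme for the membrane-potential density; everything below is a simplified sketch, not the scheme of the paper:

```python
import numpy as np

# Explicit finite-difference scheme for the 1D Fokker-Planck equation of a
# noise-driven integrate-and-fire population:
#   d rho/dt = -d/dv[ ((mu - v)/tau) rho ] + (sigma^2/tau) d^2 rho/dv^2
# with an absorbing boundary at v_th; the escaped probability (the firing
# rate) is re-injected at v_reset.
tau, mu, sigma = 0.02, 0.9, 0.3          # time constant, mean drive, noise
v_th, v_reset = 1.0, 0.0
nv = 200
v = np.linspace(-1.0, v_th, nv)
dv = v[1] - v[0]
D = sigma**2 / tau                        # diffusion coefficient
dt = 0.2 * dv**2 / D                      # explicit-scheme stability margin
i_reset = int(np.argmin(np.abs(v - v_reset)))

rho = np.exp(-0.5 * ((v - 0.2) / 0.1) ** 2)   # arbitrary initial density
rho /= rho.sum() * dv                          # normalize to total mass 1

rates = []
for _ in range(int(0.2 / dt)):            # evolve for 0.2 s of model time
    drift = (mu - v) * rho / tau
    flux = np.zeros(nv + 1)               # probability fluxes at cell edges
    flux[1:-1] = 0.5 * (drift[:-1] + drift[1:]) - D * np.diff(rho) / dv
    rate = max(flux[-2], 0.0)             # outgoing flux at threshold = rate
    rho = rho - dt * np.diff(flux) / dv
    rho[-1] = 0.0                         # absorbing boundary at v_th
    rho[i_reset] += rate * dt / dv        # re-inject fired probability
    rates.append(rate)
```

    In the full excitatory network model, mu and sigma would themselves depend on the population firing rate through the recurrent coupling, which is what produces the bistable stationary states; with fixed drive, as here, the density simply relaxes to a unique steady state.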

    Recurrence-mediated suprathreshold stochastic resonance

    It has previously been shown that the encoding of time-dependent signals by feedforward networks (FFNs) of processing units exhibits suprathreshold stochastic resonance (SSR): signal transmission is optimal at a finite level of independent, per-unit stochasticity. In this study, a recurrent spiking network is simulated to demonstrate that SSR can also be caused by network noise in place of intrinsic noise. The level of autonomously generated fluctuations in the network can be controlled by the strength of the synapses, and hence the coding fraction (our measure of information transmission) exhibits a maximum as a function of the synaptic coupling strength. The presence of a coding peak at an optimal coupling strength is robust over a wide range of individual, network, and signal parameters, although the optimal strength and peak magnitude depend on the parameter being varied. We also perform control experiments with an FFN, illustrating that the optimized coding fraction is due to the change in noise level and not to other effects entailed by changing the coupling strength. These results also indicate that the non-white (temporally correlated) network noise in general provides an extra boost to encoding performance compared to the FFN driven by intrinsic white-noise fluctuations. Funding: Deutsche Forschungsgemeinschaft; Humboldt-Universität zu Berlin (1034). Peer reviewed.
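
    The underlying SSR effect is easy to reproduce in the feedforward control setting: a population of identical threshold units encoding a common signal transmits best at an intermediate level of independent noise. The sketch below uses the signal-output correlation as a crude stand-in for the coding fraction (which is properly defined via linear stimulus reconstruction); all parameters are illustrative:

```python
import numpy as np

def population_signal_corr(sigma_noise, n_units=32, n_samples=20_000, seed=0):
    """Correlation between a common Gaussian signal and the summed binary
    output of n_units threshold units with independent Gaussian noise."""
    rng = np.random.default_rng(seed)
    s = rng.standard_normal(n_samples)                   # shared input signal
    noise = sigma_noise * rng.standard_normal((n_units, n_samples))
    y = (s[None, :] + noise > 0.0).sum(axis=0)           # population count
    return np.corrcoef(s, y)[0, 1]

# Transmission is non-monotonic in the noise level: with too little noise all
# units respond identically (the population is redundant), with too much the
# signal is swamped; the optimum lies in between.
c = {sig: population_signal_corr(sig) for sig in (0.05, 1.0, 8.0)}
```

    In the recurrent network of the paper, the role of sigma_noise is played by the self-generated network fluctuations, whose level is set by the synaptic coupling strength.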

    Specific Relationship Between the Shape of the Readiness Potential, Subjective Decision Time, and Waiting Time Predicted by an Accumulator Model with Temporally Autocorrelated Input Noise

    Self-initiated movements are reliably preceded by a gradual buildup of neuronal activity known as the readiness potential (RP). Recent evidence suggests that the RP may reflect subthreshold stochastic fluctuations in neural activity that can be modeled as a process of accumulation to bound. One element of accumulator models that has been largely overlooked in the literature is the stochastic term, which is traditionally modeled as Gaussian white noise. While there may be practical reasons for this choice, we have long known that noise in neural systems is not white: it is long-range correlated, with spectral density of the form 1/f^β (with roughly 1 < β < 3) across a broad range of spatial scales. I explored the behavior of a leaky stochastic accumulator when the noise over which it accumulates is temporally autocorrelated. I also allowed for the possibility that the RP, as measured at the scalp, might reflect the input to the accumulator (i.e., its stochastic noise component) rather than its output. These two premises led to two novel predictions that I empirically confirmed on behavioral and electroencephalography data from human subjects performing a self-initiated movement task. In addition to generating these two predictions, the model also suggested biologically plausible levels of autocorrelation, consistent with the degree of autocorrelation in our empirical data and in prior reports. These results expose new perspectives for accumulator models by suggesting that the spectral properties of the stochastic input should be allowed to vary, consistent with the nature of biological neural noise.
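
    The two model ingredients, 1/f^β input noise and a leaky accumulator with a bound, can both be sketched in a few lines. The spectral-synthesis noise generator and the parameter values below are illustrative assumptions, not the fitted model of the paper:

```python
import numpy as np

def powerlaw_noise(beta, n, dt, rng):
    """Unit-variance Gaussian noise with power spectrum ~ 1/f^beta
    (spectral synthesis: shape the amplitudes, randomize the phases)."""
    freqs = np.fft.rfftfreq(n, dt)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)           # amplitude ~ f^(-beta/2)
    phases = np.exp(2j * np.pi * rng.random(len(freqs)))
    x = np.fft.irfft(amp * phases, n)
    return x / x.std()

def first_crossing(noise, drift, leak, threshold, dt):
    """Leaky accumulator dx = (drift - leak*x + noise) dt; index of the
    first threshold crossing (the movement decision), or None."""
    x = 0.0
    for i, eta in enumerate(noise):
        x += (drift - leak * x + eta) * dt
        if x >= threshold:
            return i
    return None

# Subthreshold drift (drift/leak < threshold): crossings are noise-driven,
# and with beta ~ 2 the slow fluctuations dominate the waiting times.
rng = np.random.default_rng(2)
eta = 3.0 * powerlaw_noise(beta=2.0, n=100_000, dt=1e-3, rng=rng)
step = first_crossing(eta, drift=0.5, leak=1.0, threshold=1.0, dt=1e-3)
```

    Varying beta and comparing the pre-crossing average of eta (the model's "RP", if the scalp signal reflects the accumulator's input) against empirical data is the kind of exercise the paper's predictions rest on.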