
    The Spatial Structure of Stimuli Shapes the Timescale of Correlations in Population Spiking Activity

    Throughout the central nervous system, the timescale over which pairs of neural spike trains are correlated is shaped by stimulus structure and behavioral context. Such shaping is thought to underlie important changes in the neural code, but the neural circuitry responsible is largely unknown. In this study, we investigate a stimulus-induced shaping of pairwise spike train correlations in the electrosensory system of weakly electric fish. Simultaneous single-unit recordings of principal electrosensory cells show that an increase in the spatial extent of stimuli increases correlations at short (~10 ms) timescales while simultaneously reducing correlations at long (~100 ms) timescales. A spiking network model of the first two stages of electrosensory processing replicates this correlation shaping, under the assumptions that spatially broad stimuli both saturate feedforward afferent input and recruit an open-loop inhibitory feedback pathway. Our model predictions are experimentally verified using both the natural heterogeneity of the electrosensory system and pharmacological blockade of descending feedback projections. For weak stimuli, linear response analysis of the spiking network shows that the reduction of long timescale correlation for spatially broad stimuli is similar to correlation cancellation mechanisms previously suggested to be operative in mammalian cortex. The mechanism for correlation shaping supports population-level filtering of irrelevant distractor stimuli, thereby enhancing the population response to relevant prey and conspecific communication inputs. © 2012 Litwin-Kumar et al.
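    The timescale-dependence described above can be illustrated with a toy calculation (not the authors' analysis; all spike times and parameters below are invented): bin two spike trains at different bin widths and take the Pearson correlation of the resulting counts. With a common drive that is jittered by tens of milliseconds, correlation is weak at ~10 ms bins and strong at ~100 ms bins.

```python
# Toy illustration: pairwise spike-count correlation at two timescales.
import numpy as np

def count_correlation(spikes_a, spikes_b, t_max, bin_width):
    """Pearson correlation of spike counts binned at `bin_width` (seconds)."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    ca, _ = np.histogram(spikes_a, edges)
    cb, _ = np.histogram(spikes_b, edges)
    return np.corrcoef(ca, cb)[0, 1]

rng = np.random.default_rng(0)
t_max = 100.0                                   # seconds of (fake) recording
base = rng.uniform(0, t_max, 2000)              # independent background events
shared = rng.uniform(0, t_max, 500)             # common drive to both neurons
# Each train sees the shared events with independent ~50 ms jitter.
train_a = np.sort(np.concatenate([base[:1000], shared + rng.normal(0, 0.05, 500)]))
train_b = np.sort(np.concatenate([base[1000:], shared + rng.normal(0, 0.05, 500)]))

rho_short = count_correlation(train_a, train_b, t_max, 0.01)   # ~10 ms bins
rho_long = count_correlation(train_a, train_b, t_max, 0.1)     # ~100 ms bins
```

Because the jitter exceeds the 10 ms bin width, shared events rarely land in the same short bin, so `rho_short` stays near zero while `rho_long` is substantial.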

    Incremental Mutual Information: A New Method for Characterizing the Strength and Dynamics of Connections in Neuronal Circuits

    Understanding the computations performed by neuronal circuits requires characterizing the strength and dynamics of the connections between individual neurons. This characterization is typically achieved by measuring the correlation in the activity of two neurons. We have developed a new measure for studying connectivity in neuronal circuits based on information theory, the incremental mutual information (IMI). By conditioning out the temporal dependencies in the responses of individual neurons before measuring the dependency between them, IMI improves on standard correlation-based measures in several important ways: 1) it has the potential to disambiguate statistical dependencies that reflect the connection between neurons from those caused by other sources (e.g., shared inputs or intrinsic cellular or network mechanisms), provided that the dependencies have appropriate timescales; 2) for the study of early sensory systems, it does not require responses to repeated trials of identical stimulation; and 3) it does not assume that the connection between neurons is linear. We describe the theory and implementation of IMI in detail and demonstrate its utility on experimental recordings from the primate visual system
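    A stripped-down sketch of the idea (not the published IMI estimator, which conditions on a longer response history): estimate the mutual information between one neuron's response and another's delayed response, conditioned on the second neuron's own immediately preceding bin. The one-step conditioning and the toy data below are simplifying assumptions made for illustration.

```python
# Sketch of an "incremental" (conditioned) mutual information on binary responses.
from collections import Counter
from math import log2
import random

def mutual_info(pairs):
    """Plug-in estimate of I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def conditional_mi(triples):
    """I(X;Y|Z) as a Z-weighted average of per-Z mutual informations."""
    n = len(triples)
    by_z = {}
    for x, y, z in triples:
        by_z.setdefault(z, []).append((x, y))
    return sum(len(p) / n * mutual_info(p) for p in by_z.values())

def imi(x, y, delay):
    """I(X_t ; Y_{t+delay} | Y_{t+delay-1}) -- one-step stand-in for IMI."""
    triples = [(x[t], y[t + delay], y[t + delay - 1])
               for t in range(1, len(x) - delay)]
    return conditional_mi(triples)

random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0, 0] + x[:-2]            # toy data: y is x delayed by two bins
imi_at_2 = imi(x, y, 2)        # large: the true coupling delay
imi_at_0 = imi(x, y, 0)        # near zero: no instantaneous dependence
```

On this toy data the measure peaks at the true transmission delay, which is the behavior the abstract ascribes to IMI.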

    Separating intrinsic interactions from extrinsic correlations in a network of sensory neurons

    Correlations in sensory neural networks have both extrinsic and intrinsic origins. Extrinsic or stimulus correlations arise from shared inputs to the network, and thus depend strongly on the stimulus ensemble. Intrinsic or noise correlations reflect biophysical mechanisms of interaction between neurons, which are expected to be robust to changes of the stimulus ensemble. Despite the importance of this distinction for understanding how sensory networks encode information collectively, no method exists to reliably separate intrinsic interactions from extrinsic correlations in neural activity data, limiting our ability to build predictive models of the network response. In this paper we introduce a general strategy to infer population models of interacting neurons that collectively encode stimulus information. The key to disentangling intrinsic from extrinsic correlations is to infer the couplings between neurons separately from the encoding model, and to combine the two using corrections calculated in a mean-field approximation. We demonstrate the effectiveness of this approach on retinal recordings. The same coupling network is inferred from responses to radically different stimulus ensembles, showing that these couplings indeed reflect stimulus-independent interactions between neurons. The inferred model accurately predicts the collective response of retinal ganglion cell populations as a function of the stimulus
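    The extrinsic/intrinsic distinction can be made concrete with the classic signal- versus noise-correlation split on toy trial data (a textbook decomposition, not the paper's mean-field inference method; all numbers below are invented): the trial-averaged responses carry the stimulus-driven part, and the residuals carry the shared variability.

```python
# Toy split of total correlation into stimulus ("signal") and noise parts.
import numpy as np

rng = np.random.default_rng(1)
trials, tbins = 50, 200
stim = np.sin(np.linspace(0, 4 * np.pi, tbins))      # shared stimulus drive
shared_noise = rng.normal(0, 0.5, (trials, tbins))   # intrinsic shared variability
ra = stim + shared_noise + rng.normal(0, 0.5, (trials, tbins))
rb = stim + shared_noise + rng.normal(0, 0.2, (trials, tbins))

# Signal correlation: correlation of the trial-averaged (PSTH-like) responses.
signal_corr = np.corrcoef(ra.mean(0), rb.mean(0))[0, 1]
# Noise correlation: correlation of the residuals around those averages.
resid_a, resid_b = ra - ra.mean(0), rb - rb.mean(0)
noise_corr = np.corrcoef(resid_a.ravel(), resid_b.ravel())[0, 1]
```

The paper's point is precisely that this simple split is not enough to recover couplings; the sketch only illustrates the two sources of correlation it sets out to disentangle.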

    Spike-Train Responses of a Pair of Hodgkin-Huxley Neurons with Time-Delayed Couplings

    Model calculations have been performed on the spike-train response of a pair of Hodgkin-Huxley (HH) neurons coupled by recurrent excitatory-excitatory couplings with time delay. The coupled, excitable HH neurons are assumed to receive two kinds of spike-train inputs: a transient input consisting of M impulses of finite duration (M: an integer), and a sequential input with a constant interspike interval (ISI). The distribution of the output ISI T_o shows a rich variety depending on the coupling strength and the time delay. A comparison is made between the dependence of the output ISI on the transient inputs and its dependence on the sequential inputs. Comment: 19 pages, 4 figures
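    The delayed-coupling setup can be sketched with a much simpler stand-in model (leaky integrate-and-fire units instead of Hodgkin-Huxley, constant drive instead of the paper's spike-train inputs, and invented parameter values), just to show how a transmission delay enters such a simulation:

```python
# Two mutually excitatory integrate-and-fire neurons with a coupling delay.
import numpy as np

def lif_pair(g, delay_ms, t_ms=500.0, dt=0.1, i_ext=1.6, tau_m=10.0):
    n = int(t_ms / dt)
    d = int(delay_ms / dt)
    v = np.zeros(2)                            # membrane potentials
    spikes = [[], []]
    syn = np.zeros((2, n + d + 1))             # buffer of delayed synaptic pulses
    for t in range(n):
        v = v + (-v + i_ext) * dt / tau_m + g * syn[:, t]
        for i in range(2):
            if v[i] >= 1.0:                    # threshold crossing -> spike
                v[i] = 0.0
                spikes[i].append(t * dt)
                syn[1 - i, t + d] += 1.0       # excite the partner after the delay
    return spikes

spikes = lif_pair(g=0.2, delay_ms=5.0)
isi = np.diff(spikes[0])                       # output interspike intervals
```

Sweeping `g` and `delay_ms` in such a sketch is the analogue of the paper's exploration of how the output ISI distribution depends on coupling strength and time delay.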

    How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation

    This paper addresses two questions in the context of neuronal network dynamics, using methods from dynamical systems theory and statistical physics: (i) how to characterize the statistical properties of the sequences of action potentials ("spike trains") produced by neuronal networks, and (ii) what are the effects of synaptic plasticity on these statistics? We introduce a framework in which spike trains are associated with a coding of membrane potential trajectories, and actually constitute a symbolic coding in important explicit examples (the so-called gIF models). On this basis, we use the thermodynamic formalism from ergodic theory to show how Gibbs distributions are natural probability measures to describe the statistics of spike trains, given the empirical averages of prescribed quantities. As a second result, we show that Gibbs distributions naturally arise when considering "slow" synaptic plasticity rules, where the characteristic time for synapse adaptation is much longer than the characteristic time for neuron dynamics. Comment: 39 pages, 3 figures
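    A minimal sketch of the Gibbs/maximum-entropy idea invoked above (far simpler than the paper's thermodynamic-formalism setting; the rates are invented): given empirical averages of prescribed observables, here single-neuron firing probabilities, the matching Gibbs distribution over binary spike words w is p(w) ∝ exp(Σᵢ hᵢwᵢ), and with only these first-order constraints the fields have a closed form.

```python
# Gibbs (maximum-entropy) distribution over binary spike words, matching
# prescribed single-neuron firing probabilities.
from itertools import product
from math import exp, log

def fit_gibbs(rates):
    # With first-order constraints only, the fields are h_i = logit(rate_i).
    h = [log(r / (1.0 - r)) for r in rates]
    words = list(product([0, 1], repeat=len(rates)))
    weights = [exp(sum(hi * wi for hi, wi in zip(h, w))) for w in words]
    z = sum(weights)                       # partition function
    return {w: wt / z for w, wt in zip(words, weights)}

p = fit_gibbs([0.2, 0.5, 0.1])
# The Gibbs model reproduces the prescribed empirical averages exactly:
mean_0 = sum(prob for w, prob in p.items() if w[0] == 1)
```

Adding pairwise observables to the constraint set would introduce coupling terms in the exponent; the closed form for the fields is special to the first-order case.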

    The Mechanisms And Roles Of Feedback Loops For Visual Processing

    Signal flow in the brain is not unidirectional; feedback represents a key element in neural signal processing. To address the question of how neural feedback loops work in terms of synapses, microcircuitry, and systems dynamics, we developed a chick midbrain slice preparation to study and characterize one important feedback loop within the avian visual system: the isthmotectal feedback loop. The isthmotectal feedback loop consists of the optic tectum (OT) and three isthmic nuclei: the Imc, Ipc, and SLu. Tectal layer 10 neurons project to the ipsilateral Imc, Ipc, and SLu in a topographic way. In turn, the Ipc and SLu send topographic (local) cholinergic terminals back to the OT, whereas the Imc sends non-topographic (global) GABAergic projections to the OT, as well as to the Ipc and the SLu. We first study the cellular properties of Ipc neurons and find that almost all Ipc cells exhibit spontaneous activity characterized by a barrage of EPSPs and occasional spikes. Further experiments reveal the involvement of GABA in mediating the spontaneous synaptic inputs to the Ipc neurons. Next, we investigate the mechanisms of the oscillatory bursting in the Ipc that is observed in vivo, by building a model network based on the in vitro experimental results. Our simulations show that strong feedforward excitation and spike-rate adaptation can generate oscillatory bursting in Ipc neurons in response to a constant input. We then consider the effect of the distributed synaptic delays measured within the isthmotectal feedback loop, and show that distributed delays can stabilize the system, leading to an increased range of parameters for which the system converges to a stable fixed point.
Next, we explore the functional features of the GABAergic projection from the Imc to the Ipc and find that the Imc regulates the activity of Ipc neurons: stimulating the Imc can evoke action potentials in Ipc neurons, but it can also suppress firing generated in Ipc neurons by somatic current injection. The mechanism of this regulatory action is further studied with a two-compartment neuron model. Last, we lay out several open questions in this area that may warrant further investigation
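The spike-rate-adaptation ingredient of the bursting mechanism can be sketched with a minimal adaptive integrate-and-fire unit (invented parameters, not the thesis model): under constant input, each spike increments a slow adaptation current, so an initial rapid volley of spikes is followed by much slower firing.

```python
# Adaptive integrate-and-fire neuron: constant drive, spike-triggered adaptation.
import numpy as np

def adaptive_lif(i_ext=2.0, tau_m=10.0, tau_a=200.0, b=0.2, t_ms=400.0, dt=0.1):
    v, a, spikes = 0.0, 0.0, []
    for step in range(int(t_ms / dt)):
        v += (-v + i_ext - a) * dt / tau_m     # membrane, opposed by adaptation a
        a += -a * dt / tau_a                   # adaptation decays slowly
        if v >= 1.0:
            v = 0.0
            a += b                             # spike-triggered adaptation increment
            spikes.append(step * dt)
    return spikes

isis = np.diff(adaptive_lif())                 # successive interspike intervals
```

Here the early intervals are short and the late ones long: the adaptation variable accumulates until it nearly cancels the drive, the pattern that, combined with strong feedforward excitation in the network model, underlies the oscillatory bursting described above.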

    The Interplay of Architecture and Correlated Variability in Neuronal Networks

    This much is certain: neurons are coupled, and they exhibit covariations in their output. The extent of each, however, has no single answer. Moreover, the strength of neuronal correlations in particular has been a subject of hot debate within the neuroscience community over the past decade, as advancing recording techniques have made available many new, sometimes seemingly conflicting, datasets. The impact of connectivity and the resulting correlations on the ability of animals to perform necessary tasks is even less well understood. To answer the relevant questions in these areas, novel approaches must be developed. This work focuses on three somewhat distinct, but inseparably coupled, crucial avenues of research within the broader field of computational neuroscience. First, there is a need for tools that can be applied, both by experimentalists and theorists, to understand how networks transform their inputs. In turn, these tools will allow neuroscientists to tease apart the structure that underlies network activity. The Generalized Thinning and Shift framework, presented in Chapter 4, addresses this need. Next, taking for granted a general understanding of network architecture as well as some grasp of the behavior of its individual units, we must be able to reverse the activity-to-structure relationship and understand instead how network structure determines dynamics. We achieve this in Chapters 5 through 7, where we present an application of linear response theory yielding an explicit approximation of correlations in integrate-and-fire neuronal networks. This approximation reveals the explicit relationship between correlations, structure, and marginal dynamics. Finally, we must strive to understand the functional impact of network dynamics and architecture on the tasks that a neural network performs. This need motivates our analysis of a biophysically detailed model of the blow fly visual system in Chapter 8.
Our hope is that the work presented here represents significant advances in multiple directions within the field of computational neuroscience.
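In the spirit of the linear-response approximation of correlations mentioned above, a commonly used zero-frequency form from linearly interacting point-process (Hawkes) theory expresses long-window spike-count covariances through the interaction matrix; the network below is invented for illustration and is not the thesis model.

```python
# Linear-response-style covariance approximation: C ≈ (I - W)^{-1} D (I - W)^{-T},
# where W[i, j] is the effective synaptic gain from neuron j to neuron i and
# D is a diagonal matrix of baseline rates.
import numpy as np

rates = np.array([5.0, 5.0, 5.0])              # baseline rates (assumed values)
W = np.array([[0.0, 0.2, 0.0],
              [0.2, 0.0, 0.0],
              [0.0, 0.3, 0.0]])                # weak couplings: spectral radius < 1
L = np.linalg.inv(np.eye(3) - W)               # propagator summing all paths
C = L @ np.diag(rates) @ L.T                   # approximate count covariances
corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))
```

The propagator expansion (I - W)⁻¹ = I + W + W² + … makes explicit how correlations accumulate over direct connections, shared inputs, and longer paths through the graph, which is the structure-to-correlation relationship the abstract highlights.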