    Regulation of Irregular Neuronal Firing by Autaptic Transmission

    The importance of self-feedback autaptic transmission in modulating spike-time irregularity is still poorly understood. By using a biophysical model that incorporates autaptic coupling, we here show that self-innervation of neurons participates in the modulation of irregular neuronal firing, primarily by regulating the occurrence frequency of burst firing. In particular, we find that both excitatory and electrical autapses increase the occurrence of burst firing, thus reducing neuronal firing regularity. In contrast, inhibitory autapses suppress burst firing and therefore tend to improve the regularity of neuronal firing. Importantly, we show that these findings are independent of the firing properties of individual neurons, and as such can be observed for neurons operating in different modes. Our results provide an insightful mechanistic understanding of how different types of autapses shape irregular firing at the single-neuron level, and they highlight the functional importance of autaptic self-innervation in taming and modulating neurodynamics.
    Comment: 27 pages, 8 figures
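
    As a rough illustration of the mechanism (a minimal sketch, not the paper's biophysical model; the leaky integrate-and-fire simplification and all parameters are assumptions), the snippet below adds a delayed autaptic kick to a noisy neuron and quantifies irregularity by the coefficient of variation (CV) of inter-spike intervals; the sign of g_aut selects an excitatory or inhibitory self-connection.

```python
# Minimal sketch: leaky integrate-and-fire neuron with a delayed autaptic
# kick; g_aut > 0 models an excitatory autapse, g_aut < 0 an inhibitory one.
# Illustrative parameters, not the paper's model.
import numpy as np

def isi_cv(g_aut, delay_ms=5.0, T_ms=20000.0, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, d = int(T_ms / dt), int(delay_ms / dt)
    tau, mu, sigma, v_th = 20.0, 1.05, 0.3, 1.0
    v, spiked, spikes = 0.0, np.zeros(n, dtype=bool), []
    for t in range(n):
        aut = g_aut if t >= d and spiked[t - d] else 0.0  # delayed self-feedback
        v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * rng.standard_normal() + aut
        if v >= v_th:                                     # threshold crossing
            v, spiked[t] = 0.0, True
            spikes.append(t * dt)
    isi = np.diff(spikes)
    return isi.std() / isi.mean()   # CV: ~0 regular, ~1 Poisson-like, >1 bursty

print("CV, excitatory autapse:", isi_cv(+0.4))
print("CV, inhibitory autapse:", isi_cv(-0.4))
print("CV, no autapse        :", isi_cv(0.0))
```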

    Nearly extensive sequential memory lifetime achieved by coupled nonlinear neurons

    Many cognitive processes rely on the ability of the brain to hold sequences of events in short-term memory. Recent studies have revealed that such memory can be read out from the transient dynamics of a network of neurons. However, the memory performance of such a network in buffering past information has only been rigorously estimated in networks of linear neurons. When signal gain is kept low, so that neurons operate primarily in the linear part of their response nonlinearity, the memory lifetime is bounded by the square root of the network size. In this work, I demonstrate that it is possible to achieve a memory lifetime almost proportional to the network size, "an extensive memory lifetime", when the nonlinearity of neurons is appropriately utilized. The analysis of neural activity revealed that nonlinear dynamics prevented the accumulation of noise by partially removing noise in each time step. With this error-correcting mechanism, I demonstrate that a memory lifetime of order $N/\log N$ can be achieved.
    Comment: 21 pages, 5 figures; the manuscript has been accepted for publication in Neural Computation
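
    The error-correcting intuition can be seen in a toy delay line (purely an illustrative sketch under assumed parameters, not the paper's network): a saturating nonlinearity re-binarizes the state at every step, so per-step noise is wiped out instead of accumulating, and a symbol survives propagation through all N stages.

```python
# Toy delay line of N units passing a binary signal forward each step.
# Linear buffer: per-step noise accumulates (std grows like sqrt(N)).
# Nonlinear buffer: sign() cleans the state each step, so noise cannot build up.
import numpy as np

rng = np.random.default_rng(1)
N, T, sigma = 200, 400, 0.2
signal = rng.choice([-1.0, 1.0], size=T)

def run_chain(nonlinear):
    x = np.zeros(N)
    recalled = []
    for t in range(T):
        x = np.roll(x, 1)                          # shift the buffer one stage
        x[0] = signal[t]                           # feed in the current symbol
        x = x + sigma * rng.standard_normal(N)     # fresh noise on every unit
        if nonlinear:
            x = np.sign(x)                         # saturating cleanup step
        if t >= N:                                 # oldest slot holds signal[t - N + 1]
            recalled.append(np.sign(x[-1]) == signal[t - N + 1])
    return float(np.mean(recalled))

print("recall after ~N steps, linear buffer   :", run_chain(False))
print("recall after ~N steps, nonlinear buffer:", run_chain(True))
```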

    The Spatial Structure of Stimuli Shapes the Timescale of Correlations in Population Spiking Activity

    Throughout the central nervous system, the timescale over which pairs of neural spike trains are correlated is shaped by stimulus structure and behavioral context. Such shaping is thought to underlie important changes in the neural code, but the neural circuitry responsible is largely unknown. In this study, we investigate a stimulus-induced shaping of pairwise spike train correlations in the electrosensory system of weakly electric fish. Simultaneous single unit recordings of principal electrosensory cells show that an increase in the spatial extent of stimuli increases correlations at short (~10 ms) timescales while simultaneously reducing correlations at long (~100 ms) timescales. A spiking network model of the first two stages of electrosensory processing replicates this correlation shaping, under the assumptions that spatially broad stimuli both saturate feedforward afferent input and recruit an open-loop inhibitory feedback pathway. Our model predictions are experimentally verified using both the natural heterogeneity of the electrosensory system and pharmacological blockade of descending feedback projections. For weak stimuli, linear response analysis of the spiking network shows that the reduction of long timescale correlation for spatially broad stimuli is similar to correlation cancellation mechanisms previously suggested to be operative in mammalian cortex. The mechanism for correlation shaping supports population-level filtering of irrelevant distractor stimuli, thereby enhancing the population response to relevant prey and conspecific communication inputs. © 2012 Litwin-Kumar et al.
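
    A standard way to quantify this timescale dependence is the Pearson correlation of spike counts binned at different window lengths. The sketch below uses synthetic spike trains; only the ~10 ms and ~100 ms window sizes mirror the study, everything else is assumed.

```python
# Spike-count correlation of two trains at a given counting window.
# With a shared slow rate modulation, correlations are larger at the
# long (100 ms) window than at the short (10 ms) window.
import numpy as np

def count_correlation(times_a, times_b, t_max, window_ms):
    edges = np.arange(0.0, t_max + window_ms, window_ms)
    na, _ = np.histogram(times_a, bins=edges)
    nb, _ = np.histogram(times_b, bins=edges)
    return np.corrcoef(na, nb)[0, 1]

rng = np.random.default_rng(2)
t_max = 100_000.0                                # ms of simulated recording
# shared 200 ms rate modulation driving both neurons
drive = np.repeat(rng.uniform(0.5, 1.5, size=int(t_max / 200)), 200)
rate = 0.02 * drive                              # ~10-30 Hz, in spikes per ms bin
spikes_a = np.nonzero(rng.random(int(t_max)) < rate)[0].astype(float)
spikes_b = np.nonzero(rng.random(int(t_max)) < rate)[0].astype(float)

print("rho, 10 ms window :", count_correlation(spikes_a, spikes_b, t_max, 10.0))
print("rho, 100 ms window:", count_correlation(spikes_a, spikes_b, t_max, 100.0))
```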

    Towards a learning-theoretic analysis of spike-timing dependent plasticity

    This paper suggests a learning-theoretic perspective on how synaptic plasticity benefits global brain functioning. We introduce a model, the selectron, that (i) arises as the fast time constant limit of leaky integrate-and-fire neurons equipped with spike-timing dependent plasticity (STDP) and (ii) is amenable to theoretical analysis. We show that the selectron encodes reward estimates into spikes and that an error bound on spikes is controlled by a spiking margin and the sum of synaptic weights. Moreover, the efficacy of spikes (their usefulness to other reward maximizing selectrons) also depends on total synaptic strength. Finally, based on our analysis, we propose a regularized version of STDP, and show that the regularization improves the robustness of neuronal learning when faced with multiple stimuli.
    Comment: To appear in Adv. Neural Inf. Proc. Systems
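
    A back-of-the-envelope version of such a regularizer (the penalty form and constants below are hypothetical, not the paper's exact rule) adds a shrinkage term proportional to the total synaptic weight to the usual exponential STDP pair update, keeping the sum of weights that enters the error bound under control.

```python
# Sketch of a regularized pairwise STDP rule: the standard exponential
# potentiation/depression window plus a penalty proportional to the total
# synaptic weight. Illustrative constants, hypothetical penalty form.
import numpy as np

def stdp_regularized(w, w_total, dt_ms, a_plus=0.010, a_minus=0.012,
                     tau_ms=20.0, lam=1e-4):
    """Update one synapse; dt_ms = t_post - t_pre (>0 means causal pairing)."""
    if dt_ms > 0:
        dw = a_plus * np.exp(-dt_ms / tau_ms)    # pre-before-post: potentiate
    else:
        dw = -a_minus * np.exp(dt_ms / tau_ms)   # post-before-pre: depress
    dw -= lam * w_total                          # regularizer: shrink with total weight
    return float(np.clip(w + dw, 0.0, 1.0))

w = np.full(100, 0.5)                            # 100 synapses at mid-strength
print(stdp_regularized(w[0], w.sum(), dt_ms=+8.0))   # causal pair, damped potentiation
print(stdp_regularized(w[0], w.sum(), dt_ms=-8.0))   # anti-causal pair, depression
```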

    Balanced neural architecture and the idling brain

    A signature feature of cortical spike trains is their trial-to-trial variability. This variability is large in the spontaneous state and is reduced when cortex is driven by a stimulus or task. Models of recurrent cortical networks with unstructured, yet balanced, excitation and inhibition generate variability consistent with evoked conditions. However, these models produce spike trains which lack the long timescale fluctuations and large variability exhibited during spontaneous cortical dynamics. We propose that global network architectures which support a large number of stable states (attractor networks) allow balanced networks to capture key features of neural variability in both spontaneous and evoked conditions. We illustrate this using balanced spiking networks with clustered assembly, feedforward chain, and ring structures. By assuming that global network structure is related to stimulus preference, we show that signal correlations are related to the magnitude of correlations in the spontaneous state. Finally, we contrast the impact of stimulation on the trial-to-trial variability in attractor networks with that of strongly coupled spiking networks with chaotic firing rate instabilities, recently investigated by Ostojic (2014). We find that only attractor networks replicate an experimentally observed stimulus-induced quenching of trial-to-trial variability. In total, the comparison of the trial-variable dynamics of single neurons or neuron pairs during spontaneous and evoked activity can be a window into the global structure of balanced cortical networks. © 2014 Doiron and Litwin-Kumar
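
    The quenching signature itself is easy to state operationally: the across-trial Fano factor of spike counts drops from well above 1 in the spontaneous state toward 1 when a stimulus pins the network in one attractor. In the toy numbers below, the gamma-modulated spontaneous counts are an assumption standing in for slow attractor switching.

```python
# Fano factor (across-trial spike-count variance / mean), the standard
# trial-to-trial variability measure, compared between spontaneous and
# evoked epochs. Synthetic counts; not data from the paper.
import numpy as np

def fano_factor(counts):
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(3)
# spontaneous: rate fluctuates trial-to-trial (over-dispersed counts);
# evoked: rate pinned by the stimulus (near-Poisson counts)
spont = rng.poisson(rng.gamma(shape=2.0, scale=5.0, size=500))
evoked = rng.poisson(10.0, size=500)

print("Fano, spontaneous:", fano_factor(spont))   # >> 1: super-Poisson
print("Fano, evoked     :", fano_factor(evoked))  # ~ 1: quenched variability
```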

    Inferring neural circuit structure from datasets of heterogeneous tuning curves.

    Tuning curves characterizing the response selectivities of biological neurons can exhibit large degrees of irregularity and diversity across neurons. Theoretical network models that feature heterogeneous cell populations or partially random connectivity also give rise to diverse tuning curves. Empirical tuning curve distributions can thus be utilized to make model-based inferences about the statistics of single-cell parameters and network connectivity. However, a general framework for such an inference or fitting procedure is lacking. We address this problem by proposing to view mechanistic network models as implicit generative models whose parameters can be optimized to fit the distribution of experimentally measured tuning curves. A major obstacle for fitting such models is that their likelihood function is not explicitly available or is highly intractable. Recent advances in machine learning provide ways for fitting implicit generative models without the need to evaluate the likelihood and its gradient. Generative Adversarial Networks (GANs) provide one such framework, which has been successful in traditional machine learning tasks. We apply this approach in two separate experiments, showing how GANs can be used to fit commonly used mechanistic circuit models in theoretical neuroscience to datasets of tuning curves. This fitting procedure avoids the computationally expensive step of inferring latent variables, such as the biophysical parameters of, or synaptic connections between, particular recorded cells. Instead, it directly learns generalizable model parameters characterizing the network's statistical structure, such as the statistics of the strength and spatial range of connections between different cell types. Another strength of this approach is that it fits the joint high-dimensional distribution of tuning curves, instead of matching a few summary statistics picked a priori by the user, resulting in a more accurate inference of circuit properties. More generally, this framework opens the door to direct model-based inference of circuit structure from data beyond single-cell tuning curves, such as simultaneous population recordings.
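
    In outline, the fitting loop is an ordinary GAN loop in which the generator is the mechanistic circuit model itself. The PyTorch sketch below is only a schematic: the von Mises tuning-curve "circuit", the two trainable circuit statistics, the discriminator architecture, and all names are assumptions, not the paper's models. Per-neuron latents are sampled fresh on every pass, never inferred.

```python
# GAN-style fit of an implicit mechanistic model: the generator draws
# per-neuron latents and maps them through a toy tuning-curve model whose
# population statistics (mean/log-std of gain and width) are the trainable
# "circuit parameters"; a discriminator separates model curves from data.
import torch
import torch.nn as nn

n_ori = 16                                       # points on each tuning curve
thetas = torch.linspace(0, 2 * torch.pi, n_ori)  # preferred-orientation grid

class CircuitGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(2))        # circuit statistics to learn
        self.log_sigma = nn.Parameter(torch.zeros(2))
    def forward(self, n):
        z = torch.randn(n, 2)                         # per-neuron latents (sampled, not inferred)
        params = self.mu + self.log_sigma.exp() * z
        gain = nn.functional.softplus(params[:, :1])
        kappa = nn.functional.softplus(params[:, 1:])
        pref = torch.rand(n, 1) * 2 * torch.pi
        return gain * torch.exp(kappa * (torch.cos(thetas - pref) - 1))

gen = CircuitGenerator()
disc = nn.Sequential(nn.Linear(n_ori, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

target = CircuitGenerator()                      # stand-in for measured tuning curves
with torch.no_grad():
    target.mu.copy_(torch.tensor([1.0, 0.5]))    # "true" circuit statistics
real = target(512).detach()

for step in range(2000):
    fake = gen(128)
    # discriminator step: data -> 1, model samples -> 0
    d_loss = bce(disc(real), torch.ones(len(real), 1)) + \
             bce(disc(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator step: push model curves toward the data distribution
    g_loss = bce(disc(fake), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("learned circuit statistics:", gen.mu.data)    # should approach [1.0, 0.5]
```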