
    A numerical model for Hodgkin-Huxley neural stimulus reconstruction

    Information about neural activity is encoded in the neural response, and the underlying stimulus that triggers the activity is usually unknown. This paper presents a numerical method for reconstructing stimuli from Hodgkin-Huxley neural responses while retrieving the neural dynamics. The stimulus is reconstructed by first retrieving the maximal conductances of the ion channels and then solving the Hodgkin-Huxley equations for the stimulus. The results show that the reconstructed stimulus is a good approximation of the original stimulus, while the retrieved neural dynamics, which represent the voltage-dependent changes in the ion channels, help to explain the changes in neural biochemistry. Because the strong non-linearity of neural dynamics renders analytical inversion of a neuron an arduous task, a numerical approach provides a local solution to the problem of stimulus reconstruction and neural dynamics retrieval.
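    As a rough illustration of the inversion idea, the Python sketch below simulates a standard Hodgkin-Huxley neuron under a step stimulus and then recovers that stimulus by inverting the current-balance equation once the conductances and gating dynamics are known. The parameter values and step input are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

# Standard HH parameters (assumed; the paper retrieves the maximal
# conductances from the response, here they are simply taken as given).
C = 1.0                                   # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3         # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4       # reversal potentials, mV

def rates(V):
    """Voltage-dependent opening/closing rates for the m, h, n gates."""
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def I_ion(V, m, h, n):
    """Total ionic current for given voltage and gating variables."""
    return g_Na * m**3 * h * (V - E_Na) + g_K * n**4 * (V - E_K) + g_L * (V - E_L)

# Forward (Euler) simulation under a step stimulus we will reconstruct.
dt, T = 0.01, 20.0                        # ms
t = np.arange(0.0, T, dt)
I_true = 10.0 * (t > 5.0)                 # hypothetical step stimulus, uA/cm^2
V = np.empty_like(t); M = np.empty_like(t); H = np.empty_like(t); N = np.empty_like(t)
V[0], M[0], H[0], N[0] = -65.0, 0.05, 0.6, 0.32
for k in range(len(t) - 1):
    a_m, b_m, a_h, b_h, a_n, b_n = rates(V[k])
    V[k+1] = V[k] + dt * (-I_ion(V[k], M[k], H[k], N[k]) + I_true[k]) / C
    M[k+1] = M[k] + dt * (a_m * (1.0 - M[k]) - b_m * M[k])
    H[k+1] = H[k] + dt * (a_h * (1.0 - H[k]) - b_h * H[k])
    N[k+1] = N[k] + dt * (a_n * (1.0 - N[k]) - b_n * N[k])

# Reconstruction: with the gating dynamics retrieved, invert the
# current-balance equation I(t) = C dV/dt + I_ion(V, m, h, n).
dVdt = np.diff(V) / dt
I_rec = C * dVdt + I_ion(V[:-1], M[:-1], H[:-1], N[:-1])
```

    With a forward-difference estimate of dV/dt the inversion recovers the step stimulus essentially exactly; with real, noisy recordings the differentiation and conductance-retrieval steps would dominate the error.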

    Mechanisms of Zero-Lag Synchronization in Cortical Motifs

    Zero-lag synchronization between distant cortical areas has been observed in a diversity of experimental data sets and between many different regions of the brain. Several computational mechanisms have been proposed to account for such isochronous synchronization in the presence of long conduction delays. Of these, the phenomenon of "dynamical relaying" - a mechanism that relies on a specific network motif - has proven to be the most robust with respect to parameter mismatch and system noise. Surprisingly, despite a contrary belief in the community, the common driving motif is an unreliable means of establishing zero-lag synchrony. Although dynamical relaying has been validated in empirical and computational studies, the deeper dynamical mechanisms remain unclear and a comparison with dynamics on other motifs is lacking. By systematically comparing synchronization on a variety of small motifs, we establish that the presence of a single reciprocally connected pair - a "resonance pair" - plays a crucial role in disambiguating those motifs that foster zero-lag synchrony in the presence of conduction delays (such as dynamical relaying) from those that do not (such as the common driving triad). Remarkably, minor structural changes to the common driving motif that incorporate a reciprocal pair recover robust zero-lag synchrony. The findings are observed in computational models of spiking neurons, populations of spiking neurons, and neural mass models, and arise whether the oscillatory systems are periodic, chaotic, noise-free, or driven by stochastic inputs. The influence of the resonance pair is also robust to parameter mismatch and asymmetrical time delays amongst the elements of the motif. We call this manner of facilitating zero-lag synchrony resonance-induced synchronization, outline the conditions for its occurrence, and propose that it may be a general mechanism to promote zero-lag synchrony in the brain.
    Comment: 41 pages, 12 figures, and 11 supplementary figures
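    A minimal sketch of zero-lag synchrony via dynamical relaying, using delay-coupled phase oscillators rather than the spiking and neural-mass models of the paper: a hub reciprocally coupled to two outer oscillators, all connections carrying the same conduction delay. The coupling, delay, and frequency values are illustrative assumptions.

```python
import numpy as np

# Dynamical-relaying motif 2 <-> 1 <-> 3: hub node 0 reciprocally coupled
# to outer nodes 1 and 2, with conduction delay tau on every connection.
omega, K, tau = 1.0, 1.0, 0.5       # natural frequency, coupling, delay
dt, steps = 0.01, 6000
D = int(round(tau / dt))             # delay expressed in time steps
neighbors = {0: [1, 2], 1: [0], 2: [0]}

rng = np.random.default_rng(1)
theta0 = rng.uniform(0.0, 2.0 * np.pi, 3)
hist = np.tile(theta0, (D + steps + 1, 1))    # phase history buffer
for k in range(D, D + steps):
    for i in range(3):
        # Each node sees its neighbours' phases delayed by tau.
        drive = sum(np.sin(hist[k - D, j] - hist[k, i]) for j in neighbors[i])
        hist[k + 1, i] = hist[k, i] + dt * (omega + K * drive / len(neighbors[i]))

# Outer nodes 1 and 2 interact only through the hub, yet settle into
# zero-lag synchrony: the wrapped phase difference decays toward zero.
lag = np.angle(np.exp(1j * (hist[-1, 1] - hist[-1, 2])))
```

    The zero-lag manifold is attracting here because the two outer nodes receive identical (delayed) input from the hub, so their phase difference contracts; the antiphase configuration exists but is unstable.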

    Channel noise induced stochastic facilitation in an auditory brainstem neuron model

    Neuronal membrane potentials fluctuate stochastically due to conductance changes caused by random transitions between the open and closed states of ion channels. Although it has previously been shown that channel noise can nontrivially affect neuronal dynamics, it is unknown whether ion-channel noise is strong enough to act as a noise source for hypothesised noise-enhanced information processing in real neuronal systems, i.e. 'stochastic facilitation.' Here, we demonstrate that biophysical models of channel noise can give rise to two kinds of recently discovered stochastic facilitation effects in a Hodgkin-Huxley-like model of auditory brainstem neurons. The first, known as slope-based stochastic resonance (SBSR), enables phasic neurons to emit action potentials that can encode the slope of inputs that vary slowly relative to key time-constants in the model. The second, known as inverse stochastic resonance (ISR), occurs in tonically firing neurons when small levels of noise inhibit tonic firing and replace it with burst-like dynamics. Consistent with previous work, we conclude that channel noise can provide significant variability in firing dynamics, even for large numbers of channels. Moreover, our results show that possible associated computational benefits may occur due to channel noise in neurons of the auditory brainstem. This holds whether the firing dynamics in the model are phasic (SBSR can occur due to channel noise) or tonic (ISR can occur due to channel noise).
    Comment: Published by Physical Review E, November 2013 (this version 17 pages total - 10 text, 1 refs, 6 figures/tables); associated Matlab code is available online in the ModelDB repository at http://senselab.med.yale.edu/ModelDB/ShowModel.asp?model=15148
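    The essence of channel noise can be sketched with a two-state channel population under voltage clamp: each closed channel opens with probability alpha*dt per step and each open channel closes with probability beta*dt, so the open fraction fluctuates with a standard deviation that shrinks roughly as 1/sqrt(N). The rates below are the HH potassium n-gate rates at an assumed clamp voltage; the paper's biophysical model is far richer.

```python
import numpy as np

# HH potassium n-gate rates at an assumed clamp voltage of -50 mV.
V = -50.0
alpha = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
beta = 0.125 * np.exp(-(V + 65.0) / 80.0)
p_inf = alpha / (alpha + beta)            # equilibrium open fraction

def open_fraction_std(n_channels, steps=100_000, dt=0.01, seed=0):
    """Std of the open fraction for a finite two-state channel population."""
    rng = np.random.default_rng(seed)
    n_open = int(round(p_inf * n_channels))
    frac = np.empty(steps)
    for k in range(steps):
        opened = rng.binomial(n_channels - n_open, alpha * dt)
        closed = rng.binomial(n_open, beta * dt)
        n_open += opened - closed
        frac[k] = n_open / n_channels
    return frac[steps // 2:].std()        # discard the transient half

# Fewer channels -> larger conductance fluctuations (approx. 1/sqrt(N)).
std_small = open_fraction_std(100)
std_large = open_fraction_std(10_000)
```

    Since the stationary open count is approximately binomial, the open-fraction std is near sqrt(p_inf*(1-p_inf)/N), so shrinking the population from 10,000 to 100 channels boosts the noise roughly tenfold.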

    Optimal Subharmonic Entrainment

    For many natural and engineered systems, a central function or design goal is the synchronization of one or more rhythmic or oscillating processes to an external forcing signal, which may be periodic on a different time-scale from the actuated process. Such subharmonic synchrony, which is dynamically established when N control cycles occur for every M cycles of a forced oscillator, is referred to as N:M entrainment. In many applications, entrainment must be established in an optimal manner, for example by minimizing control energy or the transient time to phase locking. We present a theory for deriving inputs that establish subharmonic N:M entrainment of general nonlinear oscillators, or of collections of rhythmic dynamical units, while optimizing such objectives. Ordinary differential equation models of oscillating systems are reduced to phase variable representations, each of which consists of a natural frequency and phase response curve. Formal averaging and the calculus of variations are then applied to such reduced models in order to derive optimal subharmonic entrainment waveforms. The optimal entrainment of a canonical model for a spiking neuron is used to illustrate this approach, which is readily extended to arbitrary oscillating systems.
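    The phase-reduction setting can be illustrated with a toy case: an oscillator dphi/dt = omega + Z(phi) u(t) with an assumed second-harmonic phase response curve, driven sinusoidally at roughly twice its natural frequency, locks in 2:1 entrainment (two control cycles per oscillator cycle). The PRC, amplitude, and detuning below are illustrative assumptions; the paper derives the optimal waveforms rather than assuming them.

```python
import numpy as np

# Phase model dphi/dt = omega + Z(phi)*u(t) with Z(phi) = eps*sin(2*phi)
# (assumed PRC) and u(t) = sin(Omega*t).  Averaging predicts locking at
# Omega/2 whenever the detuning |omega - Omega/2| < eps/2.
omega, eps, Omega = 0.95, 0.3, 2.0
dt, steps = 0.001, 200_000
half = steps // 2

phi = 0.0
phi_mid = 0.0
for k in range(steps):
    t = k * dt
    phi += dt * (omega + eps * np.sin(2.0 * phi) * np.sin(Omega * t))
    if k == half - 1:
        phi_mid = phi                     # phase at the midpoint of the run

# Mean frequency over the second half: entrained to Omega/2 = 1.0,
# not the natural frequency omega = 0.95.
mean_freq = (phi - phi_mid) / (half * dt)
```

    The averaged slow-phase equation dpsi/dt = (omega - Omega/2) + (eps/2)*cos(2*psi) has a stable fixed point here, which is what pins the oscillator's mean frequency at the subharmonic Omega/2.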

    The impact of spike timing variability on the signal-encoding performance of neural spiking models

    It remains unclear whether the variability of neuronal spike trains in vivo arises from biological noise sources or represents highly precise encoding of temporally varying synaptic input signals. Determining the variability of spike timing can provide fundamental insights into the nature of strategies used in the brain to represent and transmit information in the form of discrete spike trains. In this study, we employ a signal estimation paradigm to determine how variability in spike timing affects encoding of random time-varying signals. We assess this for two types of spiking models: an integrate-and-fire model with random threshold and a more biophysically realistic stochastic ion channel model. Using the coding fraction and mutual information as information-theoretic measures, we quantify the efficacy of optimal linear decoding of random inputs from the model outputs and study the relationship between efficacy and variability in the output spike train. Our findings suggest that variability does not necessarily hinder signal decoding for the biophysically plausible encoders examined and that the functional role of spiking variability depends intimately on the nature of the encoder and the signal processing task; variability can either enhance or impede decoding performance.
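    The signal-estimation paradigm can be sketched end to end: encode a slow random signal with a leaky integrate-and-fire neuron whose threshold is redrawn after each spike, decode with an optimal linear (least-squares) kernel, and score the result with the coding fraction. All parameter values below are illustrative assumptions, not the models of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Slow random input signal: white noise smoothed by a Gaussian kernel.
dt, T = 0.001, 20.0                        # 1 ms steps, 20 s
n = int(T / dt)
white = rng.standard_normal(n)
kern = np.exp(-0.5 * (np.arange(-150, 151) / 50.0) ** 2)
s = np.convolve(white, kern / kern.sum(), mode="same")
s *= 0.1 / s.std()                         # fix signal std (arbitrary scale)

# Leaky integrate-and-fire encoder with a random threshold: after each
# spike the threshold is redrawn around 1.0 (illustrative noise level).
tau, drive = 0.010, 1.2
v, thresh = 0.0, 1.0
spikes = np.zeros(n)
for i in range(n):
    v += dt * (-v + drive + s[i]) / tau
    if v >= thresh:
        spikes[i] = 1.0
        v = 0.0
        thresh = 1.0 + 0.05 * rng.standard_normal()

# Optimal linear decoder: least-squares kernel over +/-100 ms of lags
# (acausal, as is standard for offline signal estimation).
lags = np.arange(-100, 101)
X = np.column_stack([np.roll(spikes, -L) for L in lags] + [np.ones(n)])
coef, *_ = np.linalg.lstsq(X, s, rcond=None)
s_hat = X @ coef

# Coding fraction: 1 - (rms decoding error) / (signal std).
gamma = 1.0 - np.sqrt(np.mean((s - s_hat) ** 2)) / s.std()
```

    Because the decoder is fit by least squares with an intercept, the coding fraction is guaranteed to lie in [0, 1] on the training data; sweeping the threshold noise amplitude is then a direct way to probe how spike-timing variability affects decoding.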

    A unified approach to linking experimental, statistical and computational analysis of spike train data

    A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that were consistent with observed spiking data (and included the model that generated the data), but have yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow, intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach – linking statistical, computational, and experimental neuroscience – provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data.
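    A toy version of the estimation machinery: a bootstrap particle filter tracking a slow latent "current" from a binary spike train, where the latent state follows an AR(1) process and sets the per-bin spike probability through a sigmoid. This is a simplified stand-in for the paper's biophysical state-space models, with made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate the toy state-space model: slow latent state x, Bernoulli spikes.
T, a, q = 1000, 0.98, 0.2
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + q * rng.standard_normal()
p = 1.0 / (1.0 + np.exp(-(x - 1.0)))      # spike probability per bin
y = (rng.random(T) < p).astype(float)     # observed spike train

# Bootstrap particle filter: propagate particles with the state dynamics,
# weight by the Bernoulli spike likelihood, resample every step.
P = 1000
particles = np.zeros(P)
x_hat = np.zeros(T)
for t in range(T):
    particles = a * particles + q * rng.standard_normal(P)
    pp = 1.0 / (1.0 + np.exp(-(particles - 1.0)))
    w = pp if y[t] > 0.5 else 1.0 - pp    # likelihood of spike / no spike
    w = w / w.sum()
    x_hat[t] = w @ particles              # posterior-mean estimate
    particles = particles[rng.choice(P, size=P, p=w)]  # multinomial resampling

corr = np.corrcoef(x, x_hat)[0, 1]        # filtered estimate tracks the latent
```

    The same skeleton scales to the paper's setting by replacing the AR(1) state with conductance dynamics and the sigmoid with a spiking observation model, at the cost of a higher-dimensional particle state.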

    Gain control network conditions in early sensory coding

    Gain control is essential for the proper function of any sensory system. However, the precise mechanisms for achieving effective gain control in the brain are unknown. Based on our understanding of the existence and strength of connections in the insect olfactory system, we analyze the conditions that lead to controlled gain in a randomly connected network of excitatory and inhibitory neurons. We consider two scenarios for the variation of input into the system. In the first case, the intensity of the sensory input controls the input currents to a fixed proportion of neurons of the excitatory and inhibitory populations. In the second case, increasing intensity of the sensory stimulus will both recruit an increasing number of neurons that receive input and change the input current that they receive. Using a mean field approximation for the network activity, we derive relationships between the parameters of the network that ensure that the overall level of activity of the excitatory population remains unchanged for increasing intensity of the external stimulation. We find that, first, the main parameters that regulate network gain are the probabilities of connections from the inhibitory population to the excitatory population and of the connections within the inhibitory population. Second, we show that strict gain control is not achievable in a random network in the second case, when the input recruits an increasing number of neurons. Finally, we confirm that the gain control conditions derived from the mean field approximation are valid in simulations of firing rate models and Hodgkin-Huxley conductance-based models.
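    The flavor of the mean-field condition can be shown with a minimal linear two-population rate model (weights are illustrative, not fitted to the insect system). Solving the steady state gives an excitatory gain of dr_E/dI = ((1 + J_II) - J_EI) / det, which vanishes exactly when J_EI = 1 + J_II: the critical parameters are the I-to-E and I-to-I couplings, echoing the abstract's finding about inhibitory connection probabilities.

```python
import numpy as np

# Linear mean-field rate model for one excitatory (E) and one inhibitory
# (I) population, with a constant background b driving E:
#   r_E = J_EE r_E - J_EI r_I + I + b
#   r_I = J_IE r_E - J_II r_I + I
J_EE, J_II, J_IE, b = 0.5, 0.5, 1.0, 1.0
J_EI = 1.0 + J_II                          # zero-gain condition on E

def steady_state(I):
    """Solve the linear fixed-point equations for (r_E, r_I)."""
    A = np.array([[1.0 - J_EE, J_EI],
                  [-J_IE, 1.0 + J_II]])
    return np.linalg.solve(A, np.array([I + b, I]))

# The excitatory rate stays fixed while the inhibitory rate absorbs
# the increase in stimulus intensity.
r_lo = steady_state(1.0)
r_hi = steady_state(5.0)
```

    This captures the first scenario (fixed proportion of driven neurons); in the recruitment scenario the effective weights themselves change with intensity, which is why strict gain control fails there.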

    INCF Lithuanian Workshop on Neuroscience and Information Technology

    The aim of this workshop was to give a current overview of neuroscience and informatics research in Lithuania, and to discuss the strategies for forming the Lithuanian Neuroinformatics Node and becoming a member of INCF. The workshop was organized by Dr. Aušra Saudargiene (Department of Informatics, Vytautas Magnus University, Kaunas, and Faculty of Natural Sciences, Vilnius University, Lithuania) and INCF.
The workshop was attended by 15 invited speakers, among them 4 guests and 11 Lithuanian neuroscientists, and over 20 participants. The workshop was organized into three main sessions: an overview of the INCF activities, including the Swedish and UK nodes of INCF; presentations on neuroscience research carried out in Lithuania; and a discussion about the strategies for forming an INCF national node and the benefits of having such a node in Lithuania (Appendix A: Program; Appendix B: Abstracts).