
    Numerical Solution of Differential Equations by the Parker-Sochacki Method

    A tutorial is presented which demonstrates the theory and usage of the Parker-Sochacki method of numerically solving systems of differential equations. Solutions are demonstrated for the case of projectile motion in air, and for the classical Newtonian N-body problem with mutual gravitational attraction. Comment: Added in July 2010: This tutorial has been posted since 1998 on a university web site, but has now been cited and praised in one or more refereed journals. I am therefore submitting it to the Cornell arXiv so that it may be read in response to its citations. See "Spiking neural network simulation: numerical integration with the Parker-Sochacki method," J. Comput. Neurosci., Robert D. Stewart & Wyeth Bair, and http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2717378
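
    A minimal sketch of the idea behind the method (not the tutorial's own code): the Parker-Sochacki/Picard approach builds the Maclaurin series of the solution order by order and evaluates the truncated series over each step. The example below assumes projectile motion with linear air drag (the drag coefficient k is an illustrative value), which keeps the right-hand side polynomial; the tutorial's projectile and N-body examples are more involved.

        # Sketch of a single Parker-Sochacki step for x' = vx, y' = vy,
        # vx' = -k*vx, vy' = -g - k*vy (projectile with linear drag).
        # Recurrence: u_{n+1} = (n-th series coefficient of the RHS) / (n + 1).

        def ps_step(x, y, vx, vy, h, k=0.1, g=9.81, order=12):
            X, Y, VX, VY = [x], [y], [vx], [vy]      # coefficient of t**n stored at index n
            for n in range(order):
                VX.append(-k * VX[n] / (n + 1))
                VY.append(((-g if n == 0 else 0.0) - k * VY[n]) / (n + 1))
                X.append(VX[n] / (n + 1))
                Y.append(VY[n] / (n + 1))

            def horner(c):                           # evaluate the truncated series at t = h
                acc = 0.0
                for a in reversed(c):
                    acc = acc * h + a
                return acc

            return horner(X), horner(Y), horner(VX), horner(VY)

        # usage: 100 steps of h = 0.01 s from a 45-degree launch at ~20 m/s
        state = (0.0, 0.0, 14.14, 14.14)
        for _ in range(100):
            state = ps_step(*state, h=0.01)
        print(state)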

    Phase sensitive excitability of a limit cycle

    The classical notion of excitability refers to an equilibrium state that shows nonlinear threshold-like behavior under the influence of perturbations. Here, we extend this concept by demonstrating how periodic orbits can exhibit a specific form of excitable behavior in which the nonlinear threshold-like response appears only after perturbations applied within a certain part of the periodic orbit, i.e., the excitability is phase sensitive. As a paradigmatic example of this concept we employ the classical FitzHugh-Nagumo system. The relaxation oscillations appearing in the oscillatory regime of this system turn out to exhibit a phase-sensitive nonlinear threshold-like response to perturbations, which can be explained by the nonlinear behavior in the vicinity of the canard trajectory. Triggering the phase-sensitive excitability of the relaxation oscillations by noise, we find a characteristic non-monotone dependence of the mean spiking rate of the relaxation oscillation on the noise level. We explain this non-monotone dependence as the result of an interplay between two competing effects of increasing noise: the growing efficiency of the excitation and the degradation of the nonlinear response.
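
    A minimal sketch of the kind of numerical experiment described above, with assumed (not the paper's) parameter values: the FitzHugh-Nagumo system is integrated in its oscillatory regime with additive noise on the fast variable, and the mean spiking rate is recorded as a function of the noise amplitude, using upward threshold crossings as the spike criterion.

        import numpy as np

        def mean_spike_rate(sigma, I=0.5, a=0.7, b=0.8, eps=0.08,
                            dt=0.01, T=500.0, v_thr=1.0, seed=0):
            """Euler-Maruyama integration of the noisy FitzHugh-Nagumo model."""
            rng = np.random.default_rng(seed)
            v, w = -1.0, -0.5
            spikes, above = 0, False
            for _ in range(int(T / dt)):
                dv = (v - v**3 / 3.0 - w + I) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                dw = eps * (v + a - b * w) * dt
                v, w = v + dv, w + dw
                if v > v_thr and not above:          # upward threshold crossing counts as a spike
                    spikes, above = spikes + 1, True
                elif v < v_thr:
                    above = False
            return spikes / T

        for sigma in (0.0, 0.05, 0.1, 0.2, 0.4):     # mean rate versus noise amplitude
            print(sigma, mean_spike_rate(sigma))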

    Competition-based model of pheromone component ratio detection in the moth

    For some moth species, especially those that are closely interrelated and sympatric, recognizing a specific pheromone component concentration ratio is essential for males to successfully locate conspecific females. We propose and determine the properties of a minimalist competition-based feed-forward neuronal model capable of detecting a certain ratio of pheromone components independently of overall concentration. This model represents an elementary recognition unit for the ratio of binary mixtures, which we propose is entirely contained in the macroglomerular complex (MGC) of the male moth. A set of such units, along with projection neurons (PNs), can provide the input to higher brain centres. We found that (1) accuracy is mainly achieved by maintaining a certain ratio of connection strengths between olfactory receptor neurons (ORNs) and local neurons (LNs), and much less by the properties of the interconnections between the competing LNs themselves; an exception to this rule is that it is beneficial if connections between generalist LNs (i.e. excited by either pheromone component) and specialist LNs (i.e. excited by one component only) have the same strength as the reciprocal specialist-to-generalist connections; (2) successful ratio recognition is achieved using latency-to-first-spike in the LN populations, which, in contrast to expectations with a population rate code, leads to a broadening of responses for higher overall concentrations, consistent with experimental observations; and (3) longer durations of the competition between LNs did not lead to higher recognition accuracy.
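
    The following is a highly simplified sketch of a latency-to-first-spike readout under competition, not the paper's MGC model: two mutually inhibiting leaky integrate-and-fire LNs receive the two ORN channels through mirror-image weight ratios (all parameters are illustrative assumptions). Which LN fires first reflects the blend ratio and is preserved when the overall concentration is scaled, while the absolute latency shortens.

        import numpy as np

        def first_spike_latencies(c1, c2, w=(1.0, 0.5), inh=0.8,
                                  tau=10.0, v_thr=1.0, dt=0.1, T=200.0, seed=1):
            rng = np.random.default_rng(seed)
            W = np.array([[w[0], w[1]],              # LN1 weights ORN channel 1 more strongly
                          [w[1], w[0]]])             # LN2 uses the reversed weighting
            drive = W @ np.array([c1, c2])
            v = np.zeros(2)                          # LN membrane potentials
            lat = [None, None]
            for step in range(int(T / dt)):
                noise = 0.05 * rng.standard_normal(2)
                v += dt / tau * (-v + drive + noise)
                for i in np.where(v >= v_thr)[0]:
                    if lat[i] is None:
                        lat[i] = step * dt           # latency to first spike
                    v[1 - i] -= inh                  # competition: inhibit the rival LN
                    v[i] = 0.0                       # reset after a spike
            return lat

        print(first_spike_latencies(1.0, 0.5))       # blend favouring channel 1
        print(first_spike_latencies(2.0, 1.0))       # same ratio at double the concentration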

    The role of inhibitory feedback for information processing in thalamocortical circuits

    The information transfer in the thalamus is blocked dynamically during sleep, in conjunction with the occurrence of spindle waves. As the theoretical understanding of the mechanism remains incomplete, we analyze two modeling approaches for a recent experiment by Le Masson et al. on the thalamocortical loop. In a first step, we use a conductance-based neuron model to reproduce the experiment computationally. In a second step, we model the same system using an extended Hindmarsh-Rose model and compare the results with the conductance-based model. In the framework of both models, we investigate the influence of inhibitory feedback on the information transfer in a typical thalamocortical oscillator. We find that our extended Hindmarsh-Rose neuron model, which is computationally less costly and thus suitable for large-scale simulations, reproduces the experiment better than the conductance-based model. Further, in agreement with the experiment of Le Masson et al., inhibitory feedback leads to stable self-sustained oscillations which mask the incoming input and thereby reduce the information transfer significantly. Comment: 16 pages, 15 EPS figures included. To appear in Physical Review
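
    For reference, a minimal sketch of the standard three-variable Hindmarsh-Rose neuron (not the paper's extended variant or its conductance-based counterpart), integrated with forward Euler using a commonly cited bursting parameter set; all values are illustrative assumptions.

        import numpy as np

        def hindmarsh_rose(I=3.0, a=1.0, b=3.0, c=1.0, d=5.0,
                           r=0.006, s=4.0, x_rest=-1.6, dt=0.01, T=1000.0):
            """Forward-Euler integration of the standard Hindmarsh-Rose model."""
            n = int(T / dt)
            x, y, z = -1.6, -10.0, 2.0               # x: membrane potential, y: fast recovery, z: slow adaptation
            trace = np.empty(n)
            for k in range(n):
                dx = y - a * x**3 + b * x**2 - z + I
                dy = c - d * x**2 - y
                dz = r * (s * (x - x_rest) - z)
                x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
                trace[k] = x
            return trace

        v = hindmarsh_rose()                         # bursting membrane-potential trace
        print(v.min(), v.max())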

    Simulation of networks of spiking neurons: A review of tools and strategies

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We then give an overview of the different simulators and simulation environments presently available (restricted to those that are freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley-type and integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks. Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007)
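
    As a toy illustration of the benchmark style described above (not one of the review's actual benchmark scripts), the sketch below runs a clock-driven simulation of a small random network of current-based leaky integrate-and-fire neurons; all parameters are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        N, p, dt, T = 200, 0.1, 0.1, 1000.0          # neurons, connection probability, step (ms), duration (ms)
        tau, v_rest, v_thr, v_reset = 20.0, 0.0, 20.0, 0.0
        w_syn, I_ext = 0.1, 25.0                     # synaptic weight (mV), constant drive (mV)

        W = (rng.random((N, N)) < p) * w_syn         # random excitatory connectivity
        np.fill_diagonal(W, 0.0)
        v = rng.uniform(v_rest, v_thr, N)            # random initial membrane potentials
        spike_count = 0

        for _ in range(int(T / dt)):                 # clock-driven: fixed time grid
            fired = v >= v_thr
            spike_count += fired.sum()
            v[fired] = v_reset
            syn_input = W @ fired.astype(float)      # current-based, delta-pulse synapses
            v += dt / tau * (-(v - v_rest) + I_ext) + syn_input

        print("mean firing rate (Hz):", 1000.0 * spike_count / (N * T))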

    Network-State Modulation of Power-Law Frequency-Scaling in Visual Cortical Neurons

    Various types of neural-based signals, such as EEG, local field potentials and intracellular synaptic potentials, integrate multiple sources of activity distributed across large assemblies. They have in common a power-law frequency-scaling structure at high frequencies, but it is still unclear whether this scaling property is dominated by intrinsic neuronal properties or by network activity. The latter case is particularly interesting because, if frequency-scaling reflects the network state, it could be used to characterize the functional impact of the connectivity. In intracellularly recorded neurons of cat primary visual cortex in vivo, the power spectral density of Vm activity displays a power-law structure at high frequencies with a fractional scaling exponent. We show that this exponent is not constant, but depends on the visual statistics used to drive the network. To investigate the determinants of this frequency-scaling, we considered a generic recurrent model of cortex receiving a retinotopically organized external input. As in the in vivo case, our in computo simulations show that the scaling exponent reflects the correlation level imposed in the input. This systematic dependence was also replicated at the single-cell level by controlling independently, in a parametric way, the strength and the temporal decay of the pairwise correlation between presynaptic inputs. This last model was implemented in vitro by imposing the correlation control on artificial presynaptic spike trains through dynamic-clamp techniques. These in vitro manipulations induced a modulation of the scaling exponent similar to that observed in vivo and predicted in computo. We conclude that the frequency-scaling exponent of the Vm reflects stimulus-driven correlations in the cortical network activity. Therefore, we propose that the scaling exponent could be used to read out the “effective” connectivity responsible for the dynamical signature of the population signals measured at different integration levels, from Vm to LFP, EEG and fMRI.
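
    A minimal sketch of the basic measurement (synthetic data, not the recorded Vm traces): the high-frequency power-law exponent of a signal's power spectral density is estimated by a linear fit of log-power versus log-frequency. The test signal is an Ornstein-Uhlenbeck process, whose Lorentzian spectrum falls off as 1/f^2 well above its corner frequency, so the fitted exponent should come out near -2; all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        fs, T, tau = 1000.0, 200.0, 0.1              # sampling rate (Hz), duration (s), OU time constant (s)
        dt = 1.0 / fs
        n = int(fs * T)

        x = np.empty(n)                              # Euler-Maruyama simulation of an OU process
        x[0] = 0.0
        for k in range(1, n):
            x[k] = x[k - 1] - dt / tau * x[k - 1] + np.sqrt(dt) * rng.standard_normal()

        freqs = np.fft.rfftfreq(n, d=dt)
        psd = np.abs(np.fft.rfft(x))**2 / (fs * n)   # periodogram estimate of the PSD

        band = (freqs > 20.0) & (freqs < 100.0)      # fit well above the ~1.6 Hz corner frequency
        slope, _ = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)
        print("scaling exponent:", slope)            # expected to be roughly -2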

    A Neurocomputational Model of Stimulus-Specific Adaptation to Oddball and Markov Sequences

    Stimulus-specific adaptation (SSA) occurs when the spike rate of a neuron decreases with repetitions of the same stimulus, but recovers when a different stimulus is presented. It has been suggested that SSA in single auditory neurons may provide information to change-detection mechanisms evident at other scales (e.g., mismatch negativity in the event-related potential), and participate in the control of attention and the formation of auditory streams. This article presents a spiking-neuron model that accounts for SSA in terms of the convergence of depressing synapses that convey feature-specific inputs. The model is anatomically plausible, comprising just a few homogeneously connected populations, and does not require organised feature maps. The model is calibrated to match the SSA measured in the cortex of the awake rat, as reported in one study. The effects of frequency separation, deviant probability, repetition rate and duration upon SSA are investigated. With the same parameter set, the model generates responses consistent with a wide range of published data obtained in other auditory regions using other stimulus configurations, such as block, sequential and random stimuli. A new stimulus paradigm is introduced, which generalises the oddball concept to Markov chains, allowing the experimenter to vary the tone probabilities and the rate of switching independently. The model predicts greater SSA for higher rates of switching. Finally, the issue of whether rarity or novelty elicits SSA is addressed by comparing the responses of the model to deviants in the context of a sequence of a single standard or of many standards. The results support the view that synaptic adaptation alone can explain almost all aspects of SSA reported to date, including its purported novelty component, and that non-trivial networks of depressing synapses can intensify this novelty response.
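
    A minimal sketch of the core mechanism (illustrative parameters, not the calibrated model): two feature-specific input channels converge onto one unit through depressing synapses modelled as simple resource depletion with exponential recovery. In an oddball sequence the frequently driven channel's synapse stays depressed, so the rare tone evokes the larger response.

        import numpy as np

        rng = np.random.default_rng(0)
        tau_rec, U = 1.5, 0.6                        # recovery time constant (s), release fraction
        isi = 0.25                                   # inter-stimulus interval (s), i.e. 4 Hz repetition
        p_deviant = 0.1                              # oddball probability
        resources = np.ones(2)                       # available synaptic resources per channel
        responses = {"standard": [], "deviant": []}

        for _ in range(2000):
            ch = 1 if rng.random() < p_deviant else 0                         # 0 = standard, 1 = deviant
            resources += (1.0 - resources) * (1.0 - np.exp(-isi / tau_rec))   # recovery between tones
            epsp = U * resources[ch]                                          # response of the driven synapse
            resources[ch] -= epsp                                             # depression of the used channel
            responses["deviant" if ch else "standard"].append(epsp)

        print("mean standard response:", np.mean(responses["standard"]))
        print("mean deviant response: ", np.mean(responses["deviant"]))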