    Phase locking below rate threshold in noisy model neurons

    The property of a neuron to phase-lock to an oscillatory stimulus before adapting its spike rate to the stimulus frequency plays an important role in the auditory system. We investigate under which conditions neurons exhibit this phase locking below rate threshold. To this end, we simulate neurons with the widely used leaky integrate-and-fire (LIF) model. By tuning parameters, we can arrange either an irregular spontaneous or a tonic spiking mode. When the neuron is stimulated in either mode, a significant rise of the vector strength prior to a noticeable change of the spike rate can be observed. Combining analytic reasoning with numerical simulations, we trace this observation back to a modulation of interspike intervals, which itself requires spikes to be only loosely coupled. We test the limits of this conception by simulating an LIF model with threshold fatigue, which generates pronounced anticorrelations between subsequent interspike intervals. In addition, we evaluate the LIF response for harmonic stimuli of various frequencies and discuss the extension to more complex stimuli. It seems that phase locking below rate threshold occurs generically for all zero-mean stimuli. Finally, we discuss our findings in the context of stimulus detection.
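    A minimal sketch of the kind of experiment described above, assuming illustrative parameter values rather than the paper's: a noisy LIF neuron driven by a weak zero-mean sinusoid, for which the vector strength rises while the firing rate barely moves.

```python
# Hedged sketch: noisy LIF neuron with a weak sinusoidal stimulus.
# All parameter values are illustrative, not taken from the paper.
import numpy as np

def lif_spikes(amp, f=10.0, mu=0.8, sigma=0.3, tau=0.02,
               dt=1e-4, T=50.0, v_th=1.0, v_reset=0.0, seed=0):
    """Euler-Maruyama simulation of a leaky integrate-and-fire neuron."""
    rng = np.random.default_rng(seed)
    v, t_spk = 0.0, []
    for k in range(int(T / dt)):
        t = k * dt
        stim = amp * np.sin(2 * np.pi * f * t)        # zero-mean stimulus
        v += (mu + stim - v) * dt / tau + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        if v >= v_th:                                 # threshold crossing: spike and reset
            t_spk.append(t)
            v = v_reset
    return np.asarray(t_spk)

def vector_strength(t_spk, f):
    """Resultant length of spike phases relative to the stimulus cycle."""
    return np.abs(np.exp(2j * np.pi * f * t_spk).mean())

for amp in [0.0, 0.05, 0.1]:
    s = lif_spikes(amp)
    print(f"amp={amp:.2f}  rate={len(s)/50.0:5.1f} Hz  VS={vector_strength(s, 10.0):.3f}")
```

    Here mu is subthreshold, so spikes are noise-driven (the irregular spontaneous mode); raising mu above v_th would give the tonic mode.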

    Multiscale Computations on Neural Networks: From the Individual Neuron Interactions to the Macroscopic-Level Analysis

    We show how the Equation-Free approach for multiscale computations can be exploited to systematically study the dynamics of neural interactions on a random regular connected graph under a pairwise representation perspective. Using an individual-based microscopic simulator as a black-box coarse-grained timestepper, and with the aid of simulated annealing, we compute the coarse-grained equilibrium bifurcation diagram and analyze the stability of the stationary states, sidestepping the need to obtain explicit closures at the macroscopic level. We also exploit the scheme to perform a rare-events analysis by estimating an effective Fokker-Planck equation describing the evolving probability density function of the corresponding coarse-grained observables.
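    The coarse timestepper idea is compact enough to sketch. In the sketch below the microscopic update is a placeholder with hypothetical rates, not the paper's pairwise model on a random regular graph; the point is the lift-run-restrict loop and locating coarse equilibria without a closed macroscopic equation.

```python
# Hedged sketch of the Equation-Free lift/run/restrict loop. The microscopic
# update is a placeholder; the paper's black box is an individual-based
# simulator of neural interactions on a random regular graph.
import numpy as np

rng = np.random.default_rng(1)

def lift(rho, n=2000):
    """Microscopic state (binary node activities) consistent with density rho."""
    return (rng.random(n) < rho).astype(int)

def micro_step(state, p_on=0.3, p_off=0.2):
    """Placeholder microscopic dynamics (hypothetical flip rates)."""
    turn_on = (rng.random(state.size) < p_on) & (state == 0)
    turn_off = (rng.random(state.size) < p_off) & (state == 1)
    return state ^ turn_on ^ turn_off

def restrict(state):
    """Coarse-grained observable: mean activity."""
    return state.mean()

def coarse_map(rho, t_steps=10):
    """Black-box coarse timestepper: lift, run the micro-simulator, restrict."""
    state = lift(rho)
    for _ in range(t_steps):
        state = micro_step(state)
    return restrict(state)

# Stable coarse equilibria via damped iteration on the coarse map; unstable
# branches of the bifurcation diagram would need Newton-type iteration instead.
rho = 0.5
for _ in range(50):
    rho += 0.5 * (coarse_map(rho) - rho)
print("coarse equilibrium estimate:", round(rho, 3))
```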

    Numerical Solution of Differential Equations by the Parker-Sochacki Method

    A tutorial is presented which demonstrates the theory and usage of the Parker-Sochacki method of numerically solving systems of differential equations. Solutions are demonstrated for the case of projectile motion in air, and for the classical Newtonian N-body problem with mutual gravitational attraction. Comment: Added in July 2010: This tutorial has been posted since 1998 on a university web site, but has now been cited and praised in one or more refereed journals. I am therefore submitting it to the Cornell arXiv so that it may be read in response to its citations. See "Spiking neural network simulation: numerical integration with the Parker-Sochacki method," J. Comput. Neurosci., by Robert D. Stewart & Wyeth Bair, and http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2717378
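    For readers unfamiliar with the method, the core idea fits in a few lines: for an ODE with polynomial right-hand side, the Maclaurin coefficients of the solution can be generated order by order, with products handled as Cauchy products. A minimal sketch for y' = y^2, y(0) = 1, whose exact solution is 1/(1 - t):

```python
# Hedged sketch of the Parker-Sochacki idea on y' = y^2, y(0) = 1.
# Matching powers of t in the ODE gives (k+1)*y[k+1] = coefficient of t^k
# in y^2, which is a Cauchy product of the coefficients found so far.
def parker_sochacki_coeffs(y0, order):
    y = [y0]                                   # y[k] = k-th Maclaurin coefficient
    for k in range(order):
        cauchy = sum(y[i] * y[k - i] for i in range(k + 1))  # coeff of t^k in y^2
        y.append(cauchy / (k + 1))
    return y

def horner(coeffs, t):
    """Evaluate the truncated series at t."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * t + c
    return acc

coeffs = parker_sochacki_coeffs(1.0, order=30)
for t in [0.1, 0.5, 0.9]:
    print(f"t={t}: series={horner(coeffs, t):.6f}   exact={1 / (1 - t):.6f}")
```

    The t=0.9 case shows the truncation error growing near the radius of convergence; in practice the method is applied over short steps, re-expanding the series at each step.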

    Identification of linear and nonlinear sensory processing circuits from spiking neuron data

    Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings demonstrate the applicability and evaluate the performance of the proposed algorithms.
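    To see why spike times alone can carry enough information, consider the ideal integrate-and-fire case: the neuron fires when the integral of its bias plus the filter output reaches a fixed threshold. In a notation common to this line of work (bias b, integration constant kappa, threshold delta; the symbols are our choice, not quoted from the letter), each interspike interval then yields one linear measurement of the filter output v(t):

\[
\int_{t_k}^{t_{k+1}} v(t)\,dt \;=\; \kappa\,\delta \;-\; b\,(t_{k+1} - t_k).
\]

    With enough spikes these constraints pin down the filter, which is the starting point for identification algorithms of this kind.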

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, the first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant Nos. NSF/EIA-0130708 and PHY 0414174; NIH Grant Nos. 1 R01 NS50945 and NS40110; MEC BFI2003-07276; and Fundación BBVA.

    A discrete time neural network model with spiking neurons II. Dynamics with noise

    We provide rigorous and exact results characterizing the statistics of spike trains in a network of leaky integrate-and-fire neurons, where time is discrete and where neurons are subject to noise, without restriction on the synaptic weights. We show the existence and uniqueness of an invariant measure of Gibbs type and discuss its properties. We also discuss Markovian approximations and relate them to the approaches currently used in computational neuroscience to analyse experimental spike train statistics. Comment: 43 pages, revised version, to appear in Journal of Mathematical Biology.
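    For orientation, a common form of the discrete-time leaky integrate-and-fire dynamics studied in this line of work is (our notation, with leak rate gamma < 1, unrestricted synaptic weights W_{ij}, external input I_i, and noise B_i(t)):

\[
V_i(t+1) \;=\; \gamma\,V_i(t)\,\bigl(1 - Z[V_i(t)]\bigr) \;+\; \sum_j W_{ij}\,Z[V_j(t)] \;+\; I_i \;+\; \sigma_B\,B_i(t),
\qquad
Z[V] \;=\; \mathbf{1}_{\{V \ge \theta\}},
\]

    so the membrane potential is reset whenever the neuron fired at the previous step, and the Gibbs-measure results characterize the law of the binary spike patterns Z[V_i(t)].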

    A phenomenological model of the electrically stimulated auditory nerve fiber: temporal and biphasic response properties

    We present a phenomenological model of electrically stimulated auditory nerve fibers (ANFs). The model reproduces the probabilistic and temporal properties of the ANF response to both monophasic and biphasic stimuli presented in isolation. The main contribution of the model lies in its ability to reproduce statistics of the ANF response (mean latency, jitter, and firing probability) under both monophasic and cathodic-anodic biphasic stimulation, without changing the model's parameters. The response statistics of the model depend on stimulus level and duration of the stimulating pulse, reproducing trends observed in the ANF. In the case of biphasic stimulation, the model reproduces the effects of pseudomonophasic pulse shapes and also the dependence on the interphase gap (IPG) of the stimulus pulse, an effect that is quantitatively reproduced. The model is fitted to ANF data using a procedure that uniquely determines each model parameter. It is thus possible to rapidly parameterize a large population of neurons to reproduce a given set of response statistic distributions. Our work extends the stochastic leaky integrate-and-fire (SLIF) neuron, a well-studied phenomenological model of the electrically stimulated neuron. We extend the SLIF neuron so as to produce a realistic latency distribution by delaying the moment of spiking. During this delay, spiking may be abolished by anodic current. By this means, the probability of the model neuron responding to a stimulus is reduced when a trailing phase of opposite polarity is introduced. By introducing a minimum wait period that must elapse before a spike may be emitted, the model is able to reproduce the differences in the threshold level observed in the ANF for monophasic and biphasic stimuli. Thus, the ANF response to a large variety of pulse shapes is reproduced correctly by this model.
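    The extension over the plain SLIF neuron is compact enough to sketch. The code below is our reading of the mechanism (stochastic threshold, delayed emission during which an anodic phase can abolish the spike, minimum wait period); all names and numbers are illustrative, not the paper's fitted parameters.

```python
# Hedged sketch of the extended-SLIF mechanism described above.
import numpy as np

rng = np.random.default_rng(0)

def response(level, anodic_level=0.0, threshold=1.0, rel_spread=0.1,
             latency_mean=0.6e-3, jitter=0.1e-3, min_wait=0.3e-3,
             abolish_coeff=0.5):
    """Spike time in seconds for one pulse, or None if no spike."""
    # Classic SLIF part: a noisy threshold gives a sigmoidal firing probability.
    noisy_threshold = threshold * (1.0 + rel_spread * rng.standard_normal())
    if level < noisy_threshold:
        return None
    # Extension: spiking is delayed, and during the delay a trailing anodic
    # (opposite-polarity) phase may abolish the pending spike.
    if anodic_level > 0 and rng.random() < abolish_coeff * anodic_level / level:
        return None
    # Latency distribution, floored by a minimum wait period.
    return max(latency_mean + jitter * rng.standard_normal(), min_wait)

# Adding a trailing anodic phase lowers the firing probability, the trend
# the model is built to capture for biphasic stimulation.
for anodic in [0.0, 1.0]:
    spikes = [response(1.05, anodic_level=anodic) for _ in range(2000)]
    print(f"anodic={anodic}: firing probability ~ "
          f"{np.mean([s is not None for s in spikes]):.2f}")
```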

    A view of Neural Networks as dynamical systems

    We consider neural networks from the point of view of dynamical systems theory. In this spirit we review recent results dealing with the following questions, addressed in the context of specific models: 1. Characterizing the collective dynamics; 2. Statistical analysis of spike trains; 3. Interplay between dynamics and network structure; 4. Effects of synaptic plasticity. Comment: Review paper, 51 pages, 10 figures, submitted.

    A Federated Design for a Neurobiological Simulation Engine: The CBI Federated Software Architecture

    Simulator interoperability and extensibility have become growing requirements in computational biology. To address this, we have developed a federated software architecture. It is federated in that it unites independent, disparate systems under a single cohesive view; it provides interoperability through its capability to communicate, execute programs, and transfer data among different independent applications; and it supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitate communication about similar concepts and functions for both users and developers. This provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsolete components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components.
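    As a concrete illustration of the pattern (hypothetical names throughout, not the CBI API): modules declare their ports, a separate control layer owns the run loop, and neither sees the other's internals.

```python
# Illustrative sketch only; module names, ports, and values are hypothetical.
from abc import ABC, abstractmethod

class Module(ABC):
    """A stand-alone component, documented purely by its inputs and outputs."""
    @abstractmethod
    def ports(self) -> dict: ...          # e.g. {"in": ["V_m"], "out": ["I_ion"]}
    @abstractmethod
    def advance(self, dt: float, inputs: dict) -> dict: ...

class ChannelModel(Module):               # high-level layer: biological concept
    def ports(self):
        return {"in": ["V_m"], "out": ["I_ion"]}
    def advance(self, dt, inputs):
        g_max, E_rev = 120.0, 50.0        # toy values (mS/cm^2, mV)
        return {"I_ion": g_max * (inputs["V_m"] - E_rev)}

def run(modules, dt, steps, bus):
    """Control layer: the loop knows only ports, never module internals."""
    for _ in range(steps):
        for m in modules:
            bus.update(m.advance(dt, {k: bus[k] for k in m.ports()["in"]}))
    return bus

print(run([ChannelModel()], dt=0.01, steps=1, bus={"V_m": -65.0}))
```

    Swapping a component then amounts to providing the same ports, and a component can be tested stand-alone by driving advance() directly, matching advantages (3) and (4) above.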