Neuron dynamics in the presence of 1/f noise
Interest in understanding the interplay between noise and the response of a
non-linear device cuts across disciplinary boundaries. It is as relevant for
unmasking the dynamics of neurons in noisy environments as it is for designing
reliable nanoscale logic circuit elements and sensors. Most studies of noise in
non-linear devices are limited to either time-correlated noise with a
Lorentzian spectrum (of which white noise is a limiting case) or just white
noise. We use analytical theory and numerical simulations to study the impact
of the more ubiquitous "natural" noise with a 1/f frequency spectrum.
Specifically, we study the impact of 1/f noise on a leaky integrate-and-fire
model of a neuron. The impact of noise is considered on two quantities of
interest to neuron function: the spike-count Fano factor and the speed of
neuron response to a small step-like stimulus. For the perfect (non-leaky)
integrate-and-fire model, we show that the Fano factor can be expressed as an
integral over the noise spectrum weighted by a (low-pass) filter function. This
result elucidates the connection between low frequency noise and disorder in
neuron dynamics. We compare our results to experimental data from single
neurons in vivo, and show that the 1/f noise model provides much better
agreement than
the usual approximations based on Lorentzian noise. The low frequency noise,
however, complicates the case for information-coding schemes based on
interspike intervals by introducing variability in the neuron response time. On
a positive
note, the neuron response time to a step stimulus is, remarkably, nearly
optimal in the presence of 1/f noise. An explanation of this effect elucidates
how the brain can take advantage of noise to prime a subset of the neurons to
respond almost instantly to sudden stimuli.
Comment: Phys. Rev. E, in press
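The setup described in this abstract can be sketched in a few lines of Python. The sketch below (all parameter values are illustrative, not the paper's) drives a leaky integrate-and-fire neuron with synthetic 1/f noise obtained by spectral shaping, then estimates the spike-count Fano factor over fixed counting windows.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_over_f_noise(n, dt, beta=1.0):
    """Synthesize 1/f^beta noise by shaping white noise in the Fourier domain."""
    freqs = np.fft.rfftfreq(n, d=dt)
    white = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    shape = np.zeros_like(freqs)
    shape[1:] = freqs[1:] ** (-beta / 2.0)  # amplitude ~ f^(-beta/2) => power ~ f^(-beta)
    x = np.fft.irfft(white * shape, n=n)
    return x / x.std()

# Leaky integrate-and-fire neuron driven by 1/f noise (illustrative values)
dt, T = 1e-4, 200.0                  # time step and total duration [s]
tau, v_th, v_reset = 0.02, 1.0, 0.0  # membrane time constant [s], threshold, reset
mu, sigma = 40.0, 15.0               # mean drive and noise amplitude

n = int(T / dt)
eta = one_over_f_noise(n, dt)
v, spike_times = 0.0, []
for i in range(n):
    v += dt * (-v / tau + mu + sigma * eta[i])
    if v >= v_th:
        spike_times.append(i * dt)
        v = v_reset

# Spike-count Fano factor: variance/mean of counts in non-overlapping windows
window = 1.0
counts, _ = np.histogram(spike_times, bins=np.arange(0.0, T + window, window))
print(f"rate = {len(spike_times) / T:.1f} Hz, "
      f"Fano factor = {counts.var() / counts.mean():.2f}")
```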
Numerical Solution of Differential Equations by the Parker-Sochacki Method
A tutorial is presented which demonstrates the theory and usage of the
Parker-Sochacki method of numerically solving systems of differential
equations. Solutions are demonstrated for the case of projectile motion in air,
and for the classical Newtonian N-body problem with mutual gravitational
attraction.
Comment: Added in July 2010: This tutorial has been posted since 1998 on a
university web site, but has now been cited and praised in one or more
refereed journals. I am therefore submitting it to the Cornell arXiv so that
it may be read in response to its citations. See "Spiking neural network
simulation: numerical integration with the Parker-Sochacki method," J. Comput.
Neurosci., Robert D. Stewart & Wyeth Bair, and
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2717378
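The method's core idea, building the Maclaurin coefficients of the solution term by term via Cauchy products and re-expanding at each step, can be sketched on a toy problem. The example below uses y' = y^2, y(0) = 1 (exact solution 1/(1 - t)) rather than the tutorial's projectile or N-body examples.

```python
def ps_step(y0, h, order=20):
    """One Parker-Sochacki step for y' = y^2, starting from the local value y0."""
    y = [y0]                          # Maclaurin coefficients of the local solution
    for k in range(order):
        # Coefficient of t^k in y^2 is the Cauchy product of y with itself
        cauchy_k = sum(y[j] * y[k - j] for j in range(k + 1))
        y.append(cauchy_k / (k + 1))  # from y' = y^2: (k+1) y_{k+1} = (y^2)_k
    # Evaluate the truncated series at t = h by Horner's rule
    val = 0.0
    for c in reversed(y):
        val = val * h + c
    return val

y, h, n_steps = 1.0, 0.01, 50         # integrate from t = 0 to t = 0.5
for _ in range(n_steps):
    y = ps_step(y, h)
print(f"Parker-Sochacki: y(0.5) ~ {y:.10f}   exact: {1 / (1 - 0.5):.10f}")
```

Because each step produces a polynomial of arbitrary order, the accuracy per step is controlled by the truncation order rather than by the step size alone.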
Dynamical principles in neuroscience
Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?
This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.
Multiscale Computations on Neural Networks: From the Individual Neuron Interactions to the Macroscopic-Level Analysis
We show how the Equation-Free approach for multi-scale computations can be
exploited to systematically study the dynamics of neural interactions on a
random regular connected graph from a pairwise representation perspective.
Using an individual-based microscopic simulator as a black-box coarse-grained
timestepper, and with the aid of simulated annealing, we compute the
coarse-grained equilibrium bifurcation diagram and analyze the stability of the
stationary states, sidestepping the need to obtain explicit closures at the
macroscopic level. We also exploit the scheme to perform a rare-events analysis
by estimating an effective Fokker-Planck equation that describes the evolving
probability density function of the corresponding coarse-grained observables.
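The lift-run-restrict structure of the Equation-Free approach can be sketched schematically. In the toy below, the microscopic rule is an invented contact-process-like update standing in for the paper's individual-based simulator on a random regular graph, and a root finder plays the role of the coarse fixed-point computation; all names and parameters are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
N, REPLICAS, T_MICRO = 2000, 20, 10

def lift(rho):
    """Lift: create microscopic node states consistent with coarse density rho."""
    return rng.random((REPLICAS, N)) < rho

def run(states, a=0.4, d=0.1):
    """Black-box microscopic simulator (toy stochastic rule standing in for
    the paper's individual-based simulator)."""
    for _ in range(T_MICRO):
        rho = states.mean(axis=1, keepdims=True)
        born = (~states) & (rng.random(states.shape) < a * rho)  # activation
        died = states & (rng.random(states.shape) < d)           # deactivation
        states = states ^ born ^ died
    return states

def restrict(states):
    """Restrict: map microscopic states back to the coarse observable."""
    return states.mean()

def coarse_timestepper(rho):
    return restrict(run(lift(rho)))

# Coarse equilibria solve Phi(rho) = rho; a root finder on Phi(rho) - rho
# sidesteps any explicit macroscopic closure.
rho_star = brentq(lambda r: coarse_timestepper(r) - r, 0.05, 0.95, xtol=1e-3)
print(f"coarse equilibrium density ~ {rho_star:.3f}")  # mean-field: 1 - d/a = 0.75
```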
Identification of linear and nonlinear sensory processing circuits from spiking neuron data
Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
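For the ideal integrate-and-fire case, the identification principle can be sketched as follows (an illustrative reconstruction, not the letter's actual algorithm): each interspike interval yields one linear equation relating the integrated input, the bias b, and the threshold kappa = C*delta, so the parameters follow from least squares on the recorded spike train.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-4, 5.0
t = np.arange(0.0, T, dt)
u = 0.6 * np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)  # known input

# Simulate the "unknown" ideal IAF encoder (ground truth to be recovered)
b_true, kappa_true = 1.2, 0.015       # bias and threshold kappa = C * delta
v, spikes = 0.0, []
for i in range(t.size):
    v += dt * (b_true + u[i])
    if v >= kappa_true:
        spikes.append(t[i])
        v -= kappa_true
spikes = np.array(spikes)

# Identification: integral of u over [t_k, t_{k+1}] + b * Delta_k = kappa,
# i.e. one linear equation in (b, kappa) per interspike interval.
q = np.array([u[(t >= t0) & (t < t1)].sum() * dt
              for t0, t1 in zip(spikes[:-1], spikes[1:])])
delta = np.diff(spikes)
A = np.column_stack([delta, -np.ones_like(delta)])    # unknowns [b, kappa]
b_est, kappa_est = np.linalg.lstsq(A, -q, rcond=None)[0]
print(f"b: true {b_true}, est {b_est:.4f} | "
      f"kappa: true {kappa_true}, est {kappa_est:.5f}")
```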
Do brain networks evolve by maximizing their information flow capacity?
We propose a working hypothesis, supported by numerical simulations, that brain networks evolve based on the principle of maximization of their internal information flow capacity. We find that the synchronous behavior and the information flow capacity of the evolved networks reproduce well the same behaviors observed in the brain dynamical networks of Caenorhabditis elegans and humans, modeled as networks of Hindmarsh-Rose neurons whose graphs are given by these brain networks. We make a strong case to verify our hypothesis by showing that the neural networks with the closest graph distance to the brain networks of Caenorhabditis elegans and humans are the Hindmarsh-Rose neural networks evolved with coupling strengths that maximize information flow capacity. Surprisingly, we find that global neural synchronization levels decrease during brain evolution, reflecting an underlying globally non-Hebbian-like evolution process, which is driven by non-Hebbian-like learning behaviors for some of the clusters during evolution and by Hebbian-like learning rules for clusters where neurons increase their synchronization.
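The node dynamics used in such studies, the Hindmarsh-Rose neuron with electrical coupling, can be sketched for the simplest case of two mutually coupled cells. Parameters below are standard textbook values, and the simple synchronization error stands in for an observable; the paper's information-flow-capacity measure is not reproduced here.

```python
import numpy as np

def hr_rhs(state, I_ext, g, adj):
    """Hindmarsh-Rose dynamics with linear (electrical) coupling on x."""
    x, y, z = state
    coupling = g * (adj @ x - adj.sum(axis=1) * x)   # sum_j A_ij (x_j - x_i)
    dx = y + 3.0 * x**2 - x**3 - z + I_ext + coupling
    dy = 1.0 - 5.0 * x**2 - y
    dz = 0.006 * (4.0 * (x + 1.6) - z)
    return np.array([dx, dy, dz])

adj = np.array([[0.0, 1.0], [1.0, 0.0]])             # two mutually coupled neurons
state = np.array([[-1.0, -0.5], [-5.0, -4.0], [3.0, 3.2]])  # rows: x, y, z
dt, n_steps, g = 0.005, 200_000, 0.5
err, n_avg = 0.0, 0
for i in range(n_steps):
    state = state + dt * hr_rhs(state, I_ext=3.2, g=g, adj=adj)
    if i >= n_steps // 2:                            # discard the transient
        err += abs(state[0, 0] - state[0, 1])
        n_avg += 1
print(f"mean |x1 - x2| after transient: {err / n_avg:.4f}")
```

Sweeping the coupling strength g and recording such observables is the basic experiment underlying the evolution-of-couplings argument.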
On the simulation of nonlinear bidimensional spiking neuron models
Bidimensional spiking models currently attract considerable attention for their
simplicity and their ability to reproduce various spiking patterns of cortical
neurons, and are particularly used for large network simulations. These models
describe the dynamics of the membrane potential by a nonlinear differential
equation that blows up in finite time, coupled to a second equation for
adaptation. Spikes are emitted when the membrane potential blows up or reaches
a cutoff value. The precise simulation of the spike times and of the adaptation
variable is critical for it governs the spike pattern produced, and is hard to
compute accurately because of the exploding nature of the system at the spike
times. We thoroughly study the precision of fixed time-step integration schemes
for this type of models and demonstrate that these methods produce systematic
errors that are unbounded, as the cutoff value is increased, in the evaluation
of the two crucial quantities: the spike time and the value of the adaptation
variable at this time. Precise evaluation of these quantities therefore requires
very small time steps and long simulation times. In order to achieve a fixed
absolute precision in a reasonable computational time, we propose here a new
algorithm to simulate these systems based on a variable integration step method
that either integrates the original ordinary differential equation or the
equation of the orbits in the phase plane, and compare this algorithm with
fixed time-step Euler scheme and other more accurate simulation algorithms
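One way to realize the variable-step idea, a sketch in the same spirit rather than the authors' algorithm, is to hand the blow-up to an adaptive-step integrator with event detection at the cutoff, so the spike time and the adaptation value at the spike are located to the solver's tolerance rather than to a fixed time step. The model and parameters below (a quadratic Izhikevich-type neuron) are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, I = 0.02, 0.2, 10.0                 # adaptation parameters and input current
v_cutoff, v_reset, w_jump = 1e3, -65.0, 8.0

def rhs(t, y):
    v, w = y
    dv = 0.04 * v**2 + 5.0 * v + 140.0 - w + I  # quadratic model, blows up in finite time
    dw = a * (b * v - w)
    return [dv, dw]

def hit_cutoff(t, y):                     # event: membrane potential reaches the cutoff
    return y[0] - v_cutoff
hit_cutoff.terminal = True
hit_cutoff.direction = 1

t, y, spikes = 0.0, [-65.0, -13.0], []
while t < 200.0 and len(spikes) < 20:
    sol = solve_ivp(rhs, (t, 200.0), y, events=hit_cutoff,
                    rtol=1e-8, atol=1e-8, max_step=1.0)
    if sol.t_events[0].size == 0:
        break                             # no further blow-up before the end time
    t_spike = sol.t_events[0][0]
    w_spike = sol.y_events[0][0][1]       # adaptation value at the spike time
    spikes.append((t_spike, w_spike))
    t, y = t_spike, [v_reset, w_spike + w_jump]

print(f"{len(spikes)} spikes; first at t = {spikes[0][0]:.5f} ms"
      if spikes else "no spikes")
```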
A simple self-organized swimmer driven by molecular motors
We investigate a self-organized swimmer at low Reynolds numbers. The
microscopic swimmer is composed of three spheres that are connected by two
identical active linker arms. Each linker arm contains molecular motors and
elastic elements and can oscillate spontaneously. We find that such a system
immersed in a viscous fluid can self-organize into a state of directed
swimming. The swimmer provides a simple system to study important aspects of
the swimming of micro-organisms.
Comment: 6 pages, 4 figures
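The underlying three-sphere geometry can be sketched with a prescribed, phase-shifted stroke and Oseen-tensor hydrodynamics. In the paper the arms oscillate spontaneously through their molecular motors; here the non-reciprocal stroke is imposed by hand, purely to show how it produces net motion at low Reynolds number. All values are illustrative.

```python
import numpy as np

mu, a, D, d, omega = 1.0, 0.05, 1.0, 0.2, 2.0 * np.pi  # viscosity, radius, geometry, stroke
dt, n_cycles = 1e-3, 5
x = np.array([0.0, D, 2.0 * D])            # sphere positions on a line
x0 = x.mean()

def mobility(x):
    """Collinear mobility: Stokes self-term, Oseen 1/(4 pi mu r) cross terms."""
    M = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            M[i, j] = (1.0 / (6.0 * np.pi * mu * a) if i == j
                       else 1.0 / (4.0 * np.pi * mu * abs(x[i] - x[j])))
    return M

period = 2.0 * np.pi / omega
for k in range(int(n_cycles * period / dt)):
    t = k * dt
    dL1 = d * omega * np.cos(omega * t)                 # prescribed arm-length rates,
    dL2 = d * omega * np.cos(omega * t - np.pi / 2)     # phase-shifted (non-reciprocal)
    M = mobility(x)
    # Rows: v2 - v1 = dL1, v3 - v2 = dL2, total force = 0 (force-free swimmer)
    A = np.vstack([M[1] - M[0], M[2] - M[1], np.ones(3)])
    F = np.linalg.solve(A, [dL1, dL2, 0.0])
    x = x + dt * (M @ F)

print(f"net displacement per cycle ~ {(x.mean() - x0) / n_cycles:.5f}")
```

Setting the phase shift to zero makes the stroke reciprocal, and the net displacement per cycle collapses toward zero, as the scallop theorem requires.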
Analytical Integrate-and-Fire Neuron Models with Conductance-Based Dynamics for Event-Driven Simulation Strategies
Computational modeling with spiking neural networks
This chapter reviews recent developments in the area of spiking neural networks (SNN) and summarizes the main contributions to this research field. We give background information about the functioning of biological neurons, discuss the most important mathematical neural models along with neural encoding techniques, learning algorithms, and applications of spiking neurons. As a specific application, the functioning of the evolving spiking neural network (eSNN) classification method is presented in detail, and the principles of numerous eSNN-based applications are highlighted and discussed.
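As one concrete example of the neural encoding techniques mentioned here, the Gaussian-receptive-field population encoding commonly used with eSNN maps a real-valued feature onto spike times of several neurons, with earlier spikes for neurons whose receptive-field centre lies closer to the input. The sketch below follows the Bohte-style scheme; the parameter choices are illustrative.

```python
import numpy as np

def grf_encode(value, v_min, v_max, m=8, beta=1.5, t_max=1.0):
    """Encode a scalar in [v_min, v_max] into m spike times via Gaussian
    receptive fields: high excitation -> early spike."""
    i = np.arange(1, m + 1)
    centers = v_min + (2 * i - 3) / 2 * (v_max - v_min) / (m - 2)
    sigma = (v_max - v_min) / (beta * (m - 2))
    excitation = np.exp(-0.5 * ((value - centers) / sigma) ** 2)
    return t_max * (1.0 - excitation)

times = grf_encode(0.3, 0.0, 1.0)
print(np.round(times, 3))   # the neuron centred nearest 0.3 fires earliest
```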
