Finding the event structure of neuronal spike trains
The Local Field Potential Reflects Surplus Spike Synchrony
The oscillatory nature of the cortical local field potential (LFP) is
commonly interpreted as a reflection of synchronized network activity, but its
relationship to observed transient coincident firing of neurons on the
millisecond time-scale remains unclear. Here we present experimental evidence
to reconcile the notions of synchrony at the level of neuronal spiking and at
the mesoscopic scale. We demonstrate that only in time intervals of excess
spike synchrony are coincident spikes better entrained to the LFP than
predicted by the locking of the individual spikes. This effect is enhanced in
periods of large LFP amplitudes. A quantitative model explains the LFP dynamics
by the orchestrated spiking activity in neuronal groups that contribute the
observed surplus synchrony. From the correlation analysis, we infer that
neurons participate in different constellations but contribute only a fraction
of their spikes to temporally precise spike configurations, suggesting a dual
coding scheme of rate and synchrony. This finding provides direct evidence for
the hypothesized relation that precise spike synchrony constitutes a major
temporally and spatially organized component of the LFP. Revealing that
transient spike synchronization correlates not only with behavior, but also
with a mesoscopic brain signal corroborates its relevance in cortical
processing.
Comment: 45 pages, 8 figures, 3 supplemental figures
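The notion of surplus synchrony — coincidences beyond what the neurons' individual firing rates predict — can be illustrated with a minimal coincidence count. This is a toy sketch assuming 1 ms binary bins and a simple rate-based chance level, not the authors' actual analysis:

```python
import numpy as np

def excess_synchrony(train_a, train_b):
    """Observed coincidence count minus the count expected if the two
    binary spike trains (one entry per 1 ms bin) fired independently."""
    train_a = np.asarray(train_a, dtype=float)
    train_b = np.asarray(train_b, dtype=float)
    n = len(train_a)
    empirical = np.sum(train_a * train_b)          # observed coincidences
    expected = train_a.sum() * train_b.sum() / n   # chance level from rates
    return empirical - expected

rng = np.random.default_rng(1)

# Two independent Poisson-like trains: excess fluctuates around zero.
a = (rng.random(10_000) < 0.02).astype(int)
b = (rng.random(10_000) < 0.02).astype(int)

# Inject shared events to mimic surplus synchrony.
shared = (rng.random(10_000) < 0.01).astype(int)
a_sync = np.maximum(a, shared)
b_sync = np.maximum(b, shared)
```

With the injected shared events, the excess rises far above the near-zero value of the independent pair, which is the kind of quantity the LFP entrainment analysis conditions on.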
Information transmission in oscillatory neural activity
Periodic neural activity not locked to the stimulus or to motor responses is
usually ignored. Here, we present new tools for modeling and quantifying the
information transmission based on periodic neural activity that occurs with
quasi-random phase relative to the stimulus. We propose a model to reproduce
characteristic features of oscillatory spike trains, such as histograms of
inter-spike intervals and phase locking of spikes to an oscillatory influence.
The proposed model is based on an inhomogeneous Gamma process governed by a
density function that is a product of the usual stimulus-dependent rate and a
quasi-periodic function. Further, we present an analysis method generalizing
the direct method (Rieke et al, 1999; Brenner et al, 2000) to assess the
information content in such data. We demonstrate these tools on recordings from
relay cells in the lateral geniculate nucleus of the cat.
Comment: 18 pages, 8 figures, to appear in Biological Cybernetics
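The generative idea — gamma-distributed intervals modulated by a time-varying rate — can be sketched via time rescaling: draw unit-mean gamma intervals in "operational time" and map them back through the integrated rate. Function names and parameters here are illustrative, not the paper's implementation, and the quasi-periodic factor is folded into `rate_fn`:

```python
import numpy as np

def gamma_spike_train(rate_fn, order_k, t_max, dt=1e-3, seed=0):
    """Inhomogeneous gamma process via time rescaling: gamma(k, 1/k)
    intervals (mean 1) in operational time Lambda(t), mapped back to
    real time. rate_fn gives the instantaneous rate in Hz."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, t_max, dt)
    cum = np.cumsum(rate_fn(t)) * dt          # operational time Lambda(t)
    spikes = []
    s = rng.gamma(order_k, 1.0 / order_k)
    while s < cum[-1]:
        spikes.append(t[np.searchsorted(cum, s)])  # invert Lambda numerically
        s += rng.gamma(order_k, 1.0 / order_k)
    return np.array(spikes)

# Rate with a quasi-periodic (8 Hz) modulation around 40 Hz.
spikes = gamma_spike_train(
    lambda t: 40.0 * (1.0 + 0.5 * np.sin(2 * np.pi * 8 * t)),
    order_k=4, t_max=10.0)
```

Interval histograms of such trains show the gamma-shaped refractoriness, and spike phases lock to the sinusoidal modulation, mirroring the features the model is built to reproduce.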
A generative spike train model with time-structured higher order correlations
Emerging technologies are revealing the spiking activity in ever larger
neural ensembles. Frequently, this spiking is far from independent, with
correlations in the spike times of different cells. Understanding how such
correlations impact the dynamics and function of neural ensembles remains an
important open problem. Here we describe a new, generative model for correlated
spike trains that can exhibit many of the features observed in data. Extending
prior work in mathematical finance, this generalized thinning and shift (GTaS)
model creates marginally Poisson spike trains with diverse temporal correlation
structures. We give several examples which highlight the model's flexibility
and utility. For instance, we use it to examine how a neural network responds
to highly structured patterns of inputs. We then show that the GTaS model is
analytically tractable, and derive cumulant densities of all orders in terms of
model parameters. The GTaS framework can therefore be an important tool in the
experimental and theoretical exploration of neural dynamics.
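The thinning-and-shift construction can be sketched as follows. This is a simplified toy, not the published model: each mother-process event is marked with a subset of neurons drawn from a fixed distribution (this is what creates structured higher-order correlations), then copied to those neurons with a deterministic per-neuron delay, whereas the full GTaS model allows random shifts:

```python
import numpy as np

def gtas_sketch(rate_mother, subsets, probs, shifts, t_max, seed=0):
    """Toy thinning-and-shift generator: each event of a mother Poisson
    process (rate_mother, in Hz) is assigned a subset of neurons drawn
    from `probs` and copied to those neurons with per-neuron delays.
    Each neuron's marginal train is Poisson."""
    rng = np.random.default_rng(seed)
    n_neurons = len(shifts)
    n_events = rng.poisson(rate_mother * t_max)
    mother = rng.uniform(0.0, t_max, n_events)
    labels = rng.choice(len(subsets), size=n_events, p=probs)
    trains = [[] for _ in range(n_neurons)]
    for t, lab in zip(mother, labels):
        for i in subsets[lab]:
            trains[i].append(t + shifts[i])
    return [np.sort(np.array(tr)) for tr in trains]

# Two neurons; 20% of mother events are shared, producing correlated pairs.
trains = gtas_sketch(100.0, [(0,), (1,), (0, 1)],
                     [0.4, 0.4, 0.2], [0.0, 0.002], t_max=50.0)
```

Each neuron here keeps 60% of mother events, so its marginal rate is 60 Hz; the shared subset produces coincidences at a fixed 2 ms lag, a simple instance of the temporal correlation structure the model controls.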
Time Resolution Dependence of Information Measures for Spiking Neurons: Atoms, Scaling, and Universality
The mutual information between stimulus and spike-train response is commonly
used to monitor neural coding efficiency, but neuronal computation broadly
conceived requires more refined and targeted information measures of
input-output joint processes. A first step towards that larger goal is to
develop information measures for individual output processes, including
information generation (entropy rate), stored information (statistical
complexity), predictable information (excess entropy), and active information
accumulation (bound information rate). We calculate these for spike trains
generated by a variety of noise-driven integrate-and-fire neurons as a function
of time resolution and for alternating renewal processes. We show that their
time-resolution dependence reveals coarse-grained structural properties of
interspike interval statistics; e.g., ε-entropy rates that diverge less
quickly than the firing rate indicate interspike interval correlations. We also
find evidence that the excess entropy and regularized statistical complexity of
different types of integrate-and-fire neurons are universal in the
continuous-time limit in the sense that they do not depend on mechanism
details. This suggests a surprising simplicity in the spike trains generated by
these model neurons. Interestingly, neurons with gamma-distributed ISIs and
neurons whose spike trains are alternating renewal processes do not fall into
the same universality class. These results lead to two conclusions. First, the
dependence of information measures on time resolution reveals mechanistic
details about spike train generation. Second, information measures can be used
as model selection tools for analyzing spike train processes.
Comment: 20 pages, 6 figures;
http://csc.ucdavis.edu/~cmg/compmech/pubs/trdctim.ht
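The time-resolution dependence studied here can be probed numerically with a plug-in block-entropy estimate on a binarized spike train. This is a rough toy illustration, not the estimators used in the paper:

```python
import numpy as np
from collections import Counter

def block_entropy_rate(binary, block_len):
    """Plug-in entropy-rate estimate in bits per bin: H(L) / L over
    overlapping length-L blocks of a binary sequence."""
    blocks = Counter(tuple(binary[i:i + block_len])
                     for i in range(len(binary) - block_len + 1))
    p = np.array(list(blocks.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum() / block_len)

rng = np.random.default_rng(0)
fine = (rng.random(20_000) < 0.5).astype(int)               # fine bins
coarse = (fine.reshape(-1, 2).sum(axis=1) > 0).astype(int)  # 2x coarser bins

h_fine = block_entropy_rate(fine, 3)    # ~1 bit/bin for a fair coin
h_coarse = block_entropy_rate(coarse, 3)
```

Merging bins biases the symbol distribution (here toward 1s) and lowers the per-bin entropy rate; tracking how such estimates scale as the bin width shrinks is the kind of resolution dependence the paper turns into a diagnostic of spike-train structure.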
Fast, scalable, Bayesian spike identification for multi-electrode arrays
We present an algorithm to identify individual neural spikes observed on
high-density multi-electrode arrays (MEAs). Our method can distinguish large
numbers of distinct neural units, even when spikes overlap, and accounts for
intrinsic variability of spikes from each unit. As MEAs grow larger, it is
important to find spike-identification methods that are scalable, that is, the
computational cost of spike fitting should scale well with the number of units
observed. Our algorithm accomplishes this goal, and is fast, because it
exploits the spatial locality of each unit and the basic biophysics of
extracellular signal propagation. Human intervention is minimized and
streamlined via a graphical interface. We illustrate our method on data from a
mammalian retina preparation and document its performance on simulated data
consisting of spikes added to experimentally measured background noise. The
algorithm is highly accurate.
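The unit-assignment step at the heart of such spike sorters can be caricatured as nearest-template matching. This toy sketch ignores what makes the real problem hard — overlapping spikes, amplitude variability, and spatial footprints across electrodes — and is not the paper's algorithm:

```python
import numpy as np

def match_template(snippet, templates):
    """Assign a detected waveform snippet to the unit whose mean
    waveform template minimizes the squared error."""
    errs = [np.sum((snippet - t) ** 2) for t in templates]
    return int(np.argmin(errs))

# Two hypothetical unit templates: a positive and a negative deflection.
x = np.linspace(0.0, np.pi, 30)
templates = [np.sin(x), -np.sin(x)]

# A noisy spike from unit 1 is still assigned correctly.
rng = np.random.default_rng(0)
snippet = templates[1] + 0.1 * rng.standard_normal(30)
unit = match_template(snippet, templates)
```

Scalable methods restrict this comparison to the few electrodes near each unit, which is the spatial-locality idea the abstract describes.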