To what extent is the "neural code" a metric?
This paper reviews the different choices available for structuring spike trains
using deterministic metrics. Temporal constraints observed in biological or
computational spike trains are first taken into account. The relation to
existing neural codes (rate coding, rank coding, phase coding, ...) is then
discussed. To what extent the "neural code" contained in spike trains is
related to a metric appears to be a key point, and a generalization of the
Victor-Purpura metric family is proposed for temporally constrained causal
spike trains.
Comment: 5 pages, 5 figures. Proceedings of the conference NeuroComp200
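The Victor-Purpura family referenced above can be sketched as a spike-time edit distance: deleting or inserting a spike costs 1, and shifting a spike by Δt costs q·|Δt|, where q sets the temporal precision of the code. A minimal dynamic-programming sketch (the function name is mine, not from the paper):

```python
def victor_purpura(s1, s2, q):
    # Edit distance between two sorted spike-time lists:
    # insert/delete a spike -> cost 1; shift a spike by dt -> cost q*|dt|
    n, m = len(s1), len(s2)
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        G[i][0] = float(i)          # delete all spikes of s1
    for j in range(1, m + 1):
        G[0][j] = float(j)          # insert all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i][j] = min(G[i - 1][j] + 1,                      # delete
                          G[i][j - 1] + 1,                      # insert
                          G[i - 1][j - 1]
                          + q * abs(s1[i - 1] - s2[j - 1]))     # shift
    return G[n][m]
```

At q = 0 only spike counts matter (rate-like comparison); as q grows the metric demands increasingly precise spike timing.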
A control algorithm for autonomous optimization of extracellular recordings
This paper develops a control algorithm that can autonomously position an electrode so as to find and then maintain an optimal extracellular recording position. The algorithm was developed and tested in a two-neuron computational model representative of the cells found in cerebral cortex. The algorithm is based on a stochastic optimization of a suitably defined signal quality metric and is shown capable of finding the optimal recording position along representative sampling directions, as well as maintaining the optimal signal quality in the face of modeled tissue movements. The application of the algorithm to acute neurophysiological recording experiments and its potential implications for chronic recording electrode arrays are discussed.
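The stochastic optimization described above can be illustrated with a toy one-dimensional hill-climb over electrode depth: perturb the position randomly and keep the move only when the signal-quality metric improves. The function names and the quadratic quality profile below are illustrative assumptions, not the paper's actual model:

```python
import random

def stochastic_ascent(quality, x0, step=0.05, iters=200, seed=0):
    # Toy stochastic hill-climb: propose a Gaussian perturbation of the
    # current position and accept it only if the quality metric improves.
    rng = random.Random(seed)
    x, best = x0, quality(x0)
    for _ in range(iters):
        cand = x + rng.gauss(0.0, step)
        q = quality(cand)
        if q > best:
            x, best = cand, q
    return x

# Hypothetical quality profile peaking at depth 1.0 (not the paper's metric)
peak = stochastic_ascent(lambda d: -(d - 1.0) ** 2, x0=0.0)
```

Rerunning the same loop during a recording session would track slow drifts of the optimum, which is the "maintain" half of the algorithm's job.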
A comment on "A fast L_p spike alignment metric" by A. J. Dubbs, B. A. Seiler and M. O. Magnasco [arXiv:0907.3137]
Measuring the transmitted information in metric-based clustering has become
something of a standard test for the performance of a spike train metric. In
this comment, the recently proposed L_p Victor-Purpura metric is used to
cluster spiking responses to zebra finch songs, recorded from field L of
anesthetized zebra finch. It is found that for these data the L_p metrics with
p>1 modestly outperform the standard, p=1, Victor-Purpura metric. It is argued
that this is because for larger values of p, the metric comes closer to
performing windowed coincidence detection.
Comment: 9 pages, 3 figures included as late
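The "standard test" mentioned above, transmitted information in metric-based clustering, amounts to estimating the mutual information between the true stimulus class of each response and the cluster it is assigned to. A sketch computing this from a confusion matrix of counts (the helper name is mine):

```python
import math

def transmitted_info(confusion):
    # Mutual information (in bits) between true class (rows) and
    # assigned cluster (columns), estimated from a count matrix.
    total = sum(sum(row) for row in confusion)
    row_sums = [sum(row) for row in confusion]
    col_sums = [sum(col) for col in zip(*confusion)]
    h = 0.0
    for i, row in enumerate(confusion):
        for j, n in enumerate(row):
            if n:
                h += (n / total) * math.log2(
                    n * total / (row_sums[i] * col_sums[j]))
    return h
```

Perfect clustering of two equally likely classes yields 1 bit; chance-level assignment yields 0, which is why the quantity serves as a performance score for a spike train metric.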
Emergence of slow-switching assemblies in structured neuronal networks
Unraveling the interplay between connectivity and spatio-temporal dynamics in
neuronal networks is a key step to advance our understanding of neuronal
information processing. Here we investigate how particular features of network
connectivity underpin the propensity of neural networks to generate
slow-switching assembly (SSA) dynamics, i.e., sustained epochs of increased
firing within assemblies of neurons which transition slowly between different
assemblies throughout the network. We show that the emergence of SSA activity
is linked to spectral properties of the asymmetric synaptic weight matrix. In
particular, the leading eigenvalues that dictate the slow dynamics exhibit a
gap with respect to the bulk of the spectrum, and the associated Schur vectors
exhibit a measure of block-localization on groups of neurons, thus resulting in
coherent dynamical activity on those groups. Through simple rate models, we
gain analytical understanding of the origin and importance of the spectral gap,
and use these insights to develop new network topologies with alternative
connectivity paradigms which also display SSA activity. Specifically, SSA
dynamics involving excitatory and inhibitory neurons can be achieved by
modifying the connectivity patterns between both types of neurons. We also show
that SSA activity can occur at multiple timescales reflecting a hierarchy in
the connectivity, and demonstrate the emergence of SSA in small-world like
networks. Our work provides a step towards understanding how network structure
(uncovered through advancements in neuroanatomy and connectomics) can impact on
spatio-temporal neural activity and constrain the resulting dynamics.
Comment: The first two authors contributed equally -- 18 pages, including supplementary material, 10 Figures + 2 SI Figures
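The spectral diagnostics described above, a gap after the leading eigenvalues of the asymmetric weight matrix together with block-localized Schur vectors, can be probed numerically. The toy two-assembly weight matrix below is an illustrative assumption, not the paper's network model:

```python
import numpy as np
from scipy.linalg import schur

# Illustrative asymmetric weight matrix: two assemblies of 10 neurons
# with strong within-group coupling plus weak random background.
rng = np.random.default_rng(0)
W = 0.05 * rng.standard_normal((20, 20))
W[:10, :10] += 0.5
W[10:, 10:] += 0.5

# Leading eigenvalues separate from the bulk of the spectrum.
eig = np.linalg.eigvals(W)
order = np.argsort(-eig.real)
gap = eig.real[order[1]] - eig.real[order[2]]  # gap after leading pair

# Schur vectors (orthonormal columns of Z) are the natural basis for
# a non-normal matrix; block-localized ones signal assembly structure.
T, Z = schur(W, output='real')
```

Because synaptic weight matrices are non-normal, Schur vectors rather than eigenvectors give an orthonormal basis in which the slow modes can be read off.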
SuperSpike: Supervised learning in multi-layer spiking neural networks
A vast majority of computation in the brain is performed by spiking neural
networks. Despite the ubiquity of such spiking, we currently lack an
understanding of how biological spiking neural circuits learn and compute
in-vivo, as well as how we can instantiate such capabilities in artificial
spiking circuits in-silico. Here we revisit the problem of supervised learning
in temporally coding multi-layer spiking neural networks. First, by using a
surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based
three factor learning rule capable of training multi-layer networks of
deterministic integrate-and-fire neurons to perform nonlinear computations on
spatiotemporal spike patterns. Second, inspired by recent results on feedback
alignment, we compare the performance of our learning rule under different
credit assignment strategies for propagating output errors to hidden units.
Specifically, we test uniform, symmetric and random feedback, finding that
simpler tasks can be solved with any type of feedback, while more complex tasks
require symmetric feedback. In summary, our results open the door to obtaining
a better scientific understanding of learning and computation in spiking neural
networks by advancing our ability to train them to solve nonlinear problems
involving transformations between different spatiotemporal spike-time patterns.
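The surrogate gradient idea underlying SuperSpike replaces the undefined derivative of the hard spiking threshold with a smooth stand-in during the backward pass; a common choice is a fast-sigmoid-shaped function. A minimal sketch, where the sharpness parameter β and the function names are illustrative:

```python
import numpy as np

def spike(u, theta=1.0):
    # Forward pass: hard threshold on membrane potential u
    # (non-differentiable at u == theta).
    return (u >= theta).astype(float)

def surrogate_grad(u, theta=1.0, beta=10.0):
    # Backward pass: smooth surrogate used in place of the threshold's
    # derivative -- peaked at the threshold, decaying away from it.
    return 1.0 / (1.0 + beta * np.abs(u - theta)) ** 2
```

During training, the forward pass emits binary spikes while gradients flow through `surrogate_grad`, which is what lets error signals reach hidden units despite the discontinuity.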
Neural activity classification with machine learning models trained on interspike interval series data
The flow of information through the brain is reflected by the activity
patterns of neural cells. Indeed, these firing patterns are widely used as
input data to predictive models that relate stimuli and animal behavior to the
activity of a population of neurons. However, relatively little attention has
been paid to single-neuron spike trains as predictors of cell or network properties
in the brain. In this work, we introduce an approach to neuronal spike train
data mining which enables effective classification and clustering of neuron
types and network activity states based on single-cell spiking patterns. This
approach is centered around applying state-of-the-art time series
classification/clustering methods to sequences of interspike intervals recorded
from single neurons. We demonstrate good performance of these methods in tasks
involving classification of neuron type (e.g. excitatory vs. inhibitory cells)
and/or neural circuit activity state (e.g. awake vs. REM sleep vs. nonREM sleep
states) on an open-access cortical spiking activity dataset.
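The core preprocessing step described above, turning a single-cell spike train into an interspike-interval (ISI) sequence for downstream classification, can be sketched as follows; the summary features and the example trains are illustrative, not the paper's full time-series pipeline:

```python
import numpy as np

def isi_features(spike_times):
    # Convert a spike-time array into simple ISI summary features:
    # mean interval, interval std, and coefficient of variation (CV).
    isi = np.diff(np.sort(np.asarray(spike_times)))
    m, s = isi.mean(), isi.std()
    return np.array([m, s, s / m])

# Hypothetical examples: a perfectly regular train (CV near 0) vs an
# irregular Poisson-like train (CV near 1).
regular = isi_features(np.arange(0.0, 1.0, 0.05))
irregular = isi_features(
    np.cumsum(np.random.default_rng(1).exponential(0.05, 20)))
```

Feature vectors of this kind (or the raw ISI sequences themselves) can then be handed to any off-the-shelf time-series classifier or clustering method.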
Optimization of miRNA-seq data preprocessing.
The past two decades of microRNA (miRNA) research have solidified the role of these small non-coding RNAs as key regulators of many biological processes and promising biomarkers for disease. The concurrent development of high-throughput profiling technology has further advanced our understanding of the impact of their dysregulation on a global scale. Currently, next-generation sequencing is the platform of choice for the discovery and quantification of miRNAs. Despite this, there is no clear consensus on how the data should be preprocessed before conducting downstream analyses. Often overlooked, data preprocessing is an essential step in data analysis: the presence of unreliable features and noise can affect the conclusions drawn from downstream analyses. Using a spike-in dilution study, we evaluated the effects of several general-purpose aligners (BWA, Bowtie, Bowtie 2 and Novoalign), and normalization methods (counts-per-million, total count scaling, upper quartile scaling, trimmed mean of M-values, DESeq, linear regression, cyclic loess and quantile) with respect to the final miRNA count data distribution, variance, bias and accuracy of differential expression analysis. We make practical recommendations on the optimal preprocessing methods for the extraction and interpretation of miRNA count data from small RNA-sequencing experiments.
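Of the normalization methods compared, counts-per-million (CPM) is the simplest: each sample's counts are rescaled so that its library size sums to one million reads. A minimal sketch (the toy count matrix is illustrative):

```python
import numpy as np

def cpm(counts):
    # Counts-per-million: divide each sample (column) by its library
    # size, then scale to a nominal depth of one million reads.
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=0) * 1e6

# Two miRNAs (rows) measured in two samples (columns) with
# 10x different sequencing depths.
mat = np.array([[10, 100],
                [90, 900]])
norm = cpm(mat)
```

After CPM, both samples become directly comparable despite their different depths, which is the baseline the more elaborate methods (TMM, DESeq, quantile, etc.) try to improve on.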