68 research outputs found
ANALYSIS OF NEURAL ACTIVITY OF THE HUMAN BASAL GANGLIA IN DYSTONIA: A REVIEW
Deep brain stimulation of the globus pallidus internus is an effective symptomatic treatment for pharmacoresistant dystonic syndromes, although its pathophysiological mechanisms of action are not yet fully understood. The aim of this review article is to provide an overview of state-of-the-art approaches to processing microelectrode recordings in dystonia, in order to define biomarkers that identify patients who will benefit from clinical deep brain stimulation. For this purpose, the essential elements of microelectrode processing are examined. Next, we investigate a real example of spike sorting in this field. We describe the baseline elements of microrecording processing, including data collection, preprocessing, feature computation, spike detection and sorting, and finally advanced spike train analysis. This study will help readers acquire the necessary information about these elements and their associated techniques, and should thus assist in identifying and proposing clinical hypotheses in the field of single-unit neuronal recordings in dystonia.
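The spike detection and sorting steps named above can be sketched as follows. This is a generic, minimal pipeline (robust-threshold detection, PCA, k-means) with illustrative parameters; it is not taken from any specific paper in the review.

```python
import numpy as np

def detect_spikes(signal, fs, thresh_mult=4.0, win=30):
    """Threshold-based spike detection on a (high-pass-filtered) trace.
    The threshold is a multiple of the robust noise estimate
    sigma = median(|x|)/0.6745, a common heuristic in MER pipelines."""
    sigma = np.median(np.abs(signal)) / 0.6745
    thresh = thresh_mult * sigma
    # negative-going threshold crossings
    crossings = np.where((signal[1:] < -thresh) & (signal[:-1] >= -thresh))[0]
    half = win // 2
    waveforms = np.array([signal[i - half:i + half]
                          for i in crossings if half <= i < len(signal) - half])
    spike_times = crossings[(crossings >= half) & (crossings < len(signal) - half)] / fs
    return waveforms, spike_times

def sort_spikes(waveforms, n_units=2, n_pc=3, n_iter=50):
    """Cluster spike waveforms with PCA followed by a minimal k-means
    (greedy farthest-point initialization keeps the sketch deterministic)."""
    X = waveforms - waveforms.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    feats = X @ Vt[:n_pc].T
    centers = [feats[0]]
    for _ in range(n_units - 1):
        d = np.min([((feats - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(feats[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((feats[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([feats[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(n_units)])
    return labels
```

A real pipeline would precede this with band-pass filtering and follow it with the spike-train statistics discussed in the review; the sketch covers only detection and sorting.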
Multivariate Autoregressive Modeling and Granger Causality Analysis of Multiple Spike Trains
Recent years have seen the emergence of microelectrode arrays and optical methods allowing simultaneous recording of spiking activity from populations of neurons in various parts of the nervous system. The analysis of multiple neural spike train data could benefit significantly from existing methods for multivariate time-series analysis, which have proven very powerful in the modeling and analysis of continuous neural signals such as EEG. However, those methods have not generally been well adapted to point processes. Here, we use our recent results on correlation distortions in multivariate Linear-Nonlinear-Poisson spiking neuron models to derive generalized Yule-Walker-type equations for fitting "hidden" Multivariate Autoregressive models. We use this new framework to perform Granger causality analysis in order to extract the directed information flow pattern in networks of simulated spiking neurons. We discuss the relative merits and limitations of the new method.
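Classical VAR-based Granger causality on binned spike counts, a simpler relative of the hidden-MVAR approach described above, can be sketched as follows. The function names and the pairwise log-ratio statistic are generic textbook constructions, not the paper's formulation.

```python
import numpy as np

def fit_var(X, p):
    """Least-squares fit of a VAR(p) model X[t] = sum_k A_k X[t-k] + e[t];
    X is a (T, n) array, e.g. binned spike counts of n neurons."""
    T, n = X.shape
    Y = X[p:]
    Z = np.hstack([X[p - k:T - k] for k in range(1, p + 1)])  # lagged regressors
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = Y - Z @ A
    return A, np.atleast_2d(np.cov(resid.T))

def granger(X, i, j, p=2):
    """Granger statistic for 'j -> i': log ratio of neuron i's residual
    variance when neuron j's past is removed vs. included."""
    _, S_full = fit_var(X, p)
    keep = [k for k in range(X.shape[1]) if k != j]
    _, S_red = fit_var(X[:, keep], p)
    return np.log(S_red[keep.index(i), keep.index(i)] / S_full[i, i])
```

Applied to all ordered pairs, this yields the kind of directed information-flow pattern the abstract refers to; the paper's contribution is to make such fits valid for point-process data.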
Spike train statistics and Gibbs distributions
This paper is based on a lecture given at the LACONEU summer school, Valparaiso, January 2012. We introduce Gibbs distributions in a general setting, including non-stationary dynamics, and then present three examples of such Gibbs distributions in the context of neural network spike train statistics: (i) maximum entropy models with spatio-temporal constraints; (ii) Generalized Linear Models; (iii) conductance-based Integrate-and-Fire models with chemical synapses and gap junctions.
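For the stationary case, the maximum-entropy construction in (i) can be written compactly; this is the standard textbook form, not a result specific to these lecture notes:

% Maximum-entropy (Gibbs) distribution over spike blocks \omega, with
% spatio-temporal constraint functions \phi_k (e.g. firing rates, pairwise
% and time-delayed correlations) and Lagrange multipliers \lambda_k:
\[
P(\omega) = \frac{1}{Z} \exp\!\Big( \sum_k \lambda_k \, \phi_k(\omega) \Big),
\qquad
Z = \sum_{\omega} \exp\!\Big( \sum_k \lambda_k \, \phi_k(\omega) \Big),
\]
% with the \lambda_k chosen so that the model averages
% \langle \phi_k \rangle match the empirically measured ones.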
Feed-forward and recurrent inhibition for compressing and classifying high dynamic range biosignals in spiking neural network architectures
Neuromorphic processors that implement Spiking Neural Networks (SNNs) using mixed-signal analog/digital circuits represent a promising technology for closed-loop real-time processing of biosignals. As in biology, to minimize power consumption, the silicon neuron circuits are configured to fire with a limited dynamic range and with maximum firing rates restricted to a few tens or hundreds of hertz. However, biosignals can have a very large dynamic range, so encoding them into spikes without saturating the neuron outputs remains an open challenge. In this work, we present a biologically inspired strategy for compressing this high dynamic range in SNN architectures, using three adaptation mechanisms ubiquitous in the brain: spike-frequency adaptation at the single-neuron level, feed-forward inhibitory connections from neurons in the input layer, and excitatory-inhibitory (E-I) balance via recurrent inhibition among neurons in the output layer. We apply this strategy to input biosignals encoded using both an asynchronous delta modulation method and an energy-based pulse-frequency modulation method. We validate the approach in silico, simulating a simple network applied to a gesture classification task on surface EMG recordings. To appear in the IEEE BioCAS 2023 proceedings.
Shifts in Coding Properties and Maintenance of Information Transmission during Adaptation in Barrel Cortex
Neuronal responses to ongoing stimulation in many systems change over time, or “adapt.” Despite the ubiquity of adaptation, its effects on the stimulus information carried by neurons are often unknown. Here we examine how adaptation affects sensory coding in barrel cortex. We used spike-triggered covariance analysis of single-neuron responses to continuous, rapidly varying vibrissa motion stimuli, recorded in anesthetized rats. Changes in stimulus statistics induced spike rate adaptation over hundreds of milliseconds. Vibrissa motion encoding changed with adaptation as follows. In every neuron that showed rate adaptation, the input–output tuning function scaled with the changes in stimulus distribution, allowing the neurons to maintain the quantity of information conveyed about stimulus features. A single neuron that did not show rate adaptation also lacked input–output rescaling and did not maintain information across changes in stimulus statistics. Therefore, in barrel cortex, rate adaptation occurs on a slow timescale relative to the features driving spikes and is associated with gain rescaling matched to the stimulus distribution. Our results suggest that adaptation enhances tactile representations in primary somatosensory cortex, where they could directly influence perceptual decisions.
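A minimal spike-triggered covariance analysis of the kind used above can be sketched as follows, for a 1-D stimulus with generic variable names; the adaptation-specific analysis of the paper is not reproduced here.

```python
import numpy as np

def spike_triggered_covariance(stimulus, spikes, window):
    """Spike-triggered average (STA) and covariance (STC) for a 1-D stimulus
    and a binary spike train on the same time grid.  Eigenvectors of
    dC = C_spike - C_prior with large |eigenvalue| are candidate stimulus
    features that drive (or suppress) spiking."""
    spike_idx = np.where(spikes[window:])[0] + window   # spikes with full history
    ensemble = np.array([stimulus[i - window:i] for i in spike_idx])
    sta = ensemble.mean(axis=0)
    prior = np.array([stimulus[i - window:i] for i in range(window, len(stimulus))])
    dC = np.cov(ensemble.T) - np.cov(prior.T)
    eigvals, eigvecs = np.linalg.eigh(dC)               # ascending eigenvalues
    return sta, eigvals, eigvecs
```

In an adaptation study like the one above, such an analysis would be run separately per stimulus-statistics epoch, and the recovered filters and input-output functions compared across epochs.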
Synthesizing cognition in neuromorphic electronic systems
The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a “soft state machine” running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina.
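The soft winner-take-all computation at the heart of this approach can be illustrated with a minimal rate-based sketch; the gains and dynamics are illustrative, and the actual system uses spiking silicon neurons rather than rate units.

```python
import numpy as np

def soft_wta(inputs, alpha=1.2, beta=0.5, dt=0.1, steps=500):
    """Rate-based soft winner-take-all: each unit excites itself (gain alpha)
    and all units share global recurrent inhibition (gain beta).  With
    alpha > 1 the strongest input is amplified while weaker ones are
    suppressed -- a rate sketch of the active gain, signal restoration, and
    multistability properties the abstract relies on."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        inhibition = beta * x.sum()
        drive = inputs + alpha * x - inhibition
        x += dt * (-x + np.maximum(drive, 0.0))   # Euler step, rectified rates
    return x
```

Because the settled state depends on recurrent gain and not just the instantaneous input, coupling a few such subnets with sparse connections is enough to hold and switch discrete states, which is what the "soft state machine" construction exploits.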
Blindfold learning of an accurate neural metric
The brain has no direct access to physical stimuli, but only to the spiking activity evoked in sensory organs. It is unclear how the brain can structure its representation of the world based on differences between those noisy, correlated responses alone. Here we show how to build a distance map of responses from the structure of the population activity of retinal ganglion cells, allowing for the accurate discrimination of distinct visual stimuli from the retinal response. We introduce the Temporal Restricted Boltzmann Machine to learn the spatiotemporal structure of the population activity, and use this model to define a distance between spike trains. We show that this metric outperforms existing neural distances at discriminating pairs of stimuli that are barely distinguishable. The proposed method provides a generic and biologically plausible way to learn to associate similar stimuli based on their spiking responses, without any other knowledge of these stimuli.
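As an example of the "existing neural distances" that learned metrics like this are benchmarked against, here is a minimal sketch of the van Rossum spike-train distance; it is a standard metric from the literature, not the Temporal RBM metric introduced above.

```python
import numpy as np

def van_rossum_distance(t1, t2, tau, dt=0.001):
    """van Rossum (2001) spike-train distance: convolve each train with a
    causal exponential kernel exp(-t/tau), then take the L2 distance between
    the filtered traces.  tau sets the timescale on which spike timing
    differences matter."""
    t_max = max(max(t1, default=0.0), max(t2, default=0.0)) + 5.0 * tau
    t = np.arange(0.0, t_max, dt)
    def filtered(spike_times):
        trace = np.zeros_like(t)
        for s in spike_times:
            m = t >= s
            trace[m] += np.exp(-(t[m] - s) / tau)
        return trace
    return np.sqrt(np.sum((filtered(t1) - filtered(t2)) ** 2) * dt / tau)
```

Metrics of this kind fix the kernel by hand; the point of the abstract is that the relevant notion of similarity can instead be learned from the population activity itself.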
Neural activity classification with machine learning models trained on interspike interval series data
The flow of information through the brain is reflected in the activity patterns of neural cells. Indeed, these firing patterns are widely used as input data to predictive models that relate stimuli and animal behavior to the activity of a population of neurons. However, relatively little attention has been paid to single-neuron spike trains as predictors of cell or network properties in the brain. In this work, we introduce an approach to neuronal spike train data mining which enables effective classification and clustering of neuron types and network activity states based on single-cell spiking patterns. This approach centers on applying state-of-the-art time-series classification and clustering methods to sequences of interspike intervals recorded from single neurons. We demonstrate good performance of these methods in tasks involving classification of neuron type (e.g. excitatory vs. inhibitory cells) and/or neural circuit activity state (e.g. awake vs. REM vs. non-REM sleep) on an open-access cortical spiking activity dataset.
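The idea can be illustrated with a deliberately simplified sketch: summary ISI features plus a 1-nearest-neighbour classifier stand in for the state-of-the-art time-series methods the abstract applies, and all feature choices here are illustrative.

```python
import numpy as np

def isi_features(spike_times):
    """Compact features of an interspike-interval (ISI) sequence: mean ISI,
    coefficient of variation (CV), and local variation (LV).  Regular firing
    gives CV, LV near 0; Poisson-like firing gives values near 1."""
    isi = np.diff(np.asarray(spike_times, dtype=float))
    mean = isi.mean()
    cv = isi.std() / mean
    lv = np.mean(3.0 * np.diff(isi) ** 2 / (isi[:-1] + isi[1:]) ** 2)
    return np.array([mean, cv, lv])

def classify_1nn(train_feats, train_labels, test_feat):
    """1-nearest-neighbour classification in z-scored feature space."""
    mu = train_feats.mean(axis=0)
    sd = train_feats.std(axis=0) + 1e-12
    d = np.linalg.norm((train_feats - mu) / sd - (test_feat - mu) / sd, axis=1)
    return train_labels[np.argmin(d)]
```

The methods in the abstract operate on the full ISI sequence rather than on three summary numbers, but the workflow is the same: featurize (or compare) single-cell ISI series, then classify or cluster.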