Neural population coding: combining insights from microscopic and mass signals
Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states affect local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches are beginning to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.
Spatial processing of conspecific signals in weakly electric fish: from sensory image to neural population coding
In this dissertation, I examine how an animal's nervous system encodes spatially realistic conspecific signals in its environment and how the encoding mechanisms support behavioral sensitivity. I begin by modeling changes in the electrosensory signals exchanged by weakly electric fish in a social context. For this behavior, I estimate how the spatial structure of conspecific stimuli influences sensory responses at the electroreceptive periphery. I then quantify how space is represented in the hindbrain, specifically in the primary sensory area called the electrosensory lateral line lobe. I show that behavioral sensitivity is influenced by the heterogeneous properties of the pyramidal cell population. I further demonstrate that this heterogeneity serves to begin segregating spatial and temporal information early in the sensory pathway. Lastly, I characterize the accuracy of spatial coding in this network and predict the role of network elements, such as correlated noise and feedback, in shaping the spatial information. My research provides a comprehensive understanding of spatial coding in the first stages of sensory processing in this system and allows us to better understand how network dynamics shape coding accuracy.
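The role of correlated noise in coding accuracy is often formalized with linear Fisher information, I = f'(θ)ᵀ Σ⁻¹ f'(θ). The sketch below is a generic illustration of that quantity under assumed tuning slopes and a uniform-correlation noise covariance; it is not the model developed in this dissertation, and all parameter values are invented.

```python
import numpy as np

# Generic sketch: linear Fisher information with and without correlated
# noise. Tuning slopes and the correlation value are illustrative.
rng = np.random.default_rng(0)
n = 20                                   # number of model neurons
fprime = rng.normal(size=n)              # tuning-curve slopes at theta (arbitrary units)

# Independent noise: identity covariance.
sigma_ind = np.eye(n)

# Correlated noise: uniform pairwise correlation c = 0.2.
c = 0.2
sigma_corr = (1 - c) * np.eye(n) + c * np.ones((n, n))

# I = f'^T Sigma^{-1} f' for each noise model.
info_ind = float(fprime @ np.linalg.solve(sigma_ind, fprime))
info_corr = float(fprime @ np.linalg.solve(sigma_corr, fprime))
print(info_ind, info_corr)
```

Comparing the two values shows how a shared noise component changes the information carried by the same tuning slopes.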
Neuromorphic Sensory Integration for Combining Sound Source Localization and Collision Avoidance
Animals combine various sensory cues with previously acquired knowledge to safely travel towards a target destination. In close analogy to biological systems, we propose a neuromorphic system which decides, based on auditory and visual input, how to reach a sound source without collisions. The development of this sensory integration system, which identifies the shortest possible path, is a key achievement towards autonomous robotics. The proposed neuromorphic system comprises two event-based sensors (the eDVS for vision and the NAS for audition) and the SpiNNaker processor. Open-loop experiments were performed to evaluate the system's performance. In the presence of acoustic stimulation alone, the heading direction points to the direction of the sound source with a Pearson correlation coefficient of 0.89. When visual input is introduced into the network, the heading direction always points to the direction of null optical flow closest to the sound source. Hence, the sensory integration network is able to find the shortest path to the sound source while avoiding obstacles. This work shows that a simple, task-dependent mapping of sensory information can lead to highly complex and robust decisions.
Funding: Ministerio de Economía y Competitividad TEC2016-77785-
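The reported agreement between heading and source direction is a Pearson correlation. As a toy illustration of that measure, the coefficient can be computed as below; the angle values are invented for the example, not taken from the experiments.

```python
import numpy as np

# Hypothetical sketch: Pearson correlation between sound-source azimuth
# and the system's decoded heading. Angles are illustrative data only.
source_az = np.array([-60., -30., 0., 30., 60.])    # source azimuth (deg)
heading = np.array([-55., -28., 3., 27., 58.])      # decoded heading (deg)

# Off-diagonal entry of the 2x2 correlation matrix is the Pearson r.
r = np.corrcoef(source_az, heading)[0, 1]
print(round(r, 3))
```

A value near 1 indicates the heading tracks the source direction almost linearly, as in the acoustic-only condition.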
A point process framework for modeling electrical stimulation of the auditory nerve
Model-based studies of auditory nerve responses to electrical stimulation can provide insight into the functioning of cochlear implants. Ideally, these studies can identify limitations in sound processing strategies and lead to improved methods for providing sound information to cochlear implant users. To accomplish this, models must accurately describe auditory nerve spiking while avoiding excessive complexity that would preclude large-scale simulations of populations of auditory nerve fibers and obscure insight into the mechanisms that influence neural encoding of sound information. In this spirit, we develop a point process model of the auditory nerve that provides a compact and accurate description of neural responses to electric stimulation. Inspired by the framework of generalized linear models, the proposed model consists of a cascade of linear and nonlinear stages. We show how each of these stages can be associated with biophysical mechanisms and related to models of neuronal dynamics. Moreover, we derive a semi-analytical procedure that uniquely determines each parameter in the model on the basis of fundamental statistics from recordings of single fiber responses to electric stimulation, including threshold, relative spread, jitter, and chronaxie. The model also accounts for refractory and summation effects that influence the responses of auditory nerve fibers to high pulse rate stimulation. Throughout, we compare model predictions to published physiological data and explain differences in auditory nerve responses to high and low pulse rate stimulation. We close by performing an ideal observer analysis of simulated spike trains in response to sinusoidally amplitude modulated stimuli and find that carrier pulse rate does not affect modulation detection thresholds.
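A cascade of linear and nonlinear stages feeding a point process can be sketched as follows: filter the pulse train, pass the result through a nonlinearity to get a conditional intensity, then draw spikes with an absolute refractory period. This is a generic linear-nonlinear-Bernoulli simulation with invented parameter values, not the paper's calibrated model.

```python
import numpy as np

# Generic linear-nonlinear point-process sketch (GLM-style cascade).
# All parameters are illustrative.
rng = np.random.default_rng(1)
dt = 1e-4                                 # time step (s)
T = int(0.1 / dt)                         # 100 ms simulation
pulse_rate = 1000.0                       # stimulation pulses per second
stim = (rng.random(T) < pulse_rate * dt).astype(float)  # random pulse train

# Linear stage: exponential filter of the pulse train.
tau = 5e-4
t_k = np.arange(0, 5 * tau, dt)
kernel = np.exp(-t_k / tau)
drive = np.convolve(stim, kernel)[:T]

# Nonlinear stage: exponential link gives a conditional intensity (spikes/s).
lam = 50.0 * np.exp(2.0 * drive)

# Point-process stage: Bernoulli spiking with a 1 ms absolute refractory period.
refrac = int(1e-3 / dt)
spikes = np.zeros(T, dtype=int)
last = -refrac
for t in range(T):
    if t - last >= refrac and rng.random() < lam[t] * dt:
        spikes[t] = 1
        last = t
print(spikes.sum())
```

The refractory bookkeeping in the loop is the simplest stand-in for the refractory effects the model accounts for at high pulse rates.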
J Neurosci
The brainstem auditory pathway is obligatory for all aural information. Brainstem auditory neurons must encode the level and timing of sounds, as well as their time-dependent spectral properties, the fine structure, and envelope, which are essential for sound discrimination. This study focused on envelope coding in the two cochlear nuclei of the barn owl, nucleus angularis (NA) and nucleus magnocellularis (NM). NA and NM receive input from bifurcating auditory nerve fibers and initiate processing pathways specialized in encoding interaural time (ITD) and level (ILD) differences, respectively. We found that NA neurons, although unable to accurately encode stimulus phase, lock more strongly to the stimulus envelope than NM units. The spectrotemporal receptive fields (STRFs) of NA neurons exhibit a pre-excitatory suppressive field. Using multilinear regression analysis and computational modeling, we show that this feature of STRFs can account for enhanced across-trial response reliability, by locking spikes to the stimulus envelope. Our findings indicate a dichotomy in envelope coding between the time and intensity processing pathways as early as the level of the cochlear nuclei. This allows the ILD processing pathway to encode envelope information with greater fidelity than the ITD processing pathway. Furthermore, we demonstrate that the properties of the STRFs of the neurons can be quantitatively related to spike timing reliability.
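Locking to a periodic envelope is commonly quantified with vector strength, the magnitude of the mean phase vector of the spikes (1 = perfect phase locking, 0 = none). A minimal sketch with synthetic spike times, assuming a hypothetical 100 Hz sinusoidal envelope:

```python
import numpy as np

# Sketch: vector strength as a measure of envelope locking.
# Spike times are synthetic, not the study's recordings.
rng = np.random.default_rng(2)
f_env = 100.0                                          # envelope frequency (Hz)
# One spike per cycle, jittered around a fixed envelope phase:
spike_times = np.arange(0, 1, 1 / f_env) + rng.normal(0, 5e-4, 100)

# Vector strength = |mean of unit phase vectors|.
phases = 2 * np.pi * f_env * spike_times
vs = np.abs(np.mean(np.exp(1j * phases)))
print(round(vs, 3))
```

Stronger envelope locking, as reported for NA relative to NM units, corresponds to a vector strength closer to 1.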
Blindfold learning of an accurate neural metric
The brain has no direct access to physical stimuli, but only to the spiking activity evoked in sensory organs. It is unclear how the brain can structure its representation of the world based on differences between those noisy, correlated responses alone. Here we show how to build a distance map of responses from the structure of the population activity of retinal ganglion cells, allowing for the accurate discrimination of distinct visual stimuli from the retinal response. We introduce the Temporal Restricted Boltzmann Machine to learn the spatiotemporal structure of the population activity, and use this model to define a distance between spike trains. We show that this metric outperforms existing neural distances at discriminating pairs of stimuli that are barely distinguishable. The proposed method provides a generic and biologically plausible way to learn to associate similar stimuli based on their spiking responses, without any other knowledge of these stimuli.
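For context, the simplest kind of neural distance a learned metric is compared against is a binned Euclidean distance between population responses. The sketch below shows that baseline (not the paper's TRBM-based metric) on synthetic binary spike matrices.

```python
import numpy as np

# Baseline sketch: binned Euclidean distance between two population
# responses, each a (neurons x time-bins) binary array. Synthetic data.
def binned_distance(resp_a, resp_b):
    """Euclidean distance between two (neurons x time-bins) arrays."""
    return float(np.linalg.norm(np.asarray(resp_a) - np.asarray(resp_b)))

rng = np.random.default_rng(3)
r1 = rng.integers(0, 2, size=(30, 20))   # 30 cells, 20 time bins
r2 = rng.integers(0, 2, size=(30, 20))
print(binned_distance(r1, r1), binned_distance(r1, r2))
```

Such a metric ignores the correlation structure of the population, which is exactly what a model-based distance is designed to exploit.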
Processing Submillisecond Timing Differences in a Model Electrosensory System
Perception of sensory cues requires peripheral encoding followed by extraction of behaviorally relevant signal components by central neurons. Some sensory systems can detect temporal information with submillisecond accuracy, despite these signals occurring faster than the approximately 1 ms timescale of neuronal firing. In sound localization, the best-studied example of this phenomenon, there are at least two distinct mechanisms for detecting submillisecond timing differences, indicating that multiple solutions to this fundamental problem exist. I investigated mechanisms for processing submillisecond timing differences by studying electrosensory processing in a time-coding expert, the mormyrid weakly electric fish, which can detect submillisecond differences in the duration of electric signals.
First, I measured responses of peripheral receptors to stimuli of different durations. I found that each unit responded preferentially to longer stimuli, but with response thresholds that varied among units within the behaviorally relevant range of durations. This variability establishes a population code operating at near threshold intensities in which the number and identity of responding receptors represents duration. At higher stimulus intensities all units respond independent of duration, rendering the population code obsolete. Importantly, peripheral receptors respond either to the start or end of a signal. Thus, stimulus duration is also represented by a temporal code, as a difference in spike times between receptors.
Next, I investigated the central mechanism for detection of submillisecond spike time differences by recording from time comparator neurons (Small Cells) in the midbrain. Recording from Small Cells is challenging because their somas are small and relatively inaccessible. I therefore designed a novel method using retrograde labeling to directly visualize and record from Small Cells in vivo. I showed that patterns of duration tuning vary among Small Cells due to a combination of blanking inhibition corresponding to one edge of a stimulus and variably delayed excitation corresponding to one or both edges of a stimulus. Other circuits that detect submillisecond timing differences rely either on precisely timed inhibition or on delay-line coincidence detection. I demonstrate a novel mechanism by which mormyrids combine delay-line coincidence detection with precisely timed blanking inhibition to establish diverse patterns of duration tuning among a population of time comparators.
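The peripheral population code described above, in which the number of responding receptors represents duration near threshold, can be sketched directly: receptors with heterogeneous duration thresholds respond once the stimulus outlasts their threshold. Threshold values below are invented for illustration.

```python
import numpy as np

# Sketch of a duration population code: heterogeneous per-receptor
# duration thresholds, so the responder count encodes duration.
rng = np.random.default_rng(4)
thresholds = np.sort(rng.uniform(0.1, 10.0, 50))   # ms, 50 model receptors

def responders(duration_ms):
    """Number of receptors whose duration threshold is exceeded."""
    return int(np.sum(thresholds <= duration_ms))

counts = [responders(d) for d in (0.5, 2.0, 8.0)]
print(counts)
```

Longer stimuli recruit monotonically more receptors, which is why the code saturates (and becomes uninformative) at intensities where every unit responds regardless of duration.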
Rate and timing of cortical responses driven by separate sensory channels
The sense of touch comprises multiple sensory channels, each of which conveys characteristic signals during interactions with objects. These neural signals must then be integrated in such a way that behaviorally relevant information about the objects is preserved. To understand this process of integration, we implement a simple computational model that describes how the responses of neurons in somatosensory cortex (recorded from awake, behaving monkeys) are shaped by the peripheral input, reconstructed using simulations of neuronal populations that reproduce natural spiking responses in the nerve with millisecond precision. First, we find that the strength of cortical responses is driven by one population of nerve fibers (rapidly adapting), whereas the timing of cortical responses is shaped by the other (Pacinian). Second, we show that input from these sensory channels is integrated in an optimal fashion that exploits the disparate response behaviors of the different fiber types.
Work-in-Progress: A Neuromorphic Approach of the Sound Source Localization Task in Real-Time Embedded Systems
Autonomous robots have become a very popular topic within the artificial intelligence field. These systems are able to perform difficult or risky tasks that could be dangerous when done by humans or trained animals. Vision is commonly considered the most relevant input sensor for autonomous robots and tracking systems. However, auditory information is also important in specific situations where vision cannot provide any useful information for navigation. In this work, a spike-based model of the medial superior olive, the auditory brainstem nucleus that compares the timing of inputs from the two ears, has been implemented in reconfigurable hardware to perform sound source localization in real time. Future work will focus on integrating this information with vision in order to achieve a fully bio-inspired autonomous tracking system.
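The computation the medial superior olive is modeled as performing, delay-line coincidence detection, is equivalent to finding the interaural lag that maximizes the cross-correlation of the two ear signals. The sketch below illustrates that idea on synthetic signals; it is not the spike-based hardware model described in the work.

```python
import numpy as np

# Sketch: estimate an interaural time difference (ITD) as the lag that
# maximizes the cross-correlation of the two ear signals. Synthetic data.
rng = np.random.default_rng(6)
fs = 48000                          # sample rate (Hz)
left = rng.normal(size=4096)        # left-ear signal (white noise)
true_itd = 12                       # delay in samples (~250 us at 48 kHz)
right = np.roll(left, true_itd)     # right ear hears a delayed copy

# Scan candidate lags, like a bank of delay-line coincidence detectors.
max_lag = 48
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [float(np.dot(left, np.roll(right, -k))) for k in lags]
est = int(lags[int(np.argmax(xcorr))])
print(est)
```

Each candidate lag plays the role of one coincidence-detector channel; the winning channel reports the source's ITD, from which azimuth can be derived.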