18,929 research outputs found
Evaluation of Neuromorphic Spike Encoding of Sound Using Information Theory
The problem of spike encoding of sound consists in transforming a sound waveform into spikes. It is of interest in many domains, including the development of audio-based spiking neural networks, where it is the first and most crucial stage of processing. Many algorithms have been proposed to perform spike encoding of sound. However, a systematic approach to quantitatively evaluate their performance is currently lacking. We propose the use of an information-theoretic framework to solve this problem. Specifically, we evaluate the coding efficiency of four spike encoding algorithms on two coding tasks that consist of coding the fundamental characteristics of sound: frequency and amplitude. The algorithms investigated are: Independent Spike Coding, Send-on-Delta coding, Ben's Spiker Algorithm, and Leaky Integrate-and-Fire coding. Using the tools of information theory, we estimate the information that the spikes carry about relevant aspects of an input stimulus. We find disparities in the coding efficiencies of the algorithms, with Leaky Integrate-and-Fire coding performing best. The information-theoretic analysis of their performance on these coding tasks provides insight into the encoding of richer and more complex sound stimuli.
Comment: 10 pages, 7 figures, internal report
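For a concrete picture of what such encoders do, the following is a minimal Python sketch of two of the schemes named above, Send-on-Delta and Leaky Integrate-and-Fire encoding. It is an illustrative toy, not the authors' implementation; the thresholds, leak, and gain values are arbitrary assumptions.

```python
# Minimal sketches of two spike-encoding schemes named in the abstract:
# send-on-delta and leaky integrate-and-fire (LIF) encoding. Parameters
# and signatures are illustrative, not the authors' implementation.
import numpy as np

def send_on_delta(x, delta=0.05):
    """Emit +1/-1 events whenever the signal moves by more than `delta`
    from the value at the last emitted event; 0 otherwise."""
    events = np.zeros(len(x), dtype=int)
    ref = x[0]
    for t in range(1, len(x)):
        if x[t] - ref >= delta:
            events[t], ref = 1, x[t]
        elif ref - x[t] >= delta:
            events[t], ref = -1, x[t]
    return events

def lif_encode(x, threshold=1.0, leak=0.95, gain=0.5):
    """Leaky integrate-and-fire encoding: accumulate the rectified input
    into a leaky membrane potential and spike on threshold crossing."""
    v = 0.0
    spikes = np.zeros(len(x), dtype=int)
    for t, sample in enumerate(x):
        v = leak * v + gain * max(sample, 0.0)  # leaky integration
        if v >= threshold:
            spikes[t] = 1
            v = 0.0                              # reset after the spike
    return spikes

# Example: encode 20 ms of a 440 Hz tone sampled at 16 kHz.
fs = 16000
t = np.arange(0, 0.02, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)
print(send_on_delta(tone).sum(), lif_encode(tone).sum())
```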
Python for Information Theoretic Analysis of Neural Data
Information theory, the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems as stochastic communication channels and gain valuable, quantitative insights into their sensory coding function. These techniques provide results on how neurons encode stimuli in a way which is independent of any specific assumptions on which part of the neuronal response is signal and which is noise, and they can be usefully applied even to highly non-linear systems where traditional techniques fail. In this article, we describe our work and experiences using Python for information theoretic analysis. We outline some of the algorithmic, statistical and numerical challenges in the computation of information theoretic quantities from neural data. In particular, we consider the problems arising from limited sampling bias and from calculation of maximum entropy distributions in the presence of constraints representing the effects of different orders of interaction in the system. We explain how and why using Python has allowed us to significantly improve the speed and domain of applicability of the information theoretic algorithms, allowing analysis of data sets characterized by larger numbers of variables. We also discuss how our use of Python is facilitating integration with collaborative databases and centralised computational resources
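To make the limited-sampling problem mentioned above concrete, here is a minimal plug-in (histogram) estimator of the stimulus-response mutual information in Python. It is a generic illustration, not the authors' library; the function name, binning scheme, and bin count are assumptions made for the example.

```python
# A minimal plug-in (histogram) estimator of the mutual information
# I(S;R) between a discrete stimulus S and a discretized response R.
# This is the naive estimator whose limited-sampling bias the article
# discusses; names and binning choices are illustrative.
import numpy as np

def plugin_mutual_information(stim, resp, n_bins=8):
    """Estimate I(S;R) in bits from paired stimulus labels and
    continuous responses, by binning the responses."""
    resp_binned = np.digitize(resp, np.histogram_bin_edges(resp, bins=n_bins))
    stim_ids = {s: i for i, s in enumerate(np.unique(stim))}
    joint = np.zeros((len(stim_ids), n_bins + 2))
    for s, r in zip(stim, resp_binned):
        joint[stim_ids[s], r] += 1
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)
    pr = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

# Example: two stimulus classes with noisy responses.
rng = np.random.default_rng(0)
stim = rng.integers(0, 2, 1000)
resp = stim + 0.5 * rng.standard_normal(1000)
print(plugin_mutual_information(stim, resp))
```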
A toolbox for the fast information analysis of multiple-site LFP, EEG and spike train recordings
Background: Information theory is an increasingly popular framework for studying how the brain encodes sensory information. Despite its widespread use for the analysis of spike trains of single neurons and of small neural populations, its application to the analysis of other types of neurophysiological signals (EEGs, LFPs, BOLD) has remained relatively limited so far. This is due to the limited-sampling bias which affects calculation of information, to the complexity of the techniques to eliminate the bias, and to the lack of publicly available fast routines for the information analysis of multi-dimensional responses.
Results: Here we introduce a new C- and Matlab-based information theoretic toolbox, specifically developed for neuroscience data. This toolbox implements a novel computationally-optimized algorithm for estimating many of the main information theoretic quantities and bias correction techniques used in neuroscience applications. We illustrate and test the toolbox in several ways. First, we verify that these algorithms provide accurate and unbiased estimates of the information carried by analog brain signals (i.e. LFPs, EEGs, or BOLD) even when using limited amounts of experimental data. This test is important since existing algorithms were so far tested primarily on spike trains. Second, we apply the toolbox to the analysis of EEGs recorded from a subject watching natural movies, and we characterize the electrode locations, frequencies and signal features carrying the most visual information. Third, we explain how the toolbox can be used to break down the information carried by different features of the neural signal into distinct components reflecting different ways in which correlations between parts of the neural signal contribute to coding. We illustrate this breakdown by analyzing LFPs recorded from primary visual cortex during presentation of naturalistic movies.
Conclusion: The new toolbox presented here implements fast and data-robust computations of the most relevant quantities used in information theoretic analysis of neural data. The toolbox can be easily used within Matlab, the environment used by most neuroscience laboratories for the acquisition, preprocessing and plotting of neural data. It can therefore significantly enlarge the domain of application of information theory to neuroscience, and lead to new discoveries about the neural code.
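As a rough illustration of why bias correction matters (and not of the toolbox's own algorithms or API), one simple heuristic is to estimate the residual bias from surrogate data in which the stimulus labels are shuffled, so that the true information is zero, and subtract that estimate from the naive value. A hedged sketch, reusing the plugin_mutual_information helper defined in the previous example:

```python
# Shuffle-based gauge of the limited-sampling bias: information that
# survives after shuffling the stimulus labels is pure bias, so it is
# subtracted from the naive estimate. This is a generic heuristic, not
# the toolbox's algorithm; it assumes plugin_mutual_information from
# the sketch above is available in the same session.
import numpy as np

def shuffle_corrected_information(stim, resp, n_shuffles=50, seed=0):
    rng = np.random.default_rng(seed)
    naive = plugin_mutual_information(stim, resp)
    bias = np.mean([
        plugin_mutual_information(rng.permutation(stim), resp)
        for _ in range(n_shuffles)
    ])
    return naive - bias
```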
Simple, Efficient, and Neural Algorithms for Sparse Coding
Sparse coding is a basic task in many fields including signal processing, neuroscience and machine learning, where the goal is to learn a basis that enables a sparse representation of a given set of data, if one exists. Its standard formulation is as a non-convex optimization problem which is solved in practice by heuristics based on alternating minimization. Recent work has resulted in several algorithms for sparse coding with provable guarantees, but somewhat surprisingly these are outperformed by the simple alternating minimization heuristics. Here we give a general framework for understanding alternating minimization which we leverage to analyze existing heuristics and to design new ones also with provable guarantees. Some of these algorithms seem implementable on simple neural architectures, which was the original motivation of Olshausen and Field (1997a) in introducing sparse coding. We also give the first efficient algorithm for sparse coding that works almost up to the information theoretic limit for sparse recovery on incoherent dictionaries. All previous algorithms that approached or surpassed this limit run in time exponential in some natural parameter. Finally, our algorithms improve upon the sample complexity of existing approaches. We believe that our analysis framework will have applications in other settings where simple iterative algorithms are used.
Comment: 37 pages, 1 figure
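A minimal sketch of the generic alternating-minimization heuristic discussed above, assuming hard thresholding for the sparse-coding step and a least-squares fit for the dictionary update; this is not the paper's provable algorithm, and the sparsity level, initialization, and iteration count are illustrative.

```python
# Generic alternating minimization for sparse coding: alternate a
# sparse decoding step (hard thresholding of A^T Y) with a dictionary
# update (least-squares fit). Illustrative sketch only.
import numpy as np

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of each column, zero the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(-np.abs(z), axis=0)[:k]
    np.put_along_axis(out, idx, np.take_along_axis(z, idx, axis=0), axis=0)
    return out

def alternating_minimization(Y, n_atoms, k, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((Y.shape[0], n_atoms))
    A /= np.linalg.norm(A, axis=0)                 # unit-norm atoms
    for _ in range(n_iters):
        X = hard_threshold(A.T @ Y, k)             # sparse coding step
        A = Y @ np.linalg.pinv(X)                  # dictionary update step
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, X
```

Each iteration holds the dictionary fixed while re-estimating the sparse codes, then holds the codes fixed while refitting the dictionary, which is the two-step structure the paper's framework is designed to analyze.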
Neural correlates of the use of prior knowledge in predictive coding
Every day, we use our sensory organs to perceive the environment around us. However, our perception not only depends on sensory information, but also on information already present in our brains, i.e. prior knowledge acquired through previous experience. The idea that prior knowledge is required for efficient perception goes back to Hermann von Helmholtz (1867). He put forward the hypothesis that perception is a knowledge-driven inference process, in which prior knowledge allows us to infer the (uncertain) causes of our sensory inputs. According to the currently very prominent 'predictive coding theory' (e.g. Rao and Ballard, 1999; Friston, 2005, 2010; Hawkins and Blakeslee, 2005; Clark, 2012; Hohwy, 2013), this inference process is realized in our brains by using prior knowledge to build internal predictions for incoming information.
Despite the increasing popularity of predictive coding theory in the last decade (see Clark, 2012, and the comments to his article), previous research in the field has left out several important aspects: 1. the neural correlates of the use of prior knowledge are still largely unexplored; 2. neurophysiological evidence for the neural implementation of predictive coding is limited; and 3. assumption-free approaches to study predictive coding mechanisms are missing.
In the present work, I try to fill these gaps using three studies with magnetoencephalographic (MEG) recordings in human participants:
Study 1 (n = 48) investigates how prior knowledge from life-long experience influences perception. The results demonstrate that prediction errors induced by the violation of predictions based on life-long experience with faces are reflected in increased high-frequency gamma band activity (> 68 Hz).
For studies 2 and 3, neurophysiological analysis is combined with information-theoretic analysis methods. These allow the neural correlates of predictive coding to be investigated with only a few prior assumptions. In particular, the information-theoretic measure active information storage (AIS; Lizier et al., 2012; Wibral et al., 2014) can quantify how much information is maintained in neural activity (predictable information). I use AIS to study the neural correlates of activated prior knowledge in studies 2 and 3.
Study 2 (n = 52) assesses how prior knowledge is pre-activated in task-relevant states to become usable for predictions. I find that pre-activation of prior knowledge for predictions about faces increases alpha- and beta-band related predictable information, as measured by AIS, in content-specific brain areas.
Study 3 (n patients = 19; n controls = 19) explores whether mechanisms related to predictive coding are impaired in autism spectrum disorder (ASD). The results show that alpha- and beta-band related predictable information is reduced in the brains of ASD patients, in particular in the posterior part of the default mode network. These findings indicate a reduced use or precision of prior knowledge in ASD.
In summary, the results presented in this work illustrate the neural correlates of the use of prior knowledge in the predictive coding framework. They provide neurophysiological evidence for the link between prediction errors and fast neural activity (study 1, gamma band) as well as between predictions and slower neural activity (studies 2 and 3, alpha and beta bands). These findings are in line with a theoretical proposal for the neural implementation of predictive coding theory (Bastos et al., 2012). Further, through the application of AIS analysis (studies 2 and 3), the present work introduces the largely assumption-free use of information-theoretic measures to study the neural correlates of predictive coding in the human brain. In the future, analysis of predictable information as measured by AIS may be applied to a broad variety of experiments studying predictive coding, and also to research on neuropsychiatric disorders, as demonstrated here for ASD.
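For concreteness, active information storage as used in studies 2 and 3 is the mutual information between the recent past of a signal and its next sample. The following Python sketch shows a naive plug-in estimate on a discretized one-dimensional signal; it illustrates the definition only and is not the thesis's MEG analysis pipeline (history length, binning, and the AR(1) test signal are assumptions).

```python
# Plug-in sketch of active information storage (AIS): the mutual
# information between the k past samples of a signal and its next
# sample, after discretizing the signal into a few bins.
import numpy as np
from collections import Counter

def active_information_storage(x, k=2, n_bins=4):
    """Estimate AIS (bits) of a 1-D signal via quantile discretization."""
    cuts = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    b = np.digitize(x, cuts)
    past = [tuple(b[t - k:t]) for t in range(k, len(b))]
    nxt = [b[t] for t in range(k, len(b))]
    n = len(nxt)
    p_joint = Counter(zip(past, nxt))
    p_past, p_next = Counter(past), Counter(nxt)
    return float(sum(c / n * np.log2((c / n) /
                 ((p_past[p] / n) * (p_next[v] / n)))
                 for (p, v), c in p_joint.items()))

# AR(1) dynamics store predictable information; white noise stores ~none.
rng = np.random.default_rng(0)
noise = rng.standard_normal(5000)
ar1 = np.zeros(5000)
for t in range(1, 5000):
    ar1[t] = 0.8 * ar1[t - 1] + noise[t]
print(active_information_storage(ar1), active_information_storage(noise))
```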
Applications of Information Theory to Analysis of Neural Data
Information theory is a practical and theoretical framework developed for the study of communication over noisy channels. Its probabilistic basis and capacity to relate statistical structure to function make it ideally suited for studying information flow in the nervous system. It has a number of useful properties: it is a general measure sensitive to any relationship, not only linear effects; it has meaningful units which in many cases allow direct comparison between different experiments; and it can be used to study how much information can be gained by observing neural responses in single trials, rather than in averages over multiple trials. A variety of information theoretic quantities are commonly used in neuroscience (see the entry "Definitions of Information-Theoretic Quantities"). In this entry we review some applications of information theory in neuroscience to study encoding of information in both single neurons and neuronal populations.
Comment: 8 pages, 2 figures
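One of the properties listed above, sensitivity to any relationship rather than only linear effects, is easy to demonstrate numerically. The sketch below (with illustrative binning and sample size) compares the linear correlation and a binned mutual-information estimate for a quadratic dependence, where correlation sees nothing but mutual information does.

```python
# For y = x^2 with symmetric x, the linear correlation is near zero,
# yet the mutual information is clearly positive.
import numpy as np

def binned_mi(x, y, bins=10):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 20000)
y = x ** 2 + 0.05 * rng.standard_normal(20000)
print(np.corrcoef(x, y)[0, 1])   # ~0: no linear dependence detected
print(binned_mi(x, y))           # clearly > 0 bits
```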
Partial Information Decomposition as a Unified Approach to the Specification of Neural Goal Functions
In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example for such a motif is the canonical microcircuit of six-layered neo-cortex, which is repeated across cortical areas, and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID allows one to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information theoretic neural goal functions (predictive coding, infomax, coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and also provides a versatile approach to design new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy'. [...]
Comment: 21 pages, 4 figures, appendix
Partial information decomposition as a unified approach to the specification of neural goal functions
In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example for such a motif is the canonical microcircuit of six-layered neo-cortex, which is repeated across cortical areas, and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID allows one to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and also provides a versatile approach to design new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy', which builds on combining external input and prior knowledge in a synergistic manner. We suggest that this novel goal function may be highly useful in neural information processing.
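To make the unique/shared/synergistic terminology concrete, here is a small Python sketch of one specific PID, the original Williams-Beer decomposition based on the I_min redundancy measure, applied to a two-source XOR example in which all of the information about the target is synergistic. The measure choice and function names are assumptions made for illustration; the abstract discusses the PID framework in general, and other redundancy measures yield different decompositions.

```python
# Williams-Beer PID for two discrete sources X1, X2 and a target Y,
# using the I_min redundancy measure. Illustrative sketch only.
import numpy as np

def _mi(p_xy):
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])))

def pid_two_sources(p):
    """p[x1, x2, y]: joint distribution. Returns unique, shared, synergy (bits)."""
    p = p / p.sum()
    p_y = p.sum(axis=(0, 1))
    def specific_info(axis):
        # Specific information I(Y=y; X) for the source kept after summing `axis`.
        p_sy = p.sum(axis=axis)
        p_s = p_sy.sum(axis=1, keepdims=True)
        cond = np.divide(p_sy, p_s, out=np.zeros_like(p_sy), where=p_s > 0)
        out = np.zeros(len(p_y))
        for y in range(len(p_y)):
            if p_y[y] == 0:
                continue
            p_s_given_y = p_sy[:, y] / p_y[y]
            nz = p_s_given_y > 0
            out[y] = np.sum(p_s_given_y[nz] * np.log2(cond[nz, y] / p_y[y]))
        return out
    shared = float(np.sum(p_y * np.minimum(specific_info(axis=1),
                                           specific_info(axis=0))))
    mi_1 = _mi(p.sum(axis=1))                      # I(X1;Y)
    mi_2 = _mi(p.sum(axis=0))                      # I(X2;Y)
    mi_12 = _mi(p.reshape(-1, p.shape[2]))         # I((X1,X2);Y)
    return {"unique_1": mi_1 - shared, "unique_2": mi_2 - shared,
            "shared": shared, "synergy": mi_12 - mi_1 - mi_2 + shared}

# XOR example: neither source alone is informative, together they are.
p_xor = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p_xor[x1, x2, x1 ^ x2] = 0.25
print(pid_two_sources(p_xor))   # synergy ~1 bit, all other terms ~0
```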
- …