The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction
Stimulus dimensionality-reduction methods in neuroscience seek to identify a
low-dimensional space of stimulus features that affect a neuron's probability
of spiking. One popular method, known as maximally informative dimensions
(MID), uses an information-theoretic quantity known as "single-spike
information" to identify this space. Here we examine MID from a model-based
perspective. We show that MID is a maximum-likelihood estimator for the
parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical
single-spike information corresponds to the normalized log-likelihood under a
Poisson model. This equivalence implies that MID does not necessarily find
maximally informative stimulus dimensions when spiking is not well described as
Poisson. We provide several examples to illustrate this shortcoming, and derive
a lower bound on the information lost when spiking is Bernoulli in discrete
time bins. To overcome this limitation, we introduce model-based dimensionality
reduction methods for neurons with non-Poisson firing statistics, and show that
they can be framed equivalently in likelihood-based or information-theoretic
terms. Finally, we show how to overcome practical limitations on the number of
stimulus dimensions that MID can estimate by constraining the form of the
non-parametric nonlinearity in an LNP model. We illustrate these methods with
simulations and data from primate visual cortex.
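The equivalence described above can be illustrated numerically. Below is a minimal sketch (not the authors' code): it assumes a two-dimensional Gaussian stimulus, an exponential nonlinearity, and a quantile-binned histogram estimator of the single-spike information.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy LNP neuron: 2-D Gaussian stimulus, exponential nonlinearity, Poisson counts
T = 20000
X = rng.normal(size=(T, 2))
k_true = np.array([1.0, 0.0])                # true stimulus filter
rate = np.exp(0.5 * X @ k_true - 1.0)        # LNP firing rate per bin
spikes = rng.poisson(rate)

def single_spike_info(k, nbins=25):
    """Empirical single-spike information (bits/spike) from histogrammed
    projections: I = sum_x P(x|spike) * log2[ P(x|spike) / P(x) ]."""
    proj = X @ k
    edges = np.quantile(proj, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.searchsorted(edges, proj, side="right") - 1, 0, nbins - 1)
    p_x = np.bincount(idx, minlength=nbins) / T
    p_x_spk = np.bincount(idx, weights=spikes, minlength=nbins) / spikes.sum()
    mask = p_x_spk > 0
    return np.sum(p_x_spk[mask] * np.log2(p_x_spk[mask] / p_x[mask]))

info_true = single_spike_info(k_true)                 # model's own filter
info_wrong = single_spike_info(np.array([0.0, 1.0]))  # orthogonal filter
```

The filter favored by the Poisson likelihood (here the true one) also attains the higher empirical single-spike information, which is the equivalence the abstract establishes.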
Olfactory learning alters navigation strategies and behavioral variability in C. elegans
Animals adaptively adjust their behavioral responses to sensory input
depending on past experience. This flexible brain computation is crucial for
survival and of great interest in neuroscience. The nematode C. elegans
modulates its navigation behavior depending on the association of odor butanone
with food (appetitive training) or starvation (aversive training), and will
then climb up the butanone gradient or ignore it, respectively. However, the
exact change in navigation strategy in response to learning is still unknown.
Here we study the learned odor navigation in worms by combining precise
experimental measurement and a novel descriptive model of navigation. Our model
consists of two known navigation strategies in worms: biased random walk and
weathervaning. We infer weights on these strategies by applying the model to
worm navigation trajectories and the exact odor concentration each worm experiences.
Compared to naive worms, appetitively trained worms up-regulate the biased
random walk strategy, and aversively trained worms down-regulate the
weathervaning strategy. The statistical model accurately predicts the past
training condition from navigation data alone, outperforming the classical
chemotaxis metric. We find that behavioral variability is altered
by learning, such that worms are less variable after training compared to naive
ones. The model further predicts the learning-dependent response and
variability under optogenetic perturbation of the olfactory neuron
AWC. Lastly, we investigate neural circuits downstream from
AWC that are differentially recruited for learned odor-guided
navigation. Together, we provide a new paradigm to quantify flexible navigation
algorithms and pinpoint the underlying neural substrates.
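The two navigation strategies named in the abstract can be sketched as a toy agent-based model. Everything here is assumed for illustration (the gradient shape, turn probabilities, steering gain, and weights are not the paper's fitted values): the weight w_brw suppresses random turns while the sensed concentration rises, and w_wv steers the heading toward the gradient.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(w_brw, w_wv, n_steps=500, n_worms=200):
    """Mean up-gradient displacement of toy worms in a linear odor
    gradient C(x, y) = x, mixing two strategies:
      - biased random walk: suppress random turns while C is increasing
      - weathervaning: gradually steer the heading toward the gradient."""
    heading = rng.uniform(0, 2 * np.pi, n_worms)
    x = np.zeros(n_worms)
    dC = np.zeros(n_worms)
    for _ in range(n_steps):
        # biased random walk: turn (pirouette) less often while C increases
        p_turn = np.clip(0.05 * (1 - w_brw * np.tanh(dC)), 0, 1)
        turn = rng.random(n_worms) < p_turn
        heading = np.where(turn, rng.uniform(0, 2 * np.pi, n_worms), heading)
        # weathervaning: steer toward the gradient direction (0 rad)
        err = np.angle(np.exp(1j * -heading))
        heading = heading + w_wv * 0.1 * err + 0.05 * rng.normal(size=n_worms)
        dC = np.cos(heading)          # dC/dt along a unit step when C = x
        x += dC
    return x.mean()

trained = simulate(w_brw=1.0, w_wv=1.0)   # both strategies engaged
naive = simulate(w_brw=0.0, w_wv=0.0)     # unweighted random exploration
```

With both weights engaged the simulated worms climb the gradient far more effectively than with neither, mirroring the role the inferred weights play in the descriptive model.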
Correcting motion induced fluorescence artifacts in two-channel neural imaging
Imaging neural activity in a behaving animal presents unique challenges in
part because motion from an animal's movement creates artifacts in fluorescence
intensity time-series that are difficult to distinguish from neural signals of
interest. One approach to mitigating these artifacts is to image two channels:
one that captures an activity-dependent fluorophore, such as GCaMP, and another
that captures an activity-independent fluorophore, such as RFP. Because the
activity-independent channel contains the same motion artifacts as the
activity-dependent channel, but no neural signals, the two together can be used
to remove the artifacts. Existing approaches for this correction, such as
taking the ratio of the two channels, do not account for channel-independent
noise in the measured fluorescence. Moreover, no systematic comparison has been
made of existing approaches that use two-channel signals. Here, we present
Two-channel Motion Artifact Correction (TMAC), a method which seeks to remove
artifacts by specifying a generative model of the fluorescence of the two
channels as a function of motion artifact, neural activity, and noise. We
further present a novel method for evaluating ground-truth performance of
motion-correction algorithms by comparing the decodability of behavior from
two types of neural recordings: one containing both an activity-dependent and
an activity-independent fluorophore (GCaMP and RFP), and one in which both
fluorophores were activity-independent (GFP and RFP). A successful
motion-correction method should decode behavior from the first type of
recording, but not the second. We
use this metric to systematically compare five methods for removing motion
artifacts from fluorescence time traces. We decode locomotion from a
GCaMP-expressing animal 15x more accurately on average than from control when
using TMAC-inferred activity, outperforming all other motion-correction
methods tested.
Comment: 11 pages, 3 figures
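The two-channel idea can be seen in a small simulation. This is a hedged sketch, not TMAC itself: it uses a made-up multiplicative motion artifact and demonstrates only the simple ratio baseline that the abstract says TMAC improves upon.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5000

# assumed generative model (in the spirit of TMAC, not the paper's code):
# both channels share a multiplicative motion artifact m(t); only the green
# channel carries neural activity a(t); each channel adds its own noise
m = 1.0 + 0.5 * np.sin(np.linspace(0, 60, T)) + 0.05 * rng.normal(size=T)
a = 1.0 + 0.3 * np.convolve(rng.normal(size=T), np.ones(20) / 20, mode="same")
green = m * a + 0.02 * rng.normal(size=T)   # activity-dependent (e.g. GCaMP)
red = m + 0.02 * rng.normal(size=T)         # activity-independent (e.g. RFP)

def corr(p, q):
    return np.corrcoef(p, q)[0, 1]

ratio_est = green / red                     # the simple two-channel baseline
c_ratio = corr(ratio_est, a)                # artifact largely divided out
c_raw = corr(green, a)                      # dominated by motion artifact
```

The ratio recovers the activity far better than the raw green trace, but it still injects the red channel's independent noise into the estimate; modeling that noise explicitly is the gap TMAC's generative model is designed to close.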
Incremental Mutual Information: A New Method for Characterizing the Strength and Dynamics of Connections in Neuronal Circuits
Understanding the computations performed by neuronal circuits requires characterizing the strength and dynamics of the connections between individual neurons. This characterization is typically achieved by measuring the correlation in the activity of two neurons. We have developed a new measure for studying connectivity in neuronal circuits based on information theory, the incremental mutual information (IMI). By conditioning out the temporal dependencies in the responses of individual neurons before measuring the dependency between them, IMI improves on standard correlation-based measures in several important ways: 1) it has the potential to disambiguate statistical dependencies that reflect the connection between neurons from those caused by other sources (e.g., shared inputs or intrinsic cellular or network mechanisms) provided that the dependencies have appropriate timescales, 2) for the study of early sensory systems, it does not require responses to repeated trials of identical stimulation, and 3) it does not assume that the connection between neurons is linear. We describe the theory and implementation of IMI in detail and demonstrate its utility on experimental recordings from the primate visual system.
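A minimal plug-in version of the idea can be written for binary spike trains. The coupling, delay, and firing probabilities below are invented for illustration, and IMI as described in the abstract conditions on longer response histories than the single bin used here.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 50000

# hypothetical coupled pair: neuron Y spikes more when X spiked one bin ago
x = (rng.random(T) < 0.2).astype(int)
p_y = 0.05 + 0.3 * np.roll(x, 1)             # connection with a one-bin delay
y = (rng.random(T) < p_y).astype(int)

def cond_mi(a, b, c):
    """Plug-in estimate of I(a; b | c) in bits, for binary sequences."""
    joint = np.bincount(4 * a + 2 * b + c, minlength=8).reshape(2, 2, 2) / len(a)
    mi = 0.0
    for ci in (0, 1):
        p_c = joint[:, :, ci].sum()
        for ai in (0, 1):
            for bi in (0, 1):
                p = joint[ai, bi, ci]
                if p > 0:
                    mi += p * np.log2(p * p_c / (joint[ai, :, ci].sum()
                                                 * joint[:, bi, ci].sum()))
    return mi

# "incremental" flavor: measure the X -> Y dependency at a one-bin delay
# after conditioning on Y's own immediately preceding bin
imi = cond_mi(x[:-1], y[1:], y[:-1])
shuf = cond_mi(rng.permutation(x[:-1]), y[1:], y[:-1])   # null control
```

The coupled pair yields a clearly positive conditional information, while shuffling X destroys it, which is the signature of a dependency attributable to the connection rather than to Y's own dynamics.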
A bio-inspired image coder with temporal scalability
We present a novel bio-inspired and dynamic coding scheme for static images.
Our coder aims at reproducing the main steps of the visual stimulus processing
in the mammalian retina taking into account its time behavior. The main novelty
of this work is to show how to exploit the time behavior of the retina cells to
ensure, in a simple way, scalability and bit allocation. Our main source of
inspiration is the biologically plausible retina model called Virtual Retina.
Following a similar structure, our model has two stages. The
first stage is an image transform which is performed by the outer layers in the
retina. Here it is modelled by filtering the image with a bank of difference of
Gaussians with time-delays. The second stage is a time-dependent
analog-to-digital conversion which is performed by the inner layers in the
retina. Thanks to its conception, our coder enables scalability and bit
allocation across time. Also, our decoded images do not show annoying artefacts
such as ringing and block effects. As a whole, this article shows how to
capture the main properties of a biological system, here the retina, in order
to design a new efficient coder.
Comment: 12 pages; Advanced Concepts for Intelligent Vision Systems (ACIVS 2011)
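The first stage described above, a bank of difference-of-Gaussians filters, is easy to sketch. This is an illustrative approximation only: the scales, the center/surround ratio, and the coarse-to-fine ordering standing in for the time delays are all assumptions, and the real Virtual Retina-based transform includes temporal dynamics not modelled here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bank(image, sigmas=(1, 2, 4, 8), ratio=1.6):
    """Difference-of-Gaussians (center minus surround) responses, ordered
    coarse to fine; in the coder, each scale would be emitted with its own
    time delay, which is what provides the temporal scalability."""
    return [gaussian_filter(image, s) - gaussian_filter(image, ratio * s)
            for s in sorted(sigmas, reverse=True)]

img = np.random.default_rng(4).random((64, 64))
bands = dog_bank(img)

# DoG filters are band-pass, so a uniform image produces (near-)zero output,
# one reason this transform avoids blocky DC artifacts in the decoded image
flat_bands = dog_bank(np.full((64, 64), 3.0))
```

Because each band carries only a band-pass residue rather than raw intensity, truncating the transmission early still yields a coarse but artifact-free reconstruction.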
Size and emotion or depth and emotion? Evidence, using Matryoshka (Russian) dolls, of children using physical depth as a proxy for emotional charge
Background: The size and emotion effect is the tendency for children to draw people and other objects with a positive emotional charge larger than those with a negative or neutral charge. Here we explored the novel idea that drawing size might act as a proxy for depth (proximity). Methods: Forty-two children (aged 3-11 years) chose, from 2 sets of Matryoshka (Russian) dolls, a doll to represent a person with positive, negative or neutral charge, which they placed in front of themselves on a sheet of A3 paper. Results: We found that the children used proximity and doll size to indicate emotional charge. Conclusions: These findings are consistent with the notion that in drawings children use size as a proxy for physical closeness (proximity), as they attempt, with varying success, to place positively charged items closer to themselves and negatively or neutrally charged items further away.
Receptive Field Inference with Localized Priors
The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both spacetime and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov Chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets.
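The value of a locality prior can be demonstrated with a linear-Gaussian toy problem. This sketch fixes the locality envelope by hand rather than learning it by empirical Bayes as ALD does, and the receptive field shape, dimensions, and noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(5)
D, N = 40, 300

# hypothetical localized receptive field: a Gabor-like bump at position 20
t = np.arange(D)
k_true = np.exp(-(t - 20) ** 2 / 18) * np.sin(0.6 * t)

X = rng.normal(size=(N, D))                  # white-noise stimuli
y = X @ k_true + rng.normal(size=N)          # linear-Gaussian responses

# localized prior: zero-mean Gaussian whose variances follow a spatial
# envelope; ALD would learn the envelope's center and width by empirical
# Bayes, but here they are fixed by hand to keep the sketch short
envelope = np.exp(-(t - 20) ** 2 / (2 * 8.0 ** 2))
C_inv = np.diag(1.0 / (envelope + 1e-3))

k_ml = np.linalg.solve(X.T @ X, X.T @ y)            # maximum likelihood
k_map = np.linalg.solve(X.T @ X + C_inv, X.T @ y)   # MAP with localized prior

err_ml = np.linalg.norm(k_ml - k_true)
err_map = np.linalg.norm(k_map - k_true)
```

Shrinking the coefficients outside the envelope cuts estimator variance exactly where the true field is near zero, so the localized MAP estimate beats maximum likelihood on the same data.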
Intrinsic gain modulation and adaptive neural coding
In many cases, the computation of a neural system can be reduced to a
receptive field, or a set of linear filters, and a thresholding function, or
gain curve, which determines the firing probability; this is known as a
linear/nonlinear model. In some forms of sensory adaptation, these linear
filters and gain curve adjust very rapidly to changes in the variance of a
randomly varying driving input. An apparently similar but previously unrelated
issue is the observation of gain control by background noise in cortical
neurons: the slope of the firing rate vs current (f-I) curve changes with the
variance of background random input. Here, we show a direct correspondence
between these two observations by relating variance-dependent changes in the
gain of f-I curves to characteristics of the changing empirical
linear/nonlinear model obtained by sampling. In the case that the underlying
system is fixed, we derive expressions relating the change in gain with
respect to both mean and variance to the receptive fields obtained from
reverse correlation on a white-noise stimulus. Using two conductance-based
model neurons that display distinct gain modulation properties through a simple
change in parameters, we show that coding properties of both these models
quantitatively satisfy the predicted relationships. Our results describe how
both variance-dependent gain modulation and adaptive neural computation result
from intrinsic nonlinearity.
Comment: 24 pages, 4 figures, 1 supporting information
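The noise-dependent gain phenomenon mentioned above can be reproduced with a leaky integrate-and-fire neuron rather than the conductance-based models the abstract uses; the parameters below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

def lif_rate(I_mean, I_sigma, T=20.0, dt=1e-3):
    """Mean firing rate of a leaky integrate-and-fire neuron driven by a
    noisy current (Euler-Maruyama simulation, unit threshold)."""
    tau, v_th, v_reset = 0.02, 1.0, 0.0
    v, n_spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (I_mean - v) / tau + I_sigma * np.sqrt(dt) * rng.normal() / tau
        if v >= v_th:
            v, n_spikes = v_reset, n_spikes + 1
    return n_spikes / T

# with a subthreshold mean current, spiking is fluctuation-driven, so a
# larger input variance yields a higher rate at the same mean current
r_hi = lif_rate(0.8, 0.15)
r_lo = lif_rate(0.8, 0.02)
```

Raising the rate below threshold while leaving the far-suprathreshold regime largely unchanged is precisely what flattens the slope of the f-I curve, i.e. the variance-dependent gain modulation the abstract relates to the sampled linear/nonlinear model.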
Stimulus-dependent maximum entropy models of neural population codes
Neural populations encode information about their stimulus in a collective
fashion, by joint activity patterns of spiking and silence. A full account of
this mapping from stimulus to neural activity is given by the conditional
probability distribution over neural codewords given the sensory input. To be
able to infer a model for this distribution from large-scale neural recordings,
we introduce a stimulus-dependent maximum entropy (SDME) model---a minimal
extension of the canonical linear-nonlinear model of a single neuron, to a
pairwise-coupled neural population. The model is able to capture the
single-cell response properties as well as the correlations in neural spiking
due to shared stimulus and due to effective neuron-to-neuron connections. Here
we show that in a population of 100 retinal ganglion cells in the salamander
retina responding to temporal white-noise stimuli, dependencies between cells
play an important encoding role. As a result, the SDME model gives a more
accurate account of single cell responses and in particular outperforms
uncoupled models in reproducing the distributions of codewords emitted in
response to a stimulus. We show how the SDME model, in conjunction with static
maximum entropy models of population vocabulary, can be used to estimate
information-theoretic quantities like surprise and information transmission in
a neural population.
Comment: 11 pages, 7 figures
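For a small population the SDME distribution can be written down exactly by enumeration. The filters and the single positive coupling below are arbitrary placeholders; a real fit would learn the fields and couplings from data, and enumeration over 2^N codewords is only feasible for a handful of cells.

```python
import numpy as np
from itertools import product

def sdme_prob(stim, h_filters, J):
    """Stimulus-dependent maximum entropy model over binary codewords r:
    P(r | s) ~ exp( h(s) . r + 0.5 * r^T J r ), with stimulus-dependent
    single-cell fields h(s) = h_filters @ stim (the LN-style part) and
    pairwise couplings J. Exhaustive enumeration over all 2^n codewords."""
    n = J.shape[0]
    h = h_filters @ stim
    states = np.array(list(product([0, 1], repeat=n)))
    energy = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(energy)
    return states, p / p.sum()

rng = np.random.default_rng(7)
n_cells, dim = 4, 6
h_filters = rng.normal(size=(n_cells, dim))
J = np.zeros((n_cells, n_cells))
J[0, 1] = J[1, 0] = 2.0                 # one positive effective coupling

states, p = sdme_prob(rng.normal(size=dim), h_filters, J)
mean = p @ states
cov01 = p @ (states[:, 0] * states[:, 1]) - mean[0] * mean[1]
```

The positive coupling produces correlated spiking between cells 0 and 1 beyond what their stimulus-driven fields predict, which is exactly the stimulus-conditional structure an uncoupled model cannot capture.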
Identification of linear and nonlinear sensory processing circuits from spiking neuron data
Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
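The ideal integrate-and-fire case can be sketched as follows: each interspike interval yields one linear equation in the unknown neuron parameters (the t-transform), so bias and threshold fall out of a least-squares fit. The input signal and parameter values are invented, and this toy recovers only the neuron parameters, not the filters the letter's algorithms also identify.

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 2, dt)
u = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)  # analog input

# encode with an ideal integrate-and-fire neuron: bias b, capacitance kappa,
# threshold delta; a spike fires whenever the integral reaches kappa * delta
b, kappa, delta = 1.5, 1.0, 0.02
v, spike_times = 0.0, []
for ti, ui in zip(t, u):
    v += dt * (b + ui) / kappa
    if v >= delta:
        v -= delta                 # reset by subtraction limits Euler error
        spike_times.append(ti)

# t-transform: on each interspike interval [s_k, s_{k+1}],
#     integral of u  =  kappa * delta - b * (s_{k+1} - s_k),
# which is linear in the unknowns (kappa * delta, b) -> least squares
s = np.array(spike_times)
integrals = np.array([u[(t > a) & (t <= c)].sum() * dt
                      for a, c in zip(s[:-1], s[1:])])
A = np.column_stack([np.ones(len(integrals)), -(s[1:] - s[:-1])])
kd_est, b_est = np.linalg.lstsq(A, integrals, rcond=None)[0]
```

Given the sampled input and the recorded spike times alone, the fit recovers the bias and the threshold product to within discretization error, which is the flavor of identification the first algorithm generalizes to arbitrary nonlinear filters.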