63 research outputs found
The sign rule and beyond: Boundary effects, flexibility, and noise correlations in neural population codes
Over repeat presentations of the same stimulus, sensory neurons show variable
responses. This "noise" is typically correlated between pairs of cells, and a
question with a rich history in neuroscience is how these noise correlations
impact the population's ability to encode the stimulus. Here, we consider a
very general setting for population coding, investigating how information
varies as a function of noise correlations, with all other aspects of the
problem - neural tuning curves, etc. - held fixed. This work yields unifying
insights into the role of noise correlations. These are summarized in the form
of theorems, and illustrated with numerical examples involving neurons with
diverse tuning curves. Our main contributions are as follows.
(1) We generalize previous results to prove a sign rule (SR): if the noise
correlations between pairs of neurons have the opposite sign to their signal
correlations, then coding performance will improve compared to the independent
case. This holds for three different metrics of coding performance, and for
arbitrary tuning curves and levels of heterogeneity. The same generality
applies to our other results as well.
(2) As also pointed out in the literature, the SR does not provide a
necessary condition for good coding. We show that a diverse set of correlation
structures can improve coding. Many of these violate the SR, as do
experimentally observed correlations. There is structure to this diversity: we
prove that the optimal correlation structures must lie on the boundary of the
set of possible noise correlations.
(3) We provide a novel set of necessary and sufficient conditions under
which the coding performance (in the presence of noise) will be as good as it
would be if there were no noise present at all.
Comment: 41 pages, 5 figures
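To make the objects in the theorems concrete: for a population with Gaussian noise, the linear Fisher information is I(s) = f'(s)^T Sigma^{-1} f'(s), where f'(s) collects the tuning-curve slopes and Sigma is the noise covariance. The sketch below is not from the paper; the tuning curves are invented, signal correlations are proxied (for fine discrimination) by the sign of the slope products, and the correlation magnitude is kept small so the covariance stays positive definite. It simply shows the SR numerically: the sign-rule covariance yields more information than independent noise.

```python
import numpy as np

def linear_fisher(df, sigma):
    """Linear Fisher information I(s) = f'(s)^T Sigma^{-1} f'(s)."""
    return df @ np.linalg.solve(sigma, df)

n, s = 20, 0.3
centers = np.linspace(-1.0, 1.0, n)                   # hypothetical Gaussian tuning curves
df = -(s - centers) / 0.25 * np.exp(-(s - centers) ** 2 / 0.5)  # slopes f'(s)

var = 0.1 * np.ones(n)
indep = np.diag(var)                                  # independent noise

# Sign-rule noise: correlation sign opposite to the signal correlations,
# proxied here by sign(f'_i * f'_j); magnitude 0.03 keeps it positive definite.
rho = -0.03 * np.sign(np.outer(df, df))
np.fill_diagonal(rho, 1.0)
sign_rule = np.sqrt(np.outer(var, var)) * rho

print("independent:", linear_fisher(df, indep))
print("sign rule:  ", linear_fisher(df, sign_rule))   # larger, per the SR
```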
Searching for modified growth patterns with tomographic surveys
In alternative theories of gravity, designed to produce cosmic acceleration
at the current epoch, the growth of large-scale structure can be modified. We
study the potential of upcoming and future tomographic surveys such as DES and
LSST, with the aid of CMB and supernovae data, to detect departures from the
growth of cosmic structure expected within General Relativity. We employ
parametric forms to quantify the potential time- and scale-dependent variation
of the effective gravitational constant, and the differences between the two
Newtonian potentials. We then apply the Fisher matrix technique to forecast the
errors on the modified growth parameters from galaxy clustering, weak lensing,
CMB, and their cross-correlations across multiple photometric redshift bins. We
find that even with conservative assumptions about the data, DES will produce
non-trivial constraints on modified growth, and that LSST will do significantly
better.
Comment: Matches the version accepted to PRD. New plots, typos fixed,
references added. The MGCAMB code is available at
http://www.sfu.ca/~gza5/MGCAMB.htm
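For readers unfamiliar with the forecasting machinery: the Fisher matrix F_ij = sum_ab (dO_a/dp_i) C^{-1}_ab (dO_b/dp_j) approximates the information the observables O carry about the parameters p, and the square roots of the diagonal of F^{-1} give the forecast marginalized 1-sigma errors. A toy sketch with invented derivatives and data covariance, not the DES/LSST observables analyzed in the paper:

```python
import numpy as np

def fisher_matrix(dO_dp, cov):
    """F_ij = sum_ab (dO_a/dp_i) Cinv_ab (dO_b/dp_j)."""
    cinv_d = np.linalg.solve(cov, dO_dp)          # C^{-1} dO/dp
    return dO_dp.T @ cinv_d

rng = np.random.default_rng(1)
n_obs, n_par = 50, 3                              # e.g. spectrum bins x MG parameters
dO_dp = rng.normal(size=(n_obs, n_par))           # hypothetical derivatives dO/dp
noise = rng.normal(size=(n_obs, n_obs))
cov = noise @ noise.T + n_obs * np.eye(n_obs)     # a positive-definite data covariance

F = fisher_matrix(dO_dp, cov)
errors = np.sqrt(np.diag(np.linalg.inv(F)))       # marginalized 1-sigma forecasts
print("forecast errors:", errors)
```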
A sparse coding model with synaptically local plasticity and spiking neurons can account for the diverse shapes of V1 simple cell receptive fields
Sparse coding algorithms trained on natural images can accurately predict the
features that excite visual cortical neurons, but it is not known whether such
codes can be learned using biologically realistic plasticity rules. We have
developed a biophysically motivated spiking network, relying solely on
synaptically local information, that can predict the full diversity of V1
simple cell receptive field shapes when trained on natural images. This
represents the first demonstration that sparse coding principles, operating
within the constraints imposed by cortical architecture, can successfully
reproduce these receptive fields. We further prove, mathematically, that
sparseness and decorrelation are the key ingredients that allow
synaptically local plasticity rules to optimize a cooperative, linear
generative image model formed by the neural representation. Finally, we discuss
several interesting emergent properties of our network, with the intent of
bridging the gap between theoretical and experimental studies of visual cortex.
Comment: 33 pages, 6 figures. To appear in PLoS Computational Biology. Some of
these data were presented by author JZ at the 2011 CoSyNe meeting in Salt
Lake City.
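As a rough illustration of what "synaptically local" means here, each update below uses only quantities available at that synapse or neuron: Hebbian feedforward learning, anti-Hebbian lateral inhibition for decorrelation, and homeostatic thresholds for sparseness. The specific rules, constants, and the rate-based (rather than spiking) dynamics are simplifications for illustration, not the network from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, p = 64, 32, 0.05              # inputs, units, target firing rate
Q = 0.1 * rng.normal(size=(n_out, n_in))   # feedforward weights
W = np.zeros((n_out, n_out))               # lateral inhibitory weights
theta = np.ones(n_out)                     # adaptive thresholds

def infer(x, steps=30, dt=0.1):
    """Rate approximation of the lateral-inhibition circuit dynamics."""
    u = np.zeros(n_out)
    for _ in range(steps):
        u += dt * (Q @ x - W @ np.maximum(u - theta, 0) - u)
    return np.maximum(u - theta, 0)        # rectified "firing rates"

for _ in range(1000):
    x = rng.normal(size=n_in)              # stand-in for a whitened image patch
    y = infer(x)
    # Every update is local to one synapse or one neuron:
    Q += 0.01 * (np.outer(y, x) - (y ** 2)[:, None] * Q)  # Hebbian, with decay
    W += 0.01 * (np.outer(y, y) - p ** 2)                 # anti-Hebbian decorrelation
    np.fill_diagonal(W, 0.0)
    W = np.maximum(W, 0.0)                                # inhibitory, no self-connections
    theta += 0.01 * (y - p)                               # homeostatic sparseness
```

In a spiking implementation the rates y would be replaced by spike counts, but the locality of each update is unchanged.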
Identifying Shared Decodable Concepts in the Human Brain Using Image-Language Foundation Models
We introduce a method that takes advantage of high-quality pretrained
multimodal representations to explore fine-grained semantic networks in the
human brain. Previous studies have documented evidence of functional
localization in the brain, with different anatomical regions preferentially
activating for different types of sensory input. Many such localized structures
are known, including the fusiform face area and the parahippocampal place area.
This raises the question of whether additional brain regions (or conjunctions
of brain regions) are also specialized for other important semantic concepts.
To identify such brain regions, we developed a data-driven approach to uncover
visual concepts that are decodable from a massive functional magnetic resonance
imaging (fMRI) dataset. Our analysis is broadly split into three sections.
First, a fully connected neural network is trained to map brain responses to
the outputs of an image-language foundation model, CLIP (Radford et al., 2021).
Subsequently, a contrastive-learning dimensionality reduction method reveals
the brain-decodable components of CLIP space. In the final section of our
analysis, we localize shared decodable concepts in the brain using a
voxel-masking optimization method to produce a shared decodable concept (SDC)
space. The accuracy of our procedure is validated by comparing it to previous
localization experiments that identify regions for faces, bodies, and places.
In addition to these concepts, whose corresponding brain regions were already
known, we localize novel concept representations which are shared across
participants to other areas of the human brain. We also demonstrate how this
method can be used to inspect fine-grained semantic networks for individual
participants. We envisage that this extensible method can also be adapted to
explore other questions at the intersection of AI and neuroscience.
Comment: Under review
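To sketch the first stage only (the voxel-to-CLIP mapping), one can fit a small fully connected network whose targets are the CLIP embeddings of the images each participant viewed. The layer sizes, cosine-similarity loss, and optimizer settings below are illustrative placeholders, not the authors' configuration:

```python
import torch
import torch.nn as nn

n_voxels, clip_dim = 10_000, 512          # placeholder dimensions

# Fully connected network mapping brain responses into CLIP embedding space.
model = nn.Sequential(
    nn.Linear(n_voxels, 2048),
    nn.ReLU(),
    nn.Linear(2048, clip_dim),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(voxels, clip_targets):
    """One step: push predicted embeddings toward the CLIP embeddings of
    the images the participant was viewing (cosine-similarity loss)."""
    pred = model(voxels)
    loss = 1.0 - nn.functional.cosine_similarity(pred, clip_targets).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy batch standing in for (fMRI responses, CLIP image embeddings):
voxels = torch.randn(8, n_voxels)
targets = torch.randn(8, clip_dim)
print(train_step(voxels, targets))
```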
Biophysical neural adaptation mechanisms enable artificial neural networks to capture dynamic retinal computation
Adaptation is a universal aspect of neural systems that changes circuit computations to match prevailing inputs. These changes facilitate efficient encoding of sensory inputs while avoiding saturation. Conventional artificial neural networks (ANNs) have limited adaptive capabilities, hindering their ability to reliably predict neural output under dynamic input conditions. Can embedding neural adaptive mechanisms in ANNs improve their performance? To answer this question, we develop a new deep learning model of the retina that incorporates the biophysics of photoreceptor adaptation at the front end of conventional convolutional neural networks (CNNs). These conventional CNNs build on 'Deep Retina,' a previously developed model of retinal ganglion cell (RGC) activity. CNNs that include this new photoreceptor layer outperform conventional CNN models at predicting male and female primate and rat RGC responses to naturalistic stimuli that include dynamic local intensity changes and large changes in ambient illumination. These improved predictions result directly from adaptation within the phototransduction cascade. This research underscores the potential of embedding models of neural adaptation in ANNs and using them to determine how neural circuits manage the complexities of encoding natural inputs that are dynamic and span a large range of light levels.
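For intuition about what an adaptive front end buys, here is a toy divisive-adaptation layer: each pixel's gain is normalized by a slowly updated estimate of recent intensity, so a step in ambient illumination produces a large transient response that then adapts away. This is a qualitative stand-in, not the phototransduction-cascade model the paper embeds in its CNNs; all constants are invented.

```python
import numpy as np

def adaptive_frontend(stim, tau=50.0, dt=1.0, k=1.0):
    """Toy luminance adaptation: responses are divisively normalized by a
    low-pass estimate of recent mean intensity (illustrative only)."""
    a = np.mean(stim[0]) * np.ones_like(stim[0])   # adaptation state per pixel
    out = np.empty_like(stim)
    for t, frame in enumerate(stim):
        out[t] = frame / (k + a)                   # gain falls as mean light rises
        a += (dt / tau) * (frame - a)              # slow estimate of intensity
    return out

# A step increase in ambient illumination: large initial transient,
# then the gain adapts back down toward baseline.
stim = np.concatenate([np.ones((100, 4, 4)), 10 * np.ones((200, 4, 4))])
resp = adaptive_frontend(stim)
print(resp[99, 0, 0], resp[100, 0, 0], resp[-1, 0, 0])
```

The output of such a layer would then feed a conventional CNN such as the Deep Retina architecture described above.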
- …