Autoregressive Point-Processes as Latent State-Space Models: a Moment-Closure Approach to Fluctuations and Autocorrelations
Modeling and interpreting spike train data is a task of central importance in
computational neuroscience, with significant translational implications. Two
popular classes of data-driven models for this task are autoregressive Point
Process Generalized Linear models (PPGLM) and latent State-Space models (SSM)
with point-process observations. In this letter, we derive a mathematical
connection between these two classes of models. By introducing an auxiliary
history process, we represent a PPGLM exactly in terms of a latent,
infinite-dimensional dynamical system, which can then be mapped onto an SSM by basis
function projections and moment closure. This representation provides a new
perspective on widely used methods for modeling spike data, and also suggests
novel algorithmic approaches to fitting such models. We illustrate our results
on a phasic bursting neuron model, showing that our proposed approach provides
an accurate and efficient way to capture neural dynamics.
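The autoregressive PPGLM that the letter starts from can be sketched in a few lines: spike counts are Poisson-distributed with a log-rate given by a baseline plus a linear filter applied to the spike history. The filter shape, bin width, and parameter values below are illustrative assumptions, not the paper's phasic-bursting model.

```python
import numpy as np

# Minimal sketch of an autoregressive point-process GLM (PPGLM):
# each bin's spike count is Poisson with log-rate = baseline + history filter
# applied to recent spiking. All parameter values are illustrative.

rng = np.random.default_rng(0)

T = 500                                   # number of time bins
dt = 1e-3                                 # bin width (s)
baseline = 2.0                            # log baseline rate
h = -5.0 * np.exp(-np.arange(20) / 5.0)   # inhibitory history filter (recent lag first)

spikes = np.zeros(T)
for t in range(T):
    past = spikes[max(0, t - 20):t][::-1]           # most recent bin first
    log_rate = baseline + np.dot(h[:len(past)], past)
    spikes[t] = rng.poisson(np.exp(log_rate) * dt)
```

The auxiliary "history process" of the paper corresponds to tracking the vector `past` as a latent state, which is what allows the mapping onto a state-space model.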
Democrats and Republicans Can Be Differentiated from Their Faces
Background: Individuals' faces communicate a great deal of information about them. Although some of this information tends to be perceptually obvious (such as race and sex), much of it is perceptually ambiguous, without clear or obvious visual cues. Methodology/Principal Findings: Here we found that individuals' political affiliations could be accurately discerned from their faces. In Study 1, perceivers were able to accurately distinguish whether U.S. Senate candidates were either Democrats or Republicans based on photos of their faces. Study 2 showed that these effects extended to Democrat and Republican college students, based on their senior yearbook photos. Study 3 then showed that these judgments were related to differences in perceived traits among the Democrat and Republican faces. Republicans were perceived as more powerful than Democrats. Moreover, as individual targets were perceived to be more powerful, they were more likely to be perceived as Republicans by others. Similarly, as individual targets were perceived to be warmer, they were more likely to be perceived as Democrats. Conclusions/Significance: These data suggest that perceivers' beliefs about who is a Democrat and Republican may be based on perceptions of traits stereotypically associated with the two political parties and that, indeed, the guidance of these stereotypes may lead to categorizations of others' political affiliations at rates significantly more accurate than chance.
Causes and consequences of representational drift.
The nervous system learns new associations while maintaining memories over long periods, exhibiting a balance between flexibility and stability. Recent experiments reveal that neuronal representations of learned sensorimotor tasks continually change over days and weeks, even after animals have achieved expert behavioral performance. How is learned information stored to allow consistent behavior despite ongoing changes in neuronal activity? What functions could ongoing reconfiguration serve? We highlight recent experimental evidence for such representational drift in sensorimotor systems, and discuss how this fits into a framework of distributed population codes. We identify recent theoretical work that suggests computational roles for drift and argue that the recurrent and distributed nature of sensorimotor representations permits drift while limiting disruptive effects. We propose that representational drift may create error signals between interconnected brain regions that can be used to keep neural codes consistent in the presence of continual change. These concepts suggest experimental and theoretical approaches to studying both learning and maintenance of distributed and adaptive population codes. This work is supported by the Human Frontier Science Program, ERC grant StG 716643 FLEXNEURO, and NIH grants (NS108410, NS089521, MH107620).
Nuclear-level Effective Theory of Muon-to-Electron Conversion: Formalism and Applications
New mu-to-e conversion searches aim to advance limits on charged lepton
flavor violation (CLFV) by four orders of magnitude. By considering P and CP
selection rules and the structure of possible charge and current densities, we
show that rates are governed by six nuclear responses. To generate a
microscopic formulation of these responses, we construct in non-relativistic
effective theory (NRET) the CLFV nucleon-level interaction, then embed it in a
nucleus. We discuss previous work, noting the lack of a systematic treatment of
the various small parameters.
Because the momentum transfer is comparable to the inverse nuclear size, a
full multipole expansion of the response functions is necessary, a daunting
task with Coulomb-distorted electron partial waves. We perform such an
expansion to high precision by introducing a simplifying local electron
momentum, treating the full set of 16 NRET operators. Previous work has been
limited to the simplest charge/spin operators, ignored Coulomb distortion (or
alternatively truncated the partial-wave expansion), and neglected the nucleon
velocity operator, which is responsible for three of the response functions. This
generates inconsistencies in the treatment of small parameters. We obtain a
"master formula" for mu-to-e conversion that properly treats all such effects
and those of the muon velocity. We compute muon-to-electron conversion rates
for a series of experimental targets, deriving bounds on the coefficients of
the CLFV operators.
We discuss the nuclear physics: two types of coherence enhance certain CLFV
operators and selection rules blind elastic mu-to-e conversion to others. We
discuss the matching of the NRET onto higher-level EFTs, and the relation of
mu-to-e conversion to other CLFV tests. Finally, we describe a publicly
available script that can be used to compute mu-to-e conversion rates in
nuclear targets.
Comment: 50 pages, 10 figures; a few typos fixed in v
Variational log-Gaussian point-process methods for grid cells
We present practical solutions to applying Gaussian-process (GP) methods to calculate spatial statistics for grid cells in large environments. GPs are a data-efficient approach to inferring neural tuning as a function of time, space, and other variables. We discuss how to design appropriate kernels for grid cells, and show that a variational Bayesian approach to log-Gaussian Poisson models can be calculated quickly. This class of models has closed-form expressions for the evidence lower bound, and can be estimated rapidly for certain parameterizations of the posterior covariance. We provide an implementation that operates in a low-rank spatial frequency subspace for further acceleration, and demonstrate these methods on experimental data.
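The closed-form evidence lower bound mentioned above comes from the identity E_q[exp(z)] = exp(mu + s2/2) for a Gaussian posterior, which makes the expected Poisson log-likelihood analytic. A minimal sketch, assuming a factorized Gaussian posterior and prior (the paper's implementation uses richer, low-rank covariance parameterizations):

```python
import numpy as np

# Sketch: ELBO for a log-Gaussian Poisson model with posterior
# q(z) = N(mu, diag(s2)) and prior N(0, diag(p2)).
# The bound is closed-form because E_q[exp(z)] = exp(mu + s2/2).
# Variable names and the diagonal restriction are simplifying assumptions.

def elbo(y, mu, s2, p2):
    # Expected Poisson log-likelihood (dropping the constant log y! term)
    ell = np.sum(y * mu - np.exp(mu + s2 / 2.0))
    # KL(q || prior) between diagonal Gaussians
    kl = 0.5 * np.sum(s2 / p2 + mu**2 / p2 - 1.0 + np.log(p2) - np.log(s2))
    return ell - kl

y = np.array([0.0, 1.0, 3.0])    # spike counts per spatial bin
mu = np.log(y + 0.5)             # crude posterior-mean initialization
s2 = np.full(3, 0.1)             # posterior variances
p2 = np.full(3, 1.0)             # prior variances
bound = elbo(y, mu, s2, p2)
```

Maximizing `bound` over `mu` and `s2` (by gradient ascent or fixed-point updates) is the variational inference step; no sampling is required.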
Optimal Encoding in Stochastic Latent-Variable Models.
In this work we explore encoding strategies learned by statistical models of sensory coding in noisy spiking networks. Early stages of sensory communication in neural systems can be viewed as encoding channels in the information-theoretic sense. However, neural populations face constraints not commonly considered in communications theory. Using restricted Boltzmann machines as a model of sensory encoding, we find that networks with sufficient capacity learn to balance precision and noise-robustness in order to adaptively communicate stimuli with varying information content. Mirroring variability suppression observed in sensory systems, informative stimuli are encoded with high precision, at the cost of more variable responses to frequent, hence less informative stimuli. Curiously, we also find that statistical criticality in the neural population code emerges at model sizes where the input statistics are well captured. These phenomena have well-defined thermodynamic interpretations, and we discuss their connection to prevailing theories of coding and statistical criticality in neural populations.
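In the restricted-Boltzmann-machine view of sensory encoding used above, a stimulus pattern on the visible units is "encoded" as a stochastic hidden-unit response. A minimal sketch of that encoding step, with illustrative (not fitted) sizes and weights:

```python
import numpy as np

# Sketch of RBM sensory encoding: binary visible units (the stimulus)
# drive Bernoulli hidden units (the stochastic neural code).
# Network size and weights are illustrative assumptions.

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid = 8, 4
W = 0.1 * rng.standard_normal((n_vis, n_hid))   # visible-to-hidden weights
b_hid = np.zeros(n_hid)                         # hidden biases

v = rng.integers(0, 2, size=n_vis).astype(float)  # binary stimulus
p_h = sigmoid(v @ W + b_hid)                      # encoding distribution
h = (rng.random(n_hid) < p_h).astype(float)       # stochastic hidden code
```

The trade-off the paper studies is visible here: sharper `p_h` (values near 0 or 1) means a precise, low-variability code, while `p_h` near 0.5 means a noisy but flexible one.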
The information theory of developmental pruning: Optimizing global network architectures using local synaptic rules.
Funder: Studienstiftung des Deutschen Volkes; funder-id: http://dx.doi.org/10.13039/501100004350
Funder: Bundesministerium für Bildung und Forschung; funder-id: http://dx.doi.org/10.13039/501100002347
Funder: Max-Planck-Gesellschaft; funder-id: http://dx.doi.org/10.13039/501100004189
During development, biological neural networks produce more synapses and neurons than needed. Many of these synapses and neurons are later removed in a process known as neural pruning. Why networks should initially be over-populated, and the processes that determine which synapses and neurons are ultimately pruned, remains unclear. We study the mechanisms and significance of neural pruning in model neural networks. In a deep Boltzmann machine model of sensory encoding, we find that (1) synaptic pruning is necessary to learn efficient network architectures that retain computationally-relevant connections, (2) pruning by synaptic weight alone does not optimize network size, and (3) pruning based on a locally-available measure of importance based on Fisher information allows the network to identify structurally important vs. unimportant connections and neurons. This locally-available measure of importance has a biological interpretation in terms of the correlations between presynaptic and postsynaptic neurons, and implies an efficient activity-driven pruning rule. Overall, we show how local activity-dependent synaptic pruning can solve the global problem of optimizing a network architecture. We relate these findings to biology as follows: (I) Synaptic over-production is necessary for activity-dependent connectivity optimization. (II) In networks that have more neurons than needed, cells compete for activity, and only the most important and selective neurons are retained. (III) Cells may also be pruned due to a loss of synapses on their axons. This occurs when the information they convey is not relevant to the target population.
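A common locally-computable proxy for the Fisher-information importance described above is the empirical Fisher: the mean squared per-sample log-likelihood gradient for each weight. A hedged sketch for a single logistic unit (the data, model, and median threshold are illustrative assumptions, not the paper's deep Boltzmann machine):

```python
import numpy as np

# Sketch of activity-dependent pruning by an empirical Fisher score:
# per-weight importance = mean squared per-sample gradient of the
# Bernoulli log-likelihood; the least informative weights are pruned.

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_samples = 10, 200
X = rng.standard_normal((n_samples, n_in))        # presynaptic activity
w = rng.standard_normal(n_in)                     # synaptic weights
y = (rng.random(n_samples) < sigmoid(X @ w)).astype(float)  # postsynaptic spikes

err = y - sigmoid(X @ w)            # per-sample prediction error
grads = err[:, None] * X            # per-sample gradient wrt each weight
fisher = np.mean(grads**2, axis=0)  # empirical Fisher importance per weight

keep = fisher >= np.median(fisher)  # prune the least informative half
w_pruned = np.where(keep, w, 0.0)
```

Note that `grads` depends only on pre- and postsynaptic activity at each synapse, which is what makes the rule "locally available" in the paper's sense.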
Stable task information from an unstable neural population
Over days and weeks, neural activity representing an animal's position and movement in sensorimotor cortex has been found to continually reconfigure or "drift" during repeated trials of learned tasks, with no obvious change in behavior. This challenges classical theories, which assume stable engrams underlie stable behavior. However, it is not known whether this drift occurs systematically, allowing downstream circuits to extract consistent information. Analyzing long-term calcium imaging recordings from posterior parietal cortex in mice (Mus musculus), we show that drift is systematically constrained far above chance, facilitating a linear weighted readout of behavioral variables. However, a significant component of drift continually degrades a fixed readout, implying that drift is not confined to a null coding space. We calculate the amount of plasticity required to compensate drift independently of any learning rule, and find that this is within physiologically achievable bounds. We demonstrate that a simple, biologically plausible local learning rule can achieve these bounds, accurately decoding behavior over many days.
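The decoding setup described above can be illustrated with a toy simulation: a behavioral variable is read out linearly from a population whose tuning drifts slowly from day to day, and refitting the readout each day shows that a small weight update suffices to maintain decoding. All dynamics and sizes below are illustrative assumptions, not the paper's calcium-imaging data or learning rule.

```python
import numpy as np

# Toy drift-compensation sketch: population tuning performs a random
# walk across "days"; a least-squares linear readout refit each day
# keeps decoding accurate despite the drift.

rng = np.random.default_rng(3)

n_neurons, n_trials, n_days = 50, 200, 5
behavior = rng.standard_normal(n_trials)     # e.g. position on each trial
tuning = rng.standard_normal(n_neurons)      # day-0 population tuning

for day in range(n_days):
    tuning = tuning + 0.05 * rng.standard_normal(n_neurons)   # slow drift
    X = np.outer(behavior, tuning) + 0.1 * rng.standard_normal((n_trials, n_neurons))
    w, *_ = np.linalg.lstsq(X, behavior, rcond=None)           # daily readout refit
    r = np.corrcoef(X @ w, behavior)[0, 1]                     # decoding accuracy
```

Comparing `w` across days (rather than refitting from scratch) gives a lower bound on the plasticity needed to track the drift, in the spirit of the paper's rule-independent calculation.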
- …