Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex
Neocortical neurons have thousands of excitatory synapses. It is a mystery
how neurons integrate the input from so many synapses and what kind of
large-scale network behavior this enables. It has been previously proposed that
non-linear properties of dendrites enable neurons to recognize multiple
patterns. In this paper we extend this idea by showing that a neuron with
several thousand synapses arranged along active dendrites can learn to
accurately and robustly recognize hundreds of unique patterns of cellular
activity, even in the presence of large amounts of noise and pattern variation.
We then propose a neuron model where some of the patterns recognized by a
neuron lead to action potentials and define the classic receptive field of the
neuron, whereas the majority of the patterns recognized by a neuron act as
predictions by slightly depolarizing the neuron without immediately generating
an action potential. We then present a network model based on neurons with
these properties and show that the network learns a robust model of time-based
sequences. Given the similarity of excitatory neurons throughout the neocortex
and the importance of sequence memory in inference and behavior, we propose
that this form of sequence memory is a universal property of neocortical
tissue. We further propose that cellular layers in the neocortex implement
variations of the same sequence memory algorithm to achieve different aspects
of inference and behavior. The neuron and network models we introduce are
robust over a wide range of parameters as long as the network uses a sparse
distributed code of cellular activations. The sequence capacity of the network
scales linearly with the number of synapses on each neuron. Thus neurons need
thousands of synapses to learn the many temporal patterns in sensory stimuli
and motor sequences.
Comment: Submitted for publication
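The active-dendrite recognition scheme described above can be made concrete with a short sketch. This is an illustrative toy, not the authors' published model: the population size, sparsity level, synapses per segment, and match threshold below are assumptions chosen to sit in the sparse-coding regime the abstract requires. Each dendritic segment stores a subsample of a sparse pattern and matches when enough of its synapses see active cells; proximal matches drive a spike, distal matches only depolarize (predict).

```python
import random

random.seed(0)

N_CELLS = 2048           # population size (assumed)
ACTIVE = 40              # ~2% sparse activity (assumed)
SEG_SYNAPSES = 20        # synapses stored per dendritic segment (assumed)
THRESHOLD = 12           # segment matches if this many synapses see activity

def random_sdr():
    """A sparse distributed representation: a small random set of active cells."""
    return set(random.sample(range(N_CELLS), ACTIVE))

class Neuron:
    def __init__(self):
        self.proximal = []   # segments defining the classic receptive field
        self.distal = []     # segments whose matches act as predictions

    def learn(self, pattern, kind):
        # A segment stores only a subsample of the pattern's active cells.
        segment = set(random.sample(sorted(pattern), SEG_SYNAPSES))
        (self.proximal if kind == "proximal" else self.distal).append(segment)

    def respond(self, active_cells):
        # Sub-unanimous thresholds on subsampled segments give robustness
        # to noise and pattern variation.
        if any(len(s & active_cells) >= THRESHOLD for s in self.proximal):
            return "spike"          # classic receptive-field response
        if any(len(s & active_cells) >= THRESHOLD for s in self.distal):
            return "depolarized"    # prediction, no immediate action potential
        return "inactive"

neuron = Neuron()
feedforward = random_sdr()
contexts = [random_sdr() for _ in range(200)]   # hundreds of learned patterns
neuron.learn(feedforward, "proximal")
for ctx in contexts:
    neuron.learn(ctx, "distal")

# Recognition survives 20% corruption: drop 8 of the 40 context cells and add
# 8 random distractors; the segment still retains at least 12 matching synapses.
noisy = set(sorted(contexts[0])[8:]) | set(random.sample(range(N_CELLS), 8))
print(neuron.respond(feedforward))   # spike
print(neuron.respond(noisy))         # depolarized
```

Because each segment subsamples its pattern and matches at a sub-unanimous threshold, a false match against an unrelated sparse pattern is astronomically unlikely, which is what lets one neuron hold hundreds of such segments.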
Statistical physics of neural systems with non-additive dendritic coupling
How neurons process their inputs crucially determines the dynamics of
biological and artificial neural networks. In such neural and neural-like
systems, synaptic input is typically considered to be merely transmitted
linearly or sublinearly by the dendritic compartments. Yet, single-neuron
experiments report pronounced supralinear dendritic summation of sufficiently
synchronous and spatially close-by inputs. Here, we provide a statistical
physics approach to study the impact of such non-additive dendritic processing
on single neuron responses and the performance of associative memory tasks in
artificial neural networks. First, we compute the effect of random input to a
neuron incorporating nonlinear dendrites. This approach is independent of the
details of the neuronal dynamics. Second, we use those results to study the
impact of dendritic nonlinearities on the network dynamics in a paradigmatic
model for associative memory, both numerically and analytically. We find that
dendritic nonlinearities maintain network convergence and increase the
robustness of memory performance against noise. Interestingly, an intermediate
number of dendritic branches is optimal for memory functionality.
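A toy version of non-additive dendritic coupling in an associative memory can make the mechanism concrete. The sketch below is an assumption-laden illustration, not the paper's mean-field analysis: a standard Hebbian (Hopfield-style) network whose inputs to each neuron are split into dendritic branches, with branch sums amplified supralinearly above a threshold. The branch count, `theta`, and `gain` are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

N, P, BRANCHES = 200, 10, 10      # neurons, stored patterns, dendrites per neuron

patterns = rng.choice([-1, 1], size=(P, N))

# Standard Hebbian weights with zero self-coupling.
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0)

def branch_sums(state):
    # Partition each neuron's N inputs into equal-size dendritic branches
    # and sum the synaptic drive within each branch separately.
    contrib = W * state               # contrib[i, j]: drive from input j to neuron i
    return contrib.reshape(N, BRANCHES, -1).sum(axis=2)

def supralinear(b, theta=0.3, gain=3.0):
    # Non-additive dendrite: a branch whose local drive exceeds `theta` is
    # amplified, mimicking supralinear summation of clustered, synchronous input.
    return np.where(b > theta, gain * b, b)

def recall(cue, nonlinear=True, steps=20):
    state = cue.copy()
    for _ in range(steps):
        b = branch_sums(state)
        drive = (supralinear(b) if nonlinear else b).sum(axis=1)
        state = np.where(drive >= 0, 1, -1)
    return state

# Corrupt 20% of a stored pattern and recall it through the nonlinear dynamics.
cue = patterns[0].copy()
cue[rng.choice(N, size=N // 5, replace=False)] *= -1
overlap = (recall(cue) * patterns[0]).mean()
print(f"overlap with stored pattern after recall: {overlap:.2f}")
```

Varying `BRANCHES` in this sketch is one way to probe the abstract's claim that an intermediate number of dendritic branches is optimal: one branch recovers purely linear summation, while very many branches leave too few synapses per branch for the nonlinearity to act on coherent input.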
Hotspots of dendritic spine turnover facilitate clustered spine addition and learning and memory.
Modeling studies suggest that clustered structural plasticity of dendritic spines is an efficient mechanism of information storage in cortical circuits. However, why new clustered spines occur in specific locations and how their formation relates to learning and memory (L&M) remain unclear. Using in vivo two-photon microscopy, we track spine dynamics in retrosplenial cortex before, during, and after two forms of episodic-like learning and find that spine turnover before learning predicts future L&M performance, as well as the localization and rates of spine clustering. Consistent with the idea that these measures are causally related, a genetic manipulation that enhances spine turnover also enhances both L&M and spine clustering. Biophysically inspired modeling suggests turnover increases clustering, network sparsity, and memory capacity. These results support a hotspot model where spine turnover is the driver for localization of clustered spine formation, which serves to modulate network function, thus influencing storage capacity and L&M.
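The hotspot logic can be illustrated with a minimal stochastic sketch. All parameters below (dendrite length, hotspot sites, bias, clustering radius) are hypothetical and not taken from the paper's biophysical model: when a fraction of new spines is biased toward a few high-turnover sites, a simple clustering measure rises relative to uniform spine addition.

```python
import random

random.seed(2)

LENGTH = 1000                           # candidate spine sites along a dendrite
HOTSPOTS = [200, 201, 202, 600, 601]    # a few high-turnover sites (hypothetical)

def add_spines(hotspot_bias, n_spines=30):
    """Place new spines; with probability `hotspot_bias` a spine lands at a
    high-turnover site, otherwise uniformly along the dendrite."""
    spines = []
    for _ in range(n_spines):
        if random.random() < hotspot_bias:
            spines.append(random.choice(HOTSPOTS))
        else:
            spines.append(random.randrange(LENGTH))
    return spines

def clustering_index(spines, radius=2):
    # Fraction of spines with another spine within `radius` sites -- a crude
    # stand-in for the clustered-addition measures used in imaging studies.
    return sum(
        any(abs(s - t) <= radius for j, t in enumerate(spines) if j != i)
        for i, s in enumerate(spines)
    ) / len(spines)

uniform = clustering_index(add_spines(hotspot_bias=0.0))
biased = clustering_index(add_spines(hotspot_bias=0.6))
print(f"clustering (uniform addition): {uniform:.2f}")
print(f"clustering (hotspot-biased):   {biased:.2f}")
```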
Contributions of cortical feedback to sensory processing in primary visual cortex
Closing the structure-function divide is more challenging in the brain than in any other organ (Lichtman and Denk, 2011). For example, in early visual cortex, feedback projections to V1 can be quantified (e.g., Budd, 1998), but the understanding of feedback function is comparatively rudimentary (Muckli and Petro, 2013). Focusing on the function of feedback, we discuss how textbook descriptions mask the complexity of V1 responses, and how feedback and local activity reflect not only sensory processing but also internal brain states.
Reading out a spatiotemporal population code by imaging neighbouring parallel fibre axons in vivo.
The spatiotemporal pattern of synaptic inputs to the dendritic tree is crucial for synaptic integration and plasticity. However, it is not known whether input patterns driven by sensory stimuli are structured or random. Here we investigate the spatial patterning of synaptic inputs by directly monitoring presynaptic activity in the intact mouse brain on the micron scale. Using in vivo calcium imaging of multiple neighbouring cerebellar parallel fibre axons, we find evidence for clustered patterns of axonal activity during sensory processing. The clustered parallel fibre input we observe is ideally suited for driving dendritic spikes, postsynaptic calcium signalling, and synaptic plasticity in downstream Purkinje cells, and is thus likely to be a major feature of cerebellar function during sensory processing.