Synaptic potentiation facilitates memory-like attractor dynamics in cultured in vitro hippocampal networks
Collective rhythmic dynamics of neurons are vital for cognitive functions
such as memory formation, but how neurons self-organize to produce such activity
is not well understood. Attractor-based models have been successfully
implemented as a theoretical framework for memory storage in networks of
neurons. Activity-dependent modification of synaptic transmission is thought to
be the physiological basis of learning and memory. The goal of this study is to
demonstrate that in vitro networks of hippocampal neurons, subjected to a
pharmacological perturbation known to increase synaptic strength, follow the
dynamical postulates theorized by attractor models. We use a grid of
extracellular electrodes to study changes in network activity after this
perturbation and show that there is a persistent increase in overall spiking
and bursting activity after treatment. This increase in activity appears to
recruit more "errant" spikes into bursts. Lastly, phase plots indicate a
conserved activity pattern, suggesting that the network is operating in a stable
dynamical state.
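The attractor picture invoked above can be illustrated with a minimal Hopfield-style network, in which Hebbian weights store binary patterns as fixed points and a corrupted cue relaxes back to the stored memory. This is a generic sketch, not the study's model; the network size, number of patterns, and noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # binary (+/-1) neurons
patterns = rng.choice([-1, 1], size=(3, N))    # stored "memories"

# Hebbian weights: W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)

def recall(state, steps=20):
    """Synchronous updates until the state stops changing (a fixed point)."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1                      # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Corrupt 10% of pattern 0 and let the network settle back into the attractor.
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1
out = recall(cue)
overlap = (out @ patterns[0]) / N              # 1.0 means perfect recall
print(overlap)
```

With only a few stored patterns the corrupted cue lies well inside the basin of attraction, so the overlap with the original pattern recovers to near 1.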
Homeostatic plasticity and external input shape neural network dynamics
In vitro and in vivo spiking activity clearly differ. Whereas networks in
vitro develop strong bursts separated by periods of very little spiking
activity, in vivo cortical networks show continuous activity. This is puzzling
considering that both networks presumably share similar single-neuron dynamics
and plasticity rules. We propose that the defining difference between in vitro
and in vivo dynamics is the strength of external input. In vitro, networks are
virtually isolated, whereas in vivo every brain area receives continuous input.
We analyze a model of spiking neurons in which the input strength, mediated by
spike rate homeostasis, determines the characteristics of the dynamical state.
In more detail, our analytical and numerical results on various network
topologies show consistently that under increasing input, homeostatic
plasticity generates distinct dynamic states, from bursting to
close-to-critical, reverberating, and irregular states. This implies that the
dynamic state of a neural network is not fixed but can readily adapt to the
input strength. Indeed, our results match experimental spike recordings in
vitro and in vivo: the in vitro bursting behavior is consistent with a state
generated by very low network input (< 0.1%), whereas in vivo activity suggests
that on the order of 1% recorded spikes are input-driven, resulting in
reverberating dynamics. Importantly, this predicts that one can abolish the
ubiquitous bursts of in vitro preparations, and instead impose dynamics
comparable to in vivo activity by exposing the system to weak long-term
stimulation, thereby opening new paths to establish an in vivo-like assay in
vitro for basic as well as neurological studies.
Comment: 14 pages, 8 figures, accepted at Phys. Rev.
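The core prediction, that homeostatic regulation drives the recurrent coupling toward criticality as external input weakens, can be sketched with a one-population rate model. This is a deliberately reduced caricature of the paper's spiking model; `r_target`, `eta`, and the linear rate update are illustrative assumptions.

```python
# Minimal homeostatic rate model (a sketch, not the paper's full spiking model).
# A single population with rate r obeys r -> m*r + h, where m is the recurrent
# coupling (branching parameter) and h the external input rate. Homeostasis
# slowly adjusts m so that r settles at a target rate r_target; at the fixed
# point m = 1 - h/r_target, so weaker input pushes m toward the critical value 1.

r_target = 1.0          # homeostatic set point (arbitrary units)
eta = 0.01              # homeostatic learning rate

def steady_state_coupling(h, steps=20000):
    m, r = 0.5, r_target
    for _ in range(steps):
        r = m * r + h                  # one-step linear rate update
        m += eta * (r_target - r)      # homeostasis: raise m if rate too low
        m = min(max(m, 0.0), 0.999)    # keep the coupling subcritical
    return m

for h in (0.001, 0.01, 0.1):           # input strength relative to target rate
    print(h, round(steady_state_coupling(h), 3))
```

As the input fraction h drops from 10% to 0.1% of the target rate, the self-organized coupling climbs from a clearly subcritical value toward 1, mirroring the transition from irregular in vivo-like dynamics to close-to-critical bursting in isolated cultures.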
Perspective: network-guided pattern formation of neural dynamics
The understanding of neural activity patterns is fundamentally linked to an
understanding of how the brain's network architecture shapes dynamical
processes. Established approaches rely mostly on deviations of a given network
from certain classes of random graphs. Hypotheses about the supposed role of
prominent topological features (for instance, the roles of modularity, network
motifs, or hierarchical network organization) are derived from these
deviations. An alternative strategy could be to study deviations of network
architectures from regular graphs (rings, lattices) and consider the
implications of such deviations for self-organized dynamic patterns on the
network. Following this strategy, we draw on the theory of spatiotemporal
pattern formation and propose a novel perspective for analyzing dynamics on
networks, by evaluating how the self-organized dynamics are confined by network
architecture to a small set of permissible collective states. In particular, we
discuss the role of prominent topological features of brain connectivity, such
as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the
notion of network-guided pattern formation with numerical simulations and
outline how it can facilitate the understanding of neural dynamics.
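The contrast between regular graphs and their perturbations can be made concrete by comparing the algebraic connectivity (the Laplacian's second-smallest eigenvalue, which sets how quickly diffusive dynamics settle into a collective state) of a ring lattice and a randomly rewired version of it. The network size and rewiring probability below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50

def ring_adjacency(k=2):
    """Ring lattice: each node connects to its k nearest neighbours per side."""
    A = np.zeros((N, N))
    for i in range(N):
        for d in range(1, k + 1):
            A[i, (i + d) % N] = A[i, (i - d) % N] = 1
    return A

def rewire(A, p=0.2):
    """Watts-Strogatz-style rewiring: move a fraction p of edges at random."""
    A = A.copy()
    edges = [(i, j) for i in range(N) for j in range(i + 1, N) if A[i, j]]
    for i, j in edges:
        if rng.random() < p:
            candidates = [m for m in range(N) if m != i and not A[i, m]]
            m = rng.choice(candidates)
            A[i, j] = A[j, i] = 0
            A[i, m] = A[m, i] = 1
    return A

def algebraic_connectivity(A):
    L = np.diag(A.sum(axis=1)) - A              # graph Laplacian
    return np.sort(np.linalg.eigvalsh(L))[1]    # second-smallest eigenvalue

lam_ring = algebraic_connectivity(ring_adjacency())
lam_sw = algebraic_connectivity(rewire(ring_adjacency()))
print(lam_ring, lam_sw)
```

A few random shortcuts raise the algebraic connectivity well above the ring's value, illustrating how small deviations from a regular graph reshape the set of slow collective modes available to the dynamics.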
Songbird organotypic culture as an in vitro model for interrogating sparse sequencing networks
Sparse sequences of neuronal activity are fundamental features of neural circuit computation; however, the underlying homeostatic mechanisms remain poorly understood. To approach these questions, we have developed a method for cellular-resolution imaging in organotypic cultures of the adult zebra finch brain, including portions of the intact song circuit. These in vitro networks can survive for weeks and display mature neuron morphologies. Neurons within the organotypic slices exhibit a diversity of spontaneous and pharmacologically induced activity that can be easily monitored using the genetically encoded calcium indicator GCaMP6. In this study, we primarily focus on the classic song sequence generator HVC and the surrounding areas. We describe proof-of-concept experiments including physiological, optical, and pharmacological manipulation of these exposed networks. This method may allow the cellular rules underlying sparse, stereotyped neural sequencing to be examined with new degrees of experimental control.
Neutral theory and scale-free neural dynamics
Avalanches of electrochemical activity in brain networks have been
empirically reported to obey scale-invariant behavior --characterized by
power-law distributions up to some upper cut-off-- both in vitro and in vivo.
Elucidating whether such scaling laws stem from the underlying neural dynamics
operating at the edge of a phase transition is a fascinating possibility, as
systems poised at criticality have been argued to exhibit a number of important
functional advantages. Here we employ a well-known model for neural dynamics
with synaptic plasticity, to elucidate an alternative scenario in which
neuronal avalanches can coexist, overlapping in time, but still remaining
scale-free. Remarkably, their scale-invariance does not stem from underlying
criticality nor self-organization at the edge of a continuous phase transition.
Instead, it emerges from the fact that perturbations to the system exhibit a
neutral drift --guided by demographic fluctuations-- with respect to endogenous
spontaneous activity. Such a neutral dynamics --similar to the one in neutral
theories of population genetics-- implies marginal propagation of activity,
characterized by power-law distributed causal avalanches. Importantly, our
results underline the importance of considering causal information --on which
neuron triggers the firing of which-- to properly estimate the statistics of
avalanches of neural activity. We discuss the implications of these findings
both for modeling and for elucidating experimental observations, as well as
their possible consequences for neural dynamics and information processing in
actual neural networks.
Comment: Main text: 8 pages, 3 figures. Supplementary information: 5 pages, 4
figures
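The notion of a causal avalanche, the tree of spikes descended from one externally triggered spike regardless of temporal overlap with other trees, can be sketched with a critical branching process, where marginal (neutral) propagation yields heavy-tailed sizes. This toy model stands in for the paper's plastic network; the offspring statistics, size cap, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Marginal (neutral) propagation: each spike triggers on average exactly one
# descendant, as in a critical Galton-Watson branching process. The "causal
# avalanche" is the full tree of descendants of one externally driven spike,
# however much trees rooted in different spikes overlap in time.

def causal_avalanche_size(max_size=10_000):
    active, size = 1, 1
    while active and size < max_size:
        offspring = rng.poisson(1.0, size=active).sum()  # mean-1 branching
        size += offspring
        active = offspring
    return size

sizes = np.array([causal_avalanche_size() for _ in range(5000)])
# At criticality P(S) ~ S^(-3/2): most causal trees are tiny, but rare trees
# are enormous, so the distribution is scale-free up to the imposed cut-off.
print(np.median(sizes), sizes.max())
```

The median size stays small while the largest trees span orders of magnitude more spikes, the signature of power-law distributed causal avalanches that would be missed by binning spikes into temporally defined events.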
Emergent complex neural dynamics
A large repertoire of spatiotemporal activity patterns in the brain is the
basis for adaptive behaviour. Understanding the mechanism by which the brain's
hundred billion neurons and hundred trillion synapses manage to produce such a
range of cortical configurations in a flexible manner remains a fundamental
problem in neuroscience. One plausible solution is the involvement of universal
mechanisms of emergent complex phenomena evident in dynamical systems poised
near a critical point of a second-order phase transition. We review recent
theoretical and empirical results supporting the notion that the brain is
naturally poised near criticality, as well as its implications for a better
understanding of the brain.
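A standard toy model for proximity to a second-order phase transition is a branching process with branching parameter m: the mean cascade size is 1/(1 - m) for m < 1 and diverges at the critical point m = 1, where the repertoire of cascade sizes becomes scale-free. The sketch below, with illustrative trial counts, compares simulated means against this analytical value.

```python
import numpy as np

rng = np.random.default_rng(3)

# Branching process as a caricature of cortical activity: each active neuron
# activates on average m others. Summing `active` independent Poisson(m)
# offspring counts is equivalent to drawing one Poisson(m * active) sample.

def mean_cascade_size(m, trials=2000, cap=100_000):
    total = 0
    for _ in range(trials):
        active, size = 1, 1
        while active and size < cap:
            active = rng.poisson(m * active)
            size += active
        total += size
    return total / trials

for m in (0.5, 0.9, 0.99):
    print(m, round(mean_cascade_size(m), 1), round(1 / (1 - m), 1))
```

The simulated means track 1/(1 - m) and grow without bound as m approaches 1, illustrating why a brain tuned near the critical point commands the widest repertoire of spatiotemporal activity patterns.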
Neural population coding: combining insights from microscopic and mass signals
Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states impact local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches are beginning to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.