Understanding visual map formation through vortex dynamics of spin Hamiltonian models
Pattern formation in orientation and ocular dominance columns is one of
the most investigated problems in the brain. Starting from a known cortical
structure, we build spin-like Hamiltonian models with long-range interactions
of the Mexican-hat type. These Hamiltonian models allow a coherent
interpretation of the diverse phenomena in visual map formation with the help
of the relaxation dynamics of spin systems. In particular, we explain various
phenomena of self-organization in orientation and ocular dominance map
formation, including pinwheel annihilation and its dependence on the columnar
wave vector and boundary conditions.
Comment: 4 pages, 15 figures
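The "Mexican hat" interaction referred to above is typically modeled as a difference of Gaussians: short-range excitation minus longer-range inhibition. A minimal sketch (function name and all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def mexican_hat(r, sigma_e=1.0, sigma_i=2.0, a_e=1.0, a_i=0.5):
    """Difference-of-Gaussians kernel: short-range excitation (width sigma_e)
    minus longer-range inhibition (width sigma_i)."""
    r = np.asarray(r, dtype=float)
    return (a_e * np.exp(-r**2 / (2 * sigma_e**2))
            - a_i * np.exp(-r**2 / (2 * sigma_i**2)))

# Excitatory at short range, inhibitory at intermediate range:
assert mexican_hat(0.0) > 0
assert mexican_hat(2.0) < 0
```

With these illustrative widths the kernel is positive near the origin and dips below zero at intermediate distances, the shape that drives the columnar patterning described in the abstract.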
Correlations and functional connections in a population of grid cells
We study the statistics of spike trains of simultaneously recorded grid cells
in freely behaving rats. We evaluate pairwise correlations between these cells
and, using a generalized linear model (kinetic Ising model), study their
functional connectivity. Even when we account for the covariations in firing
rates due to overlapping fields, both the pairwise correlations and functional
connections decay as a function of the shortest distance between the vertices
of the spatial firing pattern of pairs of grid cells, i.e. their phase
difference. The functional connectivity takes positive values between cells
with nearby phases and approaches zero or negative values for larger phase
differences. We also find similar results when, in addition to correlations due
to overlapping fields, we account for correlations due to theta oscillations
and head directional inputs. The inferred connections between neurons can be
both negative and positive regardless of whether the cells share common spatial
firing characteristics, that is, whether they belong to the same modules, or
not. The mean strength of these inferred connections is close to zero, but the
strongest inferred connections are found between cells of the same module.
Taken together, our results suggest that grid cells in the same module do
indeed form a local network of interconnected neurons with a functional
connectivity that supports a role for attractor dynamics in the generation of
the grid pattern.
Comment: Accepted for publication in PLoS Computational Biology
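The kinetic Ising model mentioned above amounts to a logistic regression of each cell's next spike state on the population's current state. A toy sketch of this inference on simulated data (the two-neuron setup, coupling value, and learning rate are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Simulate a 2-neuron kinetic Ising model with a known positive coupling:
# neuron 0 fires independently and drives neuron 1 one time step later.
T, J_true, h = 20000, 1.0, -1.0
s = np.zeros((T, 2))
for t in range(1, T):
    p0 = sigmoid(h)
    p1 = sigmoid(h + J_true * s[t - 1, 0])
    s[t] = rng.random(2) < (p0, p1)

# Infer the coupling J (and bias b) onto neuron 1 by gradient ascent
# on the kinetic-Ising (logistic) likelihood.
J, b = 0.0, 0.0
x, y = s[:-1, 0], s[1:, 1]
for _ in range(2000):
    p = sigmoid(b + J * x)
    J += 0.1 * np.mean((y - p) * x)
    b += 0.1 * np.mean(y - p)
```

The inferred `J` should land near the true coupling of 1.0, illustrating how a functional connection is read out from spike trains.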
Dynamic Control of Network Level Information Processing through Cholinergic Modulation
Acetylcholine (ACh) release is a prominent neurochemical marker of arousal state
within the brain. Changes in ACh are associated with changes in neural activity and
information processing, though its exact role and the mechanisms through which it
acts are unknown. Here I show that the dynamic changes in ACh levels that are
associated with arousal state control the information processing functions of
networks through their effects on the degree of Spike-Frequency Adaptation
(SFA), an activity-dependent decrease in excitability, synchronizability, and
neuronal resonance displayed by single cells. Using numerical modeling, I
develop mechanistic explanations
for how control of these properties shifts network activity from a stable high-frequency
spiking pattern to a traveling wave of activity. This transition mimics the change
in brain dynamics seen between high ACh states, such as waking and Rapid Eye
Movement (REM) sleep, and low ACh states such as Non-REM (NREM) sleep. A
corresponding, and related, transition in network-level memory recall also occurs
as ACh modulates neuronal SFA. When ACh is at its highest levels (waking), all
memories are stably recalled; as ACh is decreased (REM), weakly encoded
memories in the model destabilize while strong memories remain stable. At levels of ACh
that match Slow Wave Sleep (SWS), no encoded memories are stably recalled. This
results from a competition between SFA and excitatory input strength and provides
a mechanism for neural networks to control the representation of underlying synaptic
information. Finally, I show that during low ACh conditions, oscillatory
dynamics allow external inputs to be properly stored in and recalled from
synaptic weights. Taken together, this work demonstrates that dynamic neuromodulation is
critical for the regulation of information processing tasks in neural networks. These
results suggest that ACh is capable of switching networks between two distinct information
processing modes. Rate coding of information is facilitated during high
ACh conditions and phase coding of information is facilitated during low ACh conditions.
Finally I propose that ACh levels control whether a network is in one of
three functional states: (High ACh; Active waking) optimized for encoding of new
information or the stable representation of relevant memories, (Mid ACh; resting
state or REM) optimized for encoding connections between currently stored memories
or searching the catalog of stored memories, and (Low ACh; NREM) optimized
for renormalization of synaptic strength and memory consolidation. This work provides
a mechanistic insight into the role of dynamic changes in ACh levels for the
encoding, consolidation, and maintenance of memories within the brain.
PhD thesis, Neuroscience, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147503/1/roachjp_1.pd
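Spike-frequency adaptation of the kind ACh is said to modulate can be illustrated with a leaky integrate-and-fire neuron carrying a spike-triggered adaptation current. The model and all parameter values below are a generic sketch, not the thesis's network:

```python
def lif_with_sfa(g_adapt, T=2000.0, dt=0.1, I=2.0):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptation
    current w. A larger g_adapt means stronger spike-frequency adaptation
    (as under low ACh, when muscarinic-sensitive adaptation currents are
    left active). Returns the total spike count over T ms."""
    v, w, spikes = 0.0, 0.0, 0
    tau_m, tau_w, v_th, dw = 10.0, 100.0, 1.0, 1.0
    for _ in range(int(T / dt)):
        v += dt * (-v / tau_m + I - g_adapt * w)  # membrane potential
        w += dt * (-w / tau_w)                    # adaptation decays slowly
        if v >= v_th:                             # spike: reset and adapt
            v = 0.0
            w += dw
            spikes += 1
    return spikes

# Suppressing adaptation (high ACh) yields a higher sustained firing rate:
rate_high_ach = lif_with_sfa(g_adapt=0.0)
rate_low_ach = lif_with_sfa(g_adapt=0.5)
```

Under this sketch, the adapted neuron fires briskly at stimulus onset and then slows as `w` builds up, the activity-dependent decrease in excitability the abstract describes.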
Self-learning Machines based on Hamiltonian Echo Backpropagation
A physical self-learning machine can be defined as a nonlinear dynamical system that can be trained on data (similar to artificial neural networks), but where the update of the internal degrees of freedom that serve as learnable parameters happens autonomously. In this way, neither external processing and feedback nor knowledge of (and control of) these internal degrees of freedom is required. We introduce a general scheme for self-learning in any time-reversible Hamiltonian system. We illustrate the training of such a self-learning machine numerically for the case of coupled nonlinear wave fields.
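The time-reversal ("echo") ingredient of such a scheme can be illustrated with a toy Hamiltonian system: integrate forward with a symplectic method, flip the momentum, and integrate forward again to return to the initial state. This only demonstrates time-reversibility, not the authors' training procedure; the double-well potential and step sizes are arbitrary choices:

```python
def leapfrog(q, p, grad_V, dt, steps):
    """Symplectic leapfrog integration of Hamilton's equations for
    H = p^2/2 + V(q); time-reversible up to floating-point round-off."""
    for _ in range(steps):
        p -= 0.5 * dt * grad_V(q)  # half kick
        q += dt * p                # drift
        p -= 0.5 * dt * grad_V(q)  # half kick
    return q, p

grad_V = lambda q: q**3 - q        # gradient of a nonlinear double-well V(q)
q0, p0 = 0.7, 0.3
q1, p1 = leapfrog(q0, p0, grad_V, dt=0.01, steps=5000)

# "Echo": flip the momentum and run forward again -> back to the start.
q2, p2 = leapfrog(q1, -p1, grad_V, dt=0.01, steps=5000)
assert abs(q2 - q0) < 1e-6 and abs(-p2 - p0) < 1e-6
```

The echo returns the system to its initial phase-space point, which is the property a Hamiltonian echo backpropagation scheme exploits.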
The spike-timing-dependent learning rule to encode spatiotemporal patterns in a network of spiking neurons
We study associative memory neural networks based on Hodgkin-Huxley-type
spiking neurons. We introduce a spike-timing-dependent learning rule, in
which a time window with both a negative and a positive part is
used to describe biologically plausible synaptic plasticity. The learning
rule is applied to encode a number of periodic spatiotemporal patterns, which
are successfully reproduced in the periodic firing patterns of spiking neurons
during memory retrieval. Global inhibition is incorporated into
the model so as to induce gamma oscillation. The occurrence of gamma
oscillation turns out to give appropriate spike timings for memory retrieval of
the discrete type of spatiotemporal patterns. A theoretical analysis to elucidate
the stationary properties of the perfect retrieval state is conducted in the limit
of an infinite number of neurons and shows good agreement with the results
of numerical simulations. This analysis indicates that the
presence of both negative and positive parts in the time window
helps reduce the size of the crosstalk term, implying that a time window
with negative and positive parts is well suited to encoding a number of
spatiotemporal patterns. We draw phase diagrams in which we find various
types of phase transitions as the intensity of global inhibition changes.
Comment: Accepted for publication in Physical Review
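The time window with negative and positive parts described above is commonly written as a pair of exponentials: potentiation when the presynaptic spike leads the postsynaptic one, depression when it lags. A sketch with illustrative amplitudes and time constants (not the paper's values):

```python
import numpy as np

def stdp_window(dt, A_plus=1.0, A_minus=0.6, tau_plus=20.0, tau_minus=30.0):
    """Asymmetric STDP time window. dt = t_post - t_pre in ms:
    dt > 0 (pre before post) gives potentiation, the positive part;
    dt < 0 (post before pre) gives depression, the negative part."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

assert stdp_window(5.0) > 0    # pre-before-post -> potentiation
assert stdp_window(-5.0) < 0   # post-before-pre -> depression
```

The depressing lobe is what cancels part of the crosstalk between stored patterns in analyses of this kind.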
Persistence in complex systems
Persistence is an important characteristic of many complex systems in nature, related to how long the system remains at a certain state before changing to a different one. The study of complex systems' persistence involves different definitions and uses different techniques, depending on whether short-term or long-term persistence is considered. In this paper we discuss the most important definitions, concepts, methods, literature and latest results on persistence in complex systems. Firstly, the most used definitions of persistence in short-term and long-term cases are presented. The most relevant methods to characterize persistence are then discussed in both cases. A complete literature review is also carried out. We also present and discuss some relevant results on persistence, and give empirical evidence of performance in different detailed case studies, for both short-term and long-term persistence. A perspective on the future of persistence concludes the work.
This research has been partially supported by the project PID2020-115454GB-C21 of the Spanish Ministry of Science
and Innovation (MICINN). This research has also been partially supported by Comunidad de Madrid, PROMINT-CM
project (grant ref: P2018/EMT-4366). J. Del Ser would like to thank the Basque Government for its funding support
through the EMAITEK and ELKARTEK programs (3KIA project, KK-2020/00049), as well as the consolidated research group
MATHMODE (ref. T1294-19). GCV work is supported by the European Research Council (ERC) under the ERC-CoG-2014
SEDAL Consolidator grant (grant agreement 647423)
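As a concrete example of a short-term persistence measure of the kind surveyed above, the lag-1 autocorrelation of an AR(1) process recovers its persistence parameter; the process and parameter value below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def lag1_autocorr(x):
    """Lag-1 autocorrelation: a simple short-term persistence measure."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# AR(1) process x_t = phi * x_{t-1} + noise: its lag-1 autocorrelation
# equals phi, so a larger phi means a more persistent system.
phi, T = 0.9, 50000
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.standard_normal()

phi_hat = lag1_autocorr(x)  # should be close to phi = 0.9
```

Long-term persistence, by contrast, is usually characterized by slower-than-exponential autocorrelation decay, e.g. via the Hurst exponent.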