Dynamical Systems on Networks: A Tutorial
We give a tutorial for the study of dynamical systems on networks. We focus
especially on "simple" situations that are tractable analytically, because they
can be very insightful and provide useful springboards for the study of more
complicated scenarios. We briefly motivate why examining dynamical systems on
networks is interesting and important, and we then give several fascinating
examples and discuss some theoretical results. We also briefly discuss
dynamical systems on dynamical (i.e., time-dependent) networks, overview
software implementations, and give an outlook on the field.
Comment: 39 pages, 1 figure, submitted; more examples and discussion than the
original version, some reorganization, and more pointers to interesting
directions.
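A minimal instance of the "simple, analytically tractable" dynamics the tutorial advocates can be sketched as follows. This toy is ours, not an example taken from the paper: linear diffusion (consensus) dynamics dx_i/dt = Σ_j A_ij (x_j − x_i) on a small undirected graph, integrated with forward Euler; on any connected graph the node states converge to the average of the initial conditions.

```python
# Toy sketch (ours, not from the tutorial): graph-Laplacian diffusion
# dynamics x' = -L x, integrated with forward Euler. The adjacency is a
# plain dict of neighbour lists, so no graph library is needed.

def simulate_consensus(adj, x0, dt=0.01, steps=2000):
    """Euler-integrate consensus dynamics; returns the final node states."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        # synchronous update: each node moves toward its neighbours' states
        x = [x[i] + dt * sum(x[j] - x[i] for j in adj[i]) for i in range(n)]
    return x

# 4-node ring graph, all initial "mass" on node 0
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = simulate_consensus(ring, [1.0, 0.0, 0.0, 0.0])
# all states approach the average of the initial conditions (0.25 here)
```

The Euler step is stable here because dt is well below 2 divided by the largest Laplacian eigenvalue of the ring; the total "mass" Σ_i x_i is conserved exactly by the symmetric coupling.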
Finite-size and correlation-induced effects in Mean-field Dynamics
The brain's activity is characterized by the interaction of a very large
number of neurons that are strongly affected by noise. However, signals often
arise at macroscopic scales integrating the effect of many neurons into a
reliable pattern of activity. In order to study such large neuronal assemblies,
one is often led to derive mean-field limits summarizing the effect of the
interaction of a large number of neurons into an effective signal. Classical
mean-field approaches consider the evolution of a deterministic variable, the
mean activity, thus neglecting the stochastic nature of neural behavior. In
this article, we build upon two recent approaches that include correlations and
higher order moments in mean-field equations, and study how these stochastic
effects influence the solutions of the mean-field equations, both in the limit
of an infinite number of neurons and for large yet finite networks. We
introduce a new model, the infinite model, which arises from both sets of
equations through a rescaling of the variables; this rescaling is invertible
for finite-size networks and hence yields equations equivalent to the
previously derived models. The study of this model allows us to understand the
qualitative behavior of such
large-scale networks. We show that, though the solutions of the deterministic
mean-field equation constitute uncorrelated solutions of the new mean-field
equations, the stability properties of limit cycles are modified by the
presence of correlations, and additional non-trivial behaviors including
periodic orbits appear when there were none in the mean field. The origin of
all these behaviors is then explored in finite-size networks where interesting
mesoscopic scale effects appear. This study leads us to show that the
infinite-size system appears as a singular limit of the network equations, and
for any finite network, the system will differ from the infinite system.
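The contrast between a deterministic mean-field equation and a finite noisy network can be illustrated with a toy model of our own choosing (not the paper's equations): N all-to-all coupled rate units dx_i = (−x_i + tanh(J·mean(x))) dt + σ dW_i, whose empirical mean fluctuates around the deterministic mean-field trajectory m' = −m + tanh(J m) with deviations of order 1/√N.

```python
import math
import random

# Hedged toy (our model, not the article's): N noisy rate units coupled
# through their mean, versus the deterministic mean-field ODE that
# neglects the noise.

def simulate_network(n, J=0.5, sigma=0.3, dt=0.01, steps=1000, seed=0):
    """Euler-Maruyama simulation; returns the final empirical mean activity."""
    rng = random.Random(seed)
    x = [0.5] * n
    for _ in range(steps):
        drive = math.tanh(J * (sum(x) / n))
        x = [xi + dt * (-xi + drive) + sigma * math.sqrt(dt) * rng.gauss(0, 1)
             for xi in x]
    return sum(x) / n

def mean_field(J=0.5, dt=0.01, steps=1000):
    """Deterministic mean-field limit m' = -m + tanh(J m), same initial value."""
    m = 0.5
    for _ in range(steps):
        m += dt * (-m + math.tanh(J * m))
    return m

# for J < 1 both relax toward m = 0; the finite network hovers near the
# mean-field value with O(1/sqrt(N)) fluctuations
```

For large N the empirical mean tracks the mean-field solution closely, while for small N the fluctuations become visible, which is the regime where the moment equations discussed above matter.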
Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model
The occurrence of sleep passed through the evolutionary sieve and is
widespread in animal species. Sleep is known to be beneficial to cognitive and
mnemonic tasks, while chronic sleep deprivation is detrimental. Despite the
importance of the phenomenon, a complete understanding of its functions and
underlying mechanisms is still lacking. In this paper, we show interesting
effects of deep-sleep-like slow oscillation activity on a simplified
thalamo-cortical model which is trained to encode, retrieve and classify images
of handwritten digits. During slow oscillations,
spike-timing-dependent-plasticity (STDP) produces a differential homeostatic
process. It is characterized by both a specific unsupervised enhancement of
connections among groups of neurons associated to instances of the same class
(digit) and a simultaneous down-regulation of stronger synapses created by the
training. This hierarchical organization of post-sleep internal representations
favours better performance in retrieval and classification tasks. The
mechanism is based on the interaction between top-down cortico-thalamic
predictions and bottom-up thalamo-cortical projections during deep-sleep-like
slow oscillations. Indeed, when learned patterns are replayed during sleep,
cortico-thalamo-cortical connections favour the activation of other neurons
coding for similar thalamic inputs, promoting their association. Such a
mechanism hints at possible applications to artificial learning systems.
Comment: 11 pages, 5 figures; v5 is the final version published in the
Scientific Reports journal.
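The STDP rule mentioned above can be sketched in its generic pair-based form. This is the textbook rule, not the paper's exact model: a synapse is potentiated when the presynaptic spike precedes the postsynaptic one (Δt = t_post − t_pre > 0) and depressed otherwise, with exponentially decaying windows.

```python
import math

# Generic pair-based STDP window (textbook form, not the paper's exact
# parameters): causal spike pairs potentiate, anti-causal pairs depress.

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair; dt_ms = t_post - t_pre."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)    # pre before post: potentiate
    return -a_minus * math.exp(dt_ms / tau)       # post before pre: depress

# tight causal pairings change the weight more than loose ones
```

With a_minus slightly larger than a_plus the rule is depression-dominated on average, which is one common way to obtain the homeostatic down-regulation of strong synapses described in the abstract.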
Interference-Mitigating Waveform Design for Next-Generation Wireless Systems
A brief historical perspective of the evolution of waveform designs employed in consecutive generations of wireless communications systems is provided, highlighting the range of often conflicting demands on the various waveform characteristics. As the culmination of recent advances in the field, the underlying benefits of various Multiple Input Multiple Output (MIMO) schemes are highlighted and exemplified. As an integral part of the appropriate waveform design, cognizance is given to the particular choice of the duplexing scheme used for supporting full-duplex communications and it is demonstrated that Time Division Duplexing (TDD) is substantially outperformed by Frequency Division Duplexing (FDD), unless the TDD scheme is combined with further sophisticated scheduling, MIMOs and/or adaptive modulation/coding. It is also argued that the specific choice of the Direct-Sequence (DS) spreading codes invoked in DS-CDMA predetermines the properties of the system. It is demonstrated that a specifically designed family of spreading codes exhibits a so-called interference-free window (IFW) and hence the resultant system is capable of outperforming its standardised counterpart employing classic Orthogonal Variable Spreading Factor (OVSF) codes under realistic dispersive channel conditions, provided that the interfering multi-user and multipath components arrive within this IFW. This condition may be ensured with the aid of quasi-synchronous adaptive timing advance control. However, a limitation of the system is that the number of spreading codes exhibiting a certain IFW is limited, although this problem may be mitigated with the aid of novel code design principles, employing a combination of several spreading sequences in the time, frequency, and spatial domains.
The paper is concluded by quantifying the achievable user load of a UTRA-like TDD Code Division Multiple Access (CDMA) system employing Loosely Synchronized (LS) spreading codes exhibiting an IFW in comparison to that of its counterpart using OVSF codes. Both systems' performance is enhanced using beamforming MIMOs.
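The role of the spreading-code choice can be made concrete with a small correlation check (our illustration, not the paper's code construction). Classic orthogonal Walsh/OVSF-style codes are orthogonal only at zero chip offset, which is exactly why delayed multipath components destroy their orthogonality; IFW codes are designed to keep correlations at zero over a window of offsets.

```python
# Hedged illustration: periodic cross-correlation of two bipolar spreading
# codes. The helper name and the length-4 Walsh codes are our own toy
# choices; real OVSF/LS code families are longer and constructed differently.

def periodic_xcorr(a, b, tau):
    """Periodic cross-correlation of equal-length bipolar codes at chip offset tau."""
    n = len(a)
    return sum(a[i] * b[(i + tau) % n] for i in range(n))

walsh_a = [1, 1, -1, -1]   # two length-4 Walsh codes
walsh_b = [1, -1, -1, 1]
# orthogonal at zero offset, but correlated at a one-chip offset, so a
# one-chip multipath delay already causes multi-user interference
```

An IFW code family would instead keep `periodic_xcorr` zero for all offsets |tau| within the window, which is the property exploited by the quasi-synchronous timing-advance scheme described above.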
Local ensemble transform Kalman filter, a fast non-stationary control law for adaptive optics on ELTs: theoretical aspects and first simulation results
We propose a new algorithm for an adaptive optics system control law, based
on the Linear Quadratic Gaussian approach and a Kalman Filter adaptation with
localizations. It makes it possible to handle non-stationary behaviors, to obtain
performance close to the optimality defined with the residual phase variance
minimization criterion, and to reduce the computational burden with an
intrinsically parallel implementation on the Extremely Large Telescopes (ELTs).
Comment: This paper was published in Optics Express and is made available as
an electronic reprint with the permission of OSA. The paper can be found at
the following URL on the OSA website: http://www.opticsinfobase.org/oe/ .
Systematic or multiple reproduction or distribution to multiple locations via
electronic or other means is prohibited and is subject to penalties under law.
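The Kalman filter at the core of such an LQG control law can be sketched in its simplest scalar form. This is the textbook predict/update cycle, not the paper's localized ensemble transform variant, which is precisely the machinery introduced to make this cycle tractable at ELT scales.

```python
# Scalar Kalman filter step (textbook form; the paper's LETKF is an ensemble,
# localized generalization of this). State model: x_{k+1} = a*x_k + w with
# Var(w) = q; measurement: y_k = x_k + v with Var(v) = r.

def kalman_step(x_est, p_est, y, a=1.0, q=0.01, r=0.1):
    """One predict + update cycle; returns the new estimate and its variance."""
    # predict through the state model
    x_pred = a * x_est
    p_pred = a * p_est * a + q
    # update with the measurement y
    gain = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + gain * (y - x_pred)
    p_new = (1.0 - gain) * p_pred
    return x_new, p_new

# fed a constant measurement, the estimate converges to it and the
# posterior variance settles to a small steady-state value
```

The residual-phase-variance criterion mentioned in the abstract corresponds to driving the deformable mirror with the state estimate produced by this recursion; localization then restricts each update to a neighbourhood of modes so the cost scales to ELT dimensions.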
Integration of continuous-time dynamics in a spiking neural network simulator
Contemporary modeling approaches to the dynamics of neural networks consider
two main classes of models: biologically grounded spiking neurons and
functionally inspired rate-based units. The unified simulation framework
presented here supports the combination of the two for multi-scale modeling
approaches, the quantitative validation of mean-field approaches by spiking
network simulations, and an increase in reliability by usage of the same
simulation code and the same network model specifications for both model
classes. While the most efficient spiking simulations rely on the communication of
discrete events, rate models require time-continuous interactions between
neurons. Exploiting the conceptual similarity to the inclusion of gap junctions
in spiking network simulations, we arrive at a reference implementation of
instantaneous and delayed interactions between rate-based models in a spiking
network simulator. The separation of rate dynamics from the general connection
and communication infrastructure ensures flexibility of the framework. We
further demonstrate the broad applicability of the framework by considering
various examples from the literature ranging from random networks to neural
field models. The study provides the prerequisite for interactions between
rate-based and spiking models in a joint simulation.
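The time-continuous interaction mentioned above can be sketched for the simplest case, a linear rate unit. The exponential-propagator update below is a standard way to advance τ·dx/dt = −x + μ exactly over one communication interval h; it is our simplification, not the simulator's actual reference implementation.

```python
import math

# Hedged sketch: a linear rate unit tau * dx/dt = -x + mu, advanced with the
# exact exponential propagator so the grid step h can match the simulator's
# communication interval without integration error for the linear part.

def rate_step(x, mu, h=0.1, tau=10.0):
    """Advance x by one step h using the exact solution of tau*x' = -x + mu."""
    prop = math.exp(-h / tau)
    return x * prop + mu * (1.0 - prop)

x = 0.0
for _ in range(1000):
    x = rate_step(x, 1.0)   # relaxes to the fixed point x = mu
```

Because the propagator is exact for constant input, the only approximation in a coupled network is holding the other units' rates fixed over each interval h, which mirrors how delayed and instantaneous interactions are scheduled in an event-driven framework.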
Chaos and correlated avalanches in excitatory neural networks with synaptic plasticity
A collective chaotic phase with power law scaling of activity events is
observed in a disordered mean field network of purely excitatory leaky
integrate-and-fire neurons with short-term synaptic plasticity. The dynamical
phase diagram exhibits two transitions from quasi-synchronous and asynchronous
regimes to the nontrivial, collective, bursty regime with avalanches. In the
homogeneous case without disorder, the system synchronizes and the bursty
behavior is reflected in a period-doubling transition to chaos of a
two-dimensional discrete map. Numerical simulations show that the bursty chaotic
phase with avalanches exhibits a spontaneous emergence of time correlations and
enhanced Kolmogorov complexity. Our analysis reveals a mechanism for the
generation of irregular avalanches that emerges from the combination of
disorder and deterministic underlying chaotic dynamics.
Comment: 5 pages, 5 figures; SI 26 pages, 14 figures. Improved editing, 3
subsections added in SI.
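The two key ingredients named in the abstract, leaky integrate-and-fire dynamics and short-term synaptic plasticity, can be combined in a minimal single-neuron sketch. The model below is our toy (Tsodyks-Markram-style depression of a resource variable), not the paper's network: each spike consumes part of the synaptic resource, which recovers slowly, so strong drive self-limits and inter-spike intervals stretch.

```python
# Hedged toy (ours, not the paper's network model): one leaky
# integrate-and-fire unit driven through a depressing synapse. Each spike
# multiplies the resource r by (1 - use); r recovers toward 1 with tau_d.

def lif_with_depression(i_ext=2.0, dt=0.1, t_max=200.0,
                        tau_m=10.0, v_th=1.0, tau_d=50.0, use=0.5):
    """Simulate for t_max ms; returns the list of spike times (ms)."""
    v, r = 0.0, 1.0                      # membrane potential, synaptic resource
    spikes = []
    t = 0.0
    while t < t_max:
        v += dt * (-v + r * i_ext) / tau_m   # leaky integration of depressed drive
        r += dt * (1.0 - r) / tau_d          # slow resource recovery
        if v >= v_th:                        # threshold crossing: emit a spike
            spikes.append(t)
            v = 0.0                          # reset the membrane
            r *= (1.0 - use)                 # depress the synapse
        t += dt
    return spikes

# the first inter-spike interval (full resource) is the shortest; depression
# then lengthens subsequent intervals
```

In the network setting of the paper this self-limiting drive interacts with disorder in the couplings, which is the combination identified above as the origin of irregular avalanches.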
Degeneracy: a design principle for achieving robustness and evolvability
Robustness, the insensitivity of some of a biological system's
functionalities to a set of distinct conditions, is intimately linked to
fitness. Recent studies suggest that it may also play a vital role in enabling
the evolution of species. Increasing robustness, it has been proposed, can lead to
the emergence of evolvability if evolution proceeds over a neutral network that
extends far throughout the fitness landscape. Here, we show that the design
principles used to achieve robustness dramatically influence whether robustness
leads to evolvability. In simulation experiments, we find that purely redundant
systems have remarkably low evolvability while degenerate, i.e. partially
redundant, systems tend to be orders of magnitude more evolvable. Surprisingly,
the magnitude of the observed variation in evolvability can be explained neither
by differences in the size nor in the topology of the neutral networks. This suggests
that degeneracy, a ubiquitous characteristic in biological systems, may be an
important enabler of natural evolution. More generally, our study provides
valuable new clues about the origin of innovations in complex adaptive systems.
Comment: Accepted in the Journal of Theoretical Biology (Nov 2009).