From Network Structure to Dynamics and Back Again: Relating dynamical stability and connection topology in biological complex systems
The recent discovery of universal principles underlying many complex networks
occurring across a wide range of length scales in the biological world has
spurred physicists to try to understand such features using techniques from
statistical physics and non-linear dynamics. In this paper, we look at a few
examples of biological networks to see how similar questions can come up in
very different contexts. We review some of our recent work that looks at how
network structure (e.g., its connection topology) can dictate the nature of its
dynamics, and conversely, how dynamical considerations constrain the network
structure. We also see how networks occurring in nature can evolve to modular
configurations as a result of simultaneously trying to satisfy multiple
structural and dynamical constraints. The resulting optimal networks possess
hubs and have heterogeneous degree distribution similar to those seen in
biological systems.
Comment: 15 pages, 6 figures, to appear in Proceedings of "Dynamics On and Of Complex Networks", ECSS'07 Satellite Workshop, Dresden, Oct 1-5, 2007
Transition to chaos in random neuronal networks
Firing patterns in the central nervous system often exhibit strong temporal
irregularity and heterogeneity in their time averaged response properties.
Previous studies suggested that these properties are the outcome of intrinsic chaotic dynamics. Indeed, simplified rate-based large neuronal networks with random synaptic connections are known to exhibit a sharp transition from a fixed point to chaotic dynamics when the synaptic gain is increased. However, the
existence of a similar transition in neuronal circuit models with more
realistic architectures and firing dynamics has not been established.
In this work we investigate the rate-based dynamics of neuronal circuits composed of several subpopulations with random connectivity. Nonzero connections are either positive (for excitatory neurons) or negative (for inhibitory ones), while single-neuron output is strictly positive, in line with known constraints in many biological systems. Using Dynamic Mean Field Theory, we find the phase diagram depicting the regimes of stable fixed point, unstable dynamics, and
chaotic rate fluctuations. We characterize the properties of systems near the
chaotic transition and show that dilute excitatory-inhibitory architectures
exhibit the same onset to chaos as a network with Gaussian connectivity.
Interestingly, the critical properties near transition depend on the shape of
the single-neuron input-output transfer function near the firing threshold.
Finally, we investigate network models with spiking dynamics. When synaptic
time constants are slow relative to the mean inverse firing rates, the network
undergoes a sharp transition from fast spiking fluctuations and static firing
rates to a state with slow chaotic rate fluctuations. When the synaptic time
constants are finite, the transition becomes smooth and obeys scaling
properties, similar to crossover phenomena in statistical mechanics.
Comment: 28 pages, 12 figures, 5 appendices
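The gain-driven transition to chaos described above can be illustrated with the classic rate model with Gaussian random connectivity. This is a minimal sketch assuming a tanh transfer function and illustrative parameters; the paper's excitatory-inhibitory, strictly-positive-output variant is not reproduced here:

```python
import numpy as np

def simulate_rate_network(g, N=200, T=200.0, dt=0.1, seed=0):
    """Euler-integrate dx/dt = -x + g*J*tanh(x) with J_ij ~ N(0, 1/N)."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 1.0, N)
    for _ in range(int(T / dt)):
        x = x + dt * (-x + g * J @ np.tanh(x))
    return x

# Below g = 1 the zero fixed point is stable and activity decays;
# above it the network settles into chaotic rate fluctuations.
x_sub = simulate_rate_network(g=0.5)
x_super = simulate_rate_network(g=1.5)
```

Sweeping `g` through 1 and monitoring the activity variance locates the sharp transition the abstract refers to.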
Diffusion-driven instabilities and emerging spatial patterns in patchy landscapes
Spatial variation in population densities across a landscape is a feature of many ecological systems, from
self-organised patterns on mussel beds to spatially restricted insect outbreaks. It occurs as a result of
environmental variation in abiotic factors and/or biotic factors structuring the spatial distribution of
populations. However, the ways in which abiotic and biotic factors interact to determine the existence and nature of spatial patterns in population density remain poorly understood. Here we present a new approach to studying this question by analysing a predator–prey patch model in a heterogeneous landscape. We use analytical and numerical methods originally developed for studying nearest-neighbour (juxtacrine) signalling in epithelia to explore whether and under which conditions patterns
emerge. We find that abiotic and biotic factors interact to promote pattern formation. In fact, we find a
rich and highly complex array of coexisting stable patterns, located within an enormous number of
unstable patterns. Our simulation results indicate that many of the stable patterns have appreciable
basins of attraction, making them significant in applications. We are able to identify mechanisms for
these patterns based on the classical ideas of long-range inhibition and short-range activation, whereby
landscape heterogeneity can modulate the spatial scales at which these processes operate to structure
the populations.
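The diffusion-driven (Turing-type) instability behind such patterns can be sketched by linear stability analysis on a ring of patches. The Jacobian and dispersal rates below are hypothetical illustration values, not taken from the paper; the point is that a spatially uniform steady state that is stable without dispersal can be destabilized when the predator disperses much faster than the prey:

```python
import numpy as np

# Hypothetical Jacobian of a patch-level predator-prey model at its
# homogeneous steady state (illustrative numbers, not from the paper).
A = np.array([[1.0, -2.0],
              [3.0, -4.0]])
# Dispersal rates: prey moves slowly, predator quickly (long-range inhibition).
D = np.diag([1.0, 20.0])

def ring_laplacian(n):
    """Graph Laplacian of a ring of n patches with nearest-neighbour dispersal."""
    L = 2.0 * np.eye(n)
    for i in range(n):
        L[i, (i - 1) % n] -= 1.0
        L[i, (i + 1) % n] -= 1.0
    return L

def max_growth_rates(A, D, n_patches=20):
    """For each Laplacian mode mu, the largest real part of eig(A - mu*D)."""
    mus = np.linalg.eigvalsh(ring_laplacian(n_patches))
    growth = np.array([np.linalg.eigvals(A - mu * D).real.max() for mu in mus])
    return mus, growth

mus, growth = max_growth_rates(A, D)
```

The mode at `mu = 0` is the spatially uniform one; a positive growth rate at any `mu > 0` signals an emerging spatial pattern, the discrete analogue of short-range activation with long-range inhibition.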
Stability as a natural selection mechanism on interacting networks
Biological networks of interacting agents exhibit similar topological
properties for a wide range of scales, from cellular to ecological levels,
suggesting the existence of a common evolutionary origin. A general
evolutionary mechanism based on global stability has been proposed recently [J
I Perotti, O V Billoni, F A Tamarit, D R Chialvo, S A Cannas, Phys. Rev. Lett.
103, 108701 (2009)]. This mechanism is incorporated into a model of a growing
network of interacting agents in which each new agent's membership in the
network is determined by the agent's effect on the network's global stability.
We show that, out of this stability constraint, several topological properties observed in biological networks emerge in a self-organized manner. The influence of the stability selection mechanism on the dynamics associated with the resulting network is analyzed as well.
Comment: 10 pages, 9 figures
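A loose sketch of stability-based selection during growth, assuming (as a simplification, not the paper's exact rules) that a candidate agent joins only if the linearized dynamics dx/dt = (W - I)x remain stable:

```python
import numpy as np

def grow_stable_network(target_size=20, sigma=0.4, max_attempts=5000, seed=1):
    """Grow an interaction matrix one agent at a time, accepting a candidate
    only if all eigenvalues of W - I keep negative real parts, i.e. the
    linearized dynamics dx/dt = (W - I) x stay globally stable."""
    rng = np.random.default_rng(seed)
    W = np.zeros((1, 1))
    attempts = 0
    while W.shape[0] < target_size and attempts < max_attempts:
        attempts += 1
        n = W.shape[0]
        Wn = np.zeros((n + 1, n + 1))
        Wn[:n, :n] = W
        Wn[n, :n] = rng.normal(0.0, sigma, n)   # candidate's outgoing couplings
        Wn[:n, n] = rng.normal(0.0, sigma, n)   # couplings onto the candidate
        if np.linalg.eigvals(Wn - np.eye(n + 1)).real.max() < 0:
            W = Wn                              # accept: stability preserved
    return W, attempts

W, attempts = grow_stable_network()
```

Tracking which candidates are rejected as the network grows is the kind of experiment from which the self-organized topological properties are read off.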
Synchronization in complex networks
Synchronization processes in populations of locally interacting elements are
in the focus of intense research in physical, biological, chemical,
technological and social systems. The many efforts devoted to understanding synchronization phenomena in natural systems now take advantage of the recent
theory of complex networks. In this review, we report the advances in the
comprehension of synchronization phenomena when oscillating elements are
constrained to interact in a complex network topology. We also overview the new
emergent features coming out from the interplay between the structure and the
function of the underlying pattern of connections. Extensive numerical work as
well as analytical approaches to the problem are presented. Finally, we review
several applications of synchronization in complex networks to different
disciplines: biological systems and neuroscience, engineering and computer
science, and economics and social sciences.
Comment: Final version published in Physics Reports. More information available at http://synchronets.googlepages.com
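The basic object of this literature, phase oscillators coupled through a network, can be sketched with the Kuramoto model. The all-to-all graph below is an illustrative placeholder; substituting the adjacency matrix of a scale-free or small-world graph is exactly the move the review analyzes:

```python
import numpy as np

def kuramoto_order(K, A, omega, T=50.0, dt=0.05, seed=0):
    """Integrate Kuramoto phase oscillators coupled through adjacency matrix A
    and return the order parameter r, time-averaged over the second half."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, len(omega))
    rs = []
    n_steps = int(T / dt)
    for step in range(n_steps):
        # coupling_i = sum_j A_ij * sin(theta_j - theta_i)
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + K * coupling)
        if step > n_steps // 2:
            rs.append(abs(np.exp(1j * theta).mean()))
    return float(np.mean(rs))

n = 50
A = (np.ones((n, n)) - np.eye(n)) / n            # all-to-all, 1/n normalized
omega = np.random.default_rng(1).normal(0.0, 0.5, n)
```

Weak coupling leaves the phases incoherent (r near 0), strong coupling locks them (r near 1); how the network topology shifts that threshold is the review's central theme.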
Dynamics of Unperturbed and Noisy Generalized Boolean Networks
For years, we have been building models of gene regulatory networks, where
recent advances in molecular biology shed some light on new structural and
dynamical properties of such highly complex systems. In this work, we propose a
novel timing of updates in Random and Scale-Free Boolean Networks, inspired by
recent findings in molecular biology. This update sequence is neither fully
synchronous nor asynchronous, but rather takes into account the sequence in
which genes affect each other. We have used both Kauffman's original model and
Aldana's extension, which takes into account the structural properties of
known parts of actual GRNs, where the degree distribution is right-skewed and
long-tailed. The computer simulations of the dynamics of the new model compare
favorably to the original ones and show biologically plausible results both in
terms of attractor number and length. We have complemented this study with a complete analysis of our systems' stability under transient perturbations, which is one of biological networks' defining attributes. Results are encouraging, as our model shows comparable and usually even better behavior than preceding ones without losing Boolean networks' attractive simplicity.
Comment: 29 pages, published
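For reference, Kauffman's original synchronous NK model mentioned above can be sketched in a few lines; the paper's semi-synchronous update sequence and scale-free topology are not reproduced in this simplified baseline:

```python
import itertools, random

def random_bn(n, k, seed=0):
    """Kauffman NK random Boolean network: each node reads k randomly chosen
    nodes through a random Boolean function (a lookup table of 2**k bits)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """One fully synchronous update of all nodes."""
    return tuple(tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                 for i in range(len(state)))

def attractor_length(state, inputs, tables):
    """Iterate until a state repeats; return the cycle length reached."""
    seen, t = {}, 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]

inputs, tables = random_bn(n=12, k=2)
lengths = {attractor_length(s, inputs, tables)
           for s in itertools.product((0, 1), repeat=12)}
```

Attractor number and length, the statistics the abstract compares across models, are read directly from exhaustive sweeps like this one (feasible here because the state space has only 2**12 states).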
Invariant template matching in systems with spatiotemporal coding: a vote for instability
We consider the design of a pattern recognition system that matches templates to
images, both of which are spatially sampled and encoded as temporal sequences.
The image is subject to a combination of various perturbations. These include
ones that can be modeled as parameterized uncertainties such as image blur,
luminance, translation, and rotation as well as unmodeled ones. Biological and
neural systems require that these perturbations be processed through a minimal
number of channels by simple adaptation mechanisms. We found that the most
suitable mathematical framework to meet this requirement is that of weakly
attracting sets. This framework provides us with a normative and unifying
solution to the pattern recognition problem. We analyze the consequences of its
explicit implementation in neural systems. Several properties inherent to the
systems designed in accordance with our normative mathematical argument
coincide with known empirical facts. This is illustrated in mental rotation,
visual search and blur/intensity adaptation. We demonstrate how our results can
be applied to a range of practical problems in template matching and pattern
recognition.
Comment: 52 pages, 12 figures
Synchronization and Noise: A Mechanism for Regularization in Neural Systems
To learn and reason in the presence of uncertainty, the brain must be capable
of imposing some form of regularization. Here we suggest, through theoretical
and computational arguments, that the combination of noise with synchronization
provides a plausible mechanism for regularization in the nervous system. The
functional role of regularization is considered in a general context in which
coupled computational systems receive inputs corrupted by correlated noise.
Noise on the inputs is shown to impose regularization, and when synchronization
upstream induces time-varying correlations across noise variables, the degree
of regularization can be calibrated over time. The proposed mechanism is
explored first in the context of a simple associative learning problem, and
then in the context of a hierarchical sensory coding task. The resulting
qualitative behavior coincides with experimental data from visual cortex.
Comment: 32 pages, 7 figures; under review
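The premise that input noise imposes regularization can be illustrated in the simplest linear setting: least squares trained on noise-corrupted inputs behaves, in expectation, like ridge regression on clean inputs. This textbook correspondence is used here only to illustrate the abstract's starting point, not its synchronization mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 20000, 5, 0.3
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(0.0, 0.1, n)

# (a) Plain least squares, but trained on noise-corrupted inputs.
Xn = X + rng.normal(0.0, sigma, X.shape)
w_noisy = np.linalg.lstsq(Xn, y, rcond=None)[0]

# (b) Ridge regression on the clean inputs with lambda = n * sigma**2,
# since E[Xn.T @ Xn] = X.T @ X + n * sigma**2 * I.
lam = n * sigma ** 2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

The two weight vectors nearly coincide, and both are shrunk relative to ordinary least squares; modulating the noise variance over time, as synchronization-induced correlations would, then amounts to calibrating the regularization strength.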
Synchronization and Redundancy: Implications for Robustness of Neural Learning and Decision Making
Learning and decision making in the brain are key processes critical to survival, yet they are implemented by non-ideal biological building blocks which can impose significant error. We explore quantitatively how the
brain might cope with this inherent source of error by taking advantage of two
ubiquitous mechanisms, redundancy and synchronization. In particular we
consider a neural process whose goal is to learn a decision function by
implementing a nonlinear gradient dynamics. The dynamics, however, are assumed
to be corrupted by perturbations modeling the error which might be incurred due
to limitations of the biology, intrinsic neuronal noise, and imperfect
measurements. We show that error, and the associated uncertainty surrounding a
learned solution, can be controlled in large part by trading off
synchronization strength among multiple redundant neural systems against the
noise amplitude. The impact of the coupling between such redundant systems is
quantified by the spectrum of the network Laplacian, and we discuss the role of
network topology in synchronization and in reducing the effect of noise. A
range of situations in which the mechanisms we model arise in brain science are
discussed, and we draw attention to experimental evidence suggesting that
cortical circuits capable of implementing the computations of interest here can
be found on several scales. Finally, simulations comparing theoretical bounds
to the relevant empirical quantities show that the theoretical estimates we
derive can be tight.
Comment: Preprint, accepted for publication in Neural Computation
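A minimal sketch of the trade-off described above: redundant noisy gradient systems, diffusively coupled through a graph Laplacian (a complete graph here, chosen purely for illustration), show a steady-state error around the optimum that falls as the coupling strength grows:

```python
import numpy as np

def coupled_noisy_descent(kappa, M=10, gamma=1.0, sigma=1.0,
                          T=200.0, dt=0.01, seed=0):
    """M redundant noisy gradient systems on the quadratic loss x**2/2,
    diffusively coupled through the complete-graph Laplacian; returns the
    time-averaged squared error of one unit around the optimum x* = 0."""
    rng = np.random.default_rng(seed)
    L = M * np.eye(M) - np.ones((M, M))          # complete-graph Laplacian
    x = rng.normal(0.0, 1.0, M)
    errs = []
    n_steps = int(T / dt)
    for step in range(n_steps):
        noise = rng.normal(0.0, sigma * np.sqrt(dt), M)
        x = x + dt * (-gamma * x - kappa * (L @ x)) + noise
        if step > n_steps // 2:
            errs.append(x[0] ** 2)
    return float(np.mean(errs))
```

Swapping in the Laplacian of a sparser graph shows the role of its spectrum: the smallest nonzero eigenvalue controls how effectively coupling averages out the noise, which is the quantity the abstract trades off against noise amplitude.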