Modeling the dynamical interaction between epidemics on overlay networks
Epidemics seldom occur as isolated phenomena. Typically, two or more viral
agents spread within the same host population and may interact dynamically with
each other. We present a general model where two viral agents interact via an
immunity mechanism as they propagate simultaneously on two networks connecting
the same set of nodes. Exploiting a correspondence between the propagation
dynamics and a dynamical process performing progressive network generation, we
develop an analytic approach that accurately captures the dynamical interaction
between epidemics on overlay networks. The formalism allows for overlay
networks with arbitrary joint degree distribution and overlap. To illustrate
the versatility of our approach, we consider a hypothetical delayed
intervention scenario in which an immunizing agent is disseminated in a host
population to hinder the propagation of an undesirable agent (e.g. the spread
of preventive information in the context of an emerging infectious disease).
Comment: Accepted for publication in Phys. Rev. E. 15 pages, 7 figures.
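The coupled dynamics described above can be illustrated with a minimal Monte Carlo sketch, assuming toy ring-like overlay topologies and illustrative rates (this is not the paper's analytic formalism): two SIR-like agents spread on two different edge sets over the same nodes, and any exposure to agent 2 (the "immunizing" agent) blocks infection by agent 1.

```python
# Hedged toy simulation of two interacting agents on overlay networks.
# Topologies, rates, and the "any infected neighbour" infection rule
# are illustrative assumptions, not the paper's model.
import random

random.seed(1)

N = 200
# Two overlay networks on the same node set (toy ring-like topologies).
edges1 = [(i, (i + 1) % N) for i in range(N)] + [(i, (i + 2) % N) for i in range(N)]
edges2 = [(i, (i + 3) % N) for i in range(N)]

def neighbours(edges):
    adj = {i: set() for i in range(N)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

adj1, adj2 = neighbours(edges1), neighbours(edges2)

state1 = {i: "S" for i in range(N)}  # agent 1: undesirable epidemic
state2 = {i: "S" for i in range(N)}  # agent 2: immunizing agent
state1[0] = "I"
state2[N // 2] = "I"

beta1, beta2, mu = 0.3, 0.4, 0.2      # assumed infection/recovery rates
for _ in range(100):                   # synchronous discrete-time updates
    new1, new2 = dict(state1), dict(state2)
    for i in range(N):
        # agent 2 spreads on network 2
        if state2[i] == "S" and any(state2[j] == "I" for j in adj2[i]):
            if random.random() < beta2:
                new2[i] = "I"
        elif state2[i] == "I" and random.random() < mu:
            new2[i] = "R"
        # agent 1 spreads on network 1, blocked by any exposure to agent 2
        immune = state2[i] != "S"      # immunity mechanism (model assumption)
        if state1[i] == "S" and not immune and any(state1[j] == "I" for j in adj1[i]):
            if random.random() < beta1:
                new1[i] = "I"
        elif state1[i] == "I" and random.random() < mu:
            new1[i] = "R"
    state1, state2 = new1, new2

attack1 = sum(s != "S" for s in state1.values()) / N
print(f"final fraction ever infected by agent 1: {attack1:.2f}")
```

Raising `beta2` (faster dissemination of the immunizing agent) shrinks the final attack rate of agent 1, which is the qualitative behaviour the delayed-intervention scenario exploits.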
Time-Polynomial Lieb-Robinson bounds for finite-range spin-network models
The Lieb-Robinson bound sets a theoretical upper limit on the speed at which
information can propagate in non-relativistic quantum spin networks. In its
original version, it results in an exponentially exploding function of the
evolution time, which is partially mitigated by an exponentially decreasing
term that instead depends upon the distance covered by the signal (the ratio
between the two exponents effectively defining an upper bound on the
propagation speed). In the present paper, by properly accounting for the free
parameters of the model, we show how to turn this construction into a stronger
inequality where the upper limit only scales polynomially with respect to the
evolution time. Our analysis applies to any chosen topology of the network, as
long as the range of the associated interaction is explicitly finite. For the
special case of linear spin networks, we also present an alternative derivation
based on a perturbative expansion approach that improves the previous
inequality. In the same context, we also establish a lower bound on the speed of
information spread, which yields a nontrivial result at least in the limit of
small propagation times.
Comment: 10 pages, 5 figures.
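For orientation, the original construction mentioned above is usually written, in one common formulation, as the following inequality (here $A_X(t)$, $B_Y$ are observables supported on regions $X$, $Y$ at distance $d(X,Y)$, and the constants $C$, $\mu$, $v$ depend on the interaction; the exact constants in the paper may differ):

```latex
\left\| \left[ A_X(t), B_Y \right] \right\|
  \;\le\; C \,\|A_X\|\,\|B_Y\|\; e^{-\mu\left( d(X,Y) - v\,|t| \right)}
```

The factor $e^{\mu v |t|}$ is the exponentially exploding function of the evolution time, $e^{-\mu d(X,Y)}$ is the mitigating distance term, and their ratio defines the effective propagation speed $v$; the paper's contribution is to replace the time exponential with polynomial scaling.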
Structural Properties of the Caenorhabditis elegans Neuronal Network
Despite recent interest in reconstructing neuronal networks, complete wiring
diagrams on the level of individual synapses remain scarce and the insights
into function they can provide remain unclear. Even for Caenorhabditis elegans,
whose neuronal network is relatively small and stereotypical from animal to
animal, published wiring diagrams are neither accurate nor complete and
self-consistent. Using materials from White et al. and new electron micrographs
we assemble whole, self-consistent gap junction and chemical synapse networks
of hermaphrodite C. elegans. We propose a method to visualize the wiring
diagram, which reflects network signal flow. We calculate statistical and
topological properties of the network, such as degree distributions, synaptic
multiplicities, and small-world properties, that help in understanding network
signal propagation. We identify neurons that may play central roles in
information processing and network motifs that could serve as functional
modules of the network. We explore propagation of neuronal activity in response
to sensory or artificial stimulation using linear systems theory and find
several activity patterns that could serve as substrates of previously
described behaviors. Finally, we analyze the interaction between the gap
junction and the chemical synapse networks. Since several statistical
properties of the C. elegans network, such as multiplicity and motif
distributions are similar to those found in mammalian neocortex, they likely
point to general principles of neuronal networks. The wiring diagram reported
here can help in understanding the mechanistic basis of behavior by generating
predictions about future experiments involving genetic perturbations, laser
ablations, or monitoring propagation of neuronal activity in response to
stimulation.
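The statistical quantities named above (degree distributions, clustering as used in small-world analyses) can be sketched on a toy undirected graph; this is purely illustrative and does not use the C. elegans wiring data:

```python
# Hedged illustration: degree distribution and clustering coefficient
# on a small hand-made adjacency structure (not the real connectome).
from collections import Counter

adj = {
    0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {3},
}

# degree distribution: how many nodes have each degree
degree_dist = Counter(len(nbrs) for nbrs in adj.values())

def clustering(node):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return 2 * links / (k * (k - 1))

avg_clustering = sum(clustering(n) for n in adj) / len(adj)
print(degree_dist, round(avg_clustering, 3))
```

A high average clustering combined with short path lengths is the standard small-world signature tested for in such analyses.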
Information Flow in Interaction Networks
Interaction networks, consisting of agents linked by their interactions, are
ubiquitous across many disciplines of modern science. Many methods of analysis
of interaction networks have been proposed, mainly concentrating on node degree
distribution or aiming to discover clusters of agents that are very strongly
connected among themselves. These methods are principally based on graph
theory or machine learning.
We present a mathematically simple formalism for modelling context-specific
information propagation in interaction networks based on random walks. The
context is provided by selection of sources and destinations of information and
by use of potential functions that direct the flow towards the destinations. We
also use the concept of dissipation to model the aging of information as it
diffuses from its source.
Using examples from yeast protein-protein interaction networks and some of
the histone acetyltransferases involved in control of transcription, we
demonstrate the utility of the concepts and the mathematical constructs
introduced in this paper.
Comment: 30 pages, 5 figures. This paper was published in 2007 in the Journal
of Computational Biology. The version posted here does not include
post-peer-review changes.
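The dissipation idea described above can be sketched with a damped random walk on a small graph (an illustrative caricature, not the paper's potential-function formalism): a factor `alpha < 1` removes a share of the walker's weight at every step, so accumulated influence decays with distance from the source.

```python
# Hedged sketch: random walk with dissipation on an undirected path graph.
# alpha and the topology are illustrative assumptions.
N = 5
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
alpha = 0.8  # fraction of weight surviving each step (dissipation)

# p[i] = current (dissipated) weight at node i; all mass starts at node 0
p = [1.0, 0.0, 0.0, 0.0, 0.0]
total = list(p)  # accumulated visit weight per node
for _ in range(50):
    q = [0.0] * N
    for i in range(N):
        for j in adj[i]:                  # move to a uniform neighbour,
            q[j] += alpha * p[i] / len(adj[i])  # damped by alpha
    p = q
    total = [t + x for t, x in zip(total, p)]

print([round(t, 3) for t in total])  # weight falls off with distance from 0
```

The dissipation rate plays the role of information "aging": a smaller `alpha` confines influence to the source's neighbourhood, a larger one lets it reach further.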
Dynamical and bursty interactions in social networks
We present a modeling framework for dynamical and bursty contact networks
made of agents in social interaction. We consider agents' behavior at short
time scales, in which the contact network is formed by disconnected cliques of
different sizes. At each time a random agent can make a transition from being
isolated to being part of a group, or vice-versa. Different distributions of
contact times and inter-contact times between individuals are obtained by
considering transition probabilities with memory effects, i.e. the transition
probabilities for each agent depend both on its state (isolated or interacting)
and on the time elapsed since the last change of state. The model lends itself
to analytical and numerical investigations. The modeling framework can be
easily extended, and paves the way for systematic investigations of dynamical
processes occurring on rapidly evolving dynamical networks, such as the
propagation of information or the spreading of diseases.
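The memory effect described above can be sketched for a single agent, assuming one illustrative functional form for the transition probability (not the paper's exact kernel): the chance of changing state decays with the time elapsed since the last change, which produces broad, heavy-tailed sojourn times.

```python
# Hedged sketch: sojourn times under a memory kernel p(tau) = b/(tau+1),
# an illustrative assumption giving a power-law tail ~ tau^(-b).
import random

random.seed(0)

def sample_sojourn(b=1.5):
    """How long the agent stays in its state before switching."""
    tau = 1
    # probability of switching decays with time already spent in the state
    while random.random() >= b / (tau + 1):
        tau += 1
    return tau

durations = [sample_sojourn() for _ in range(5000)]
mean_tau = sum(durations) / len(durations)
frac_long = sum(d > 10 for d in durations) / len(durations)
print(f"mean sojourn {mean_tau:.2f}, fraction longer than 10 steps {frac_long:.3f}")
```

With a memoryless (constant) transition probability the same simulation would give geometric, short-tailed durations; the memory kernel is what generates burstiness.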
Disease and information spreading at different speeds in multiplex networks
One of the challenges we face today in modeling epidemic spreading is to develop methods to control disease transmission. In this article we study how the spreading of knowledge of a disease affects the propagation of that disease in a population of interacting individuals. To that end, we analyze the interaction between two different processes on multiplex networks: the propagation of an epidemic using the susceptible-infected-susceptible dynamics, and the dissemination of information about the disease and its prevention methods using the unaware-aware-unaware dynamics, so that informed individuals are less likely to be infected. Unlike previous related models, where disease and information spread on the same time scale, we introduce here a parameter that controls the relative speed between the two processes. We study the behavior of this model using a mean-field approach that gives results in good agreement with Monte Carlo simulations on homogeneous complex networks. We find that increasing the rate of information dissemination reduces the disease prevalence, as one may expect. However, increasing the speed of the information process relative to that of the epidemic process has the counterintuitive effect of increasing the disease prevalence. This result opens an interesting discussion about the effects of information spreading on disease propagation.
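The coupling structure described above can be sketched as a pair of homogeneous mean-field equations integrated with Euler steps; the symbols and rates below are illustrative assumptions, not the article's exact equations, and this continuous-time caricature does not reproduce the speed effect reported there (its stationary state is independent of `speed`, since rescaling one equation's time does not move the fixed point).

```python
# Hedged mean-field sketch: SIS epidemic coupled to awareness dynamics,
# with `speed` scaling the information process relative to the epidemic.
# All rates and the mean degree k are illustrative assumptions.
def run(speed, steps=20000, dt=0.01):
    rho_i, rho_a = 0.01, 0.01     # infected fraction, aware fraction
    beta, mu = 0.6, 0.3           # infection / recovery rates (assumed)
    lam, delta = 0.5, 0.4         # awareness spread / forgetting (assumed)
    gamma = 0.3                   # infectivity factor when aware (assumed)
    k = 6                         # mean degree, homogeneous network (assumed)
    for _ in range(steps):
        # aware individuals are infected at the reduced rate gamma * beta
        eff_beta = beta * (1 - rho_a) + gamma * beta * rho_a
        d_i = eff_beta * k * rho_i * (1 - rho_i) - mu * rho_i
        d_a = speed * (lam * k * rho_a * (1 - rho_a) - delta * rho_a)
        rho_i += dt * d_i
        rho_a += dt * d_a
    return rho_i

print(run(1.0), run(4.0))
```

Capturing the counterintuitive dependence on the relative speed requires the full discrete-time multiplex dynamics, where infection and information updates compete within each step.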
Identification of redundant and synergetic circuits in triplets of electrophysiological data
Neural systems are composed of interacting units, and relevant information
regarding their function or malfunction can be inferred by analyzing the
statistical dependencies between the activity of each unit. Whilst correlations
and mutual information are commonly used to characterize these dependencies,
our objective here is to extend interactions to triplets of variables to better
detect and characterize dynamic information transfer. Our approach relies on
the measure of interaction information (II). The sign of II provides
information as to the extent to which the interaction of variables in triplets
is redundant (R) or synergetic (S). Here, based on this approach, we calculated
the R and S status for triplets of electrophysiological data recorded from
drug-resistant patients with mesial temporal lobe epilepsy in order to study
the spatial organization and dynamics of R and S close to the epileptogenic
zone (the area responsible for seizure propagation). In terms of spatial
organization, our results show that R matched the epileptogenic zone while S
was distributed more in the surrounding area. In relation to dynamics, R made
the largest contribution to high-frequency bands (14-100 Hz), whilst S was
expressed more strongly at lower frequencies (1-7 Hz). Thus, applying
interaction information to such clinical data reveals new aspects of
epileptogenic structure in terms of the nature (redundancy vs. synergy) and
dynamics (fast vs. slow rhythms) of the interactions. We expect that this
robust and simple methodology can reveal new aspects beyond pair interactions
in networks of interacting units in other setups with multi-recording data sets
(and thus, not necessarily in epilepsy, the pathology we have approached here).
Comment: 31 pages, 6 figures, 3 supplementary figures. To appear in the
Journal of Neural Engineering in its current form.
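The interaction-information measure used above can be sketched on synthetic binary data (not the clinical recordings), adopting the common convention II = I(X;Y) - I(X;Y|Z), under which positive values indicate redundancy and negative values synergy; the sign convention in the paper may differ.

```python
# Hedged sketch: interaction information II(X;Y;Z) from empirical counts.
# XOR triplets are the textbook synergetic case; copied bits are redundant.
from collections import Counter
from itertools import product
from math import log2

def entropy(counts):
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values() if c)

def mutual_info(samples, a, b):
    # I(A;B) = H(A) + H(B) - H(A,B)
    ha = entropy(Counter(s[a] for s in samples))
    hb = entropy(Counter(s[b] for s in samples))
    hab = entropy(Counter((s[a], s[b]) for s in samples))
    return ha + hb - hab

def interaction_info(samples):
    # II = I(X;Y) - I(X;Y|Z),
    # with I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z)
    hxz = entropy(Counter((s[0], s[2]) for s in samples))
    hyz = entropy(Counter((s[1], s[2]) for s in samples))
    hz = entropy(Counter(s[2] for s in samples))
    hxyz = entropy(Counter(tuple(s) for s in samples))
    cond_mi = hxz + hyz - hz - hxyz
    return mutual_info(samples, 0, 1) - cond_mi

# synergy: Z = X XOR Y (pairwise independent, jointly dependent)
xor = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]
# redundancy: all three variables are copies of a single bit
copy = [(b, b, b) for b in (0, 1)]

print(interaction_info(xor), interaction_info(copy))  # -1.0 and 1.0 bits
```

On real multichannel data the same counts would be taken over discretized signal triplets, with II mapped across electrode locations as in the spatial analysis described above.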