Phase ordering in disordered and inhomogeneous systems
We study numerically the coarsening dynamics of the Ising model on a regular
lattice with random bonds and on deterministic fractal substrates. We propose a
unifying interpretation of the phase-ordering processes based on two classes of
dynamical behaviors characterized by different growth laws of the ordered-domain
size: logarithmic or power-law, respectively. It is conjectured that
the interplay between these dynamical classes is regulated by the same
topological feature that governs the presence or absence of a
finite-temperature phase transition. Comment: 15 pages, 7 figures. To appear in Physical Review E (2015)
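A minimal sketch of the kind of simulation involved (not the authors' code, and all parameter values are illustrative): single-spin-flip Metropolis dynamics for a 2D Ising model with random ferromagnetic bonds, quenched to low temperature so that ordered domains coarsen over time.

```python
import math
import random

random.seed(0)
L = 16    # lattice side (small, for illustration)
T = 0.5   # quench temperature, well below the ordering temperature

# Random ferromagnetic couplings J in [0.5, 1.5], one per nearest-neighbour bond.
J = {}
for i in range(L):
    for j in range(L):
        for nb in (((i + 1) % L, j), (i, (j + 1) % L)):
            J[frozenset({(i, j), nb})] = random.uniform(0.5, 1.5)

# Random initial configuration (infinite-temperature quench).
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def energy(s):
    """Total bond energy  -sum_<xy> J_xy s_x s_y  over nearest neighbours."""
    e = 0.0
    for i in range(L):
        for j in range(L):
            for nb in (((i + 1) % L, j), (i, (j + 1) % L)):
                e -= s[i][j] * s[nb[0]][nb[1]] * J[frozenset({(i, j), nb})]
    return e

def metropolis_sweep(s):
    """One sweep (L*L attempted flips) of Metropolis dynamics."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = 0.0  # energy cost of flipping spin (i, j)
        for nb in (((i + 1) % L, j), ((i - 1) % L, j),
                   (i, (j + 1) % L), (i, (j - 1) % L)):
            dE += 2 * s[i][j] * s[nb[0]][nb[1]] * J[frozenset({(i, j), nb})]
        if dE <= 0 or random.random() < math.exp(-dE / T):
            s[i][j] *= -1
```

Tracking the excess energy over time gives a proxy for the inverse domain size: with uniform bonds the domain size grows as a power law of time, while strong bond disorder pins domain walls and can slow the growth to logarithmic.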
Duality between Ahlfors-Liouville and Khas'minskii properties for nonlinear equations
In recent years, the study of the interplay between (fully) non-linear
potential theory and geometry has received an important new impetus. The purpose of
this work is to move a step further in this direction by investigating
appropriate versions of parabolicity and maximum principles at infinity for
large classes of non-linear (sub)equations on manifolds. The main goal is
to show a unifying duality between such properties and the existence of
suitable -subharmonic exhaustions, called Khas'minskii potentials, which is
new even for most of the "standard" operators arising from geometry, and
improves on partial results in the literature. Applications include new
characterizations of the classical maximum principles at infinity (Ekeland,
Omori-Yau and their weak versions by Pigola-Rigoli-Setti) and of conservation
properties for stochastic processes (martingale completeness). Applications to
the theory of submanifolds and Riemannian submersions are also discussed. Comment: 67 pages. Final version
Systemic Risk in a Unifying Framework for Cascading Processes on Networks
We introduce a general framework for models of cascade and contagion
processes on networks, to identify their commonalities and differences. In
particular, models of social and financial cascades, as well as the fiber
bundle model, the voter model, and models of epidemic spreading are recovered
as special cases. To unify their description, we define the net fragility of a
node, which is the difference between its fragility and the threshold that
determines its failure. Nodes fail if their net fragility grows above zero and
their failure increases the fragility of neighbouring nodes, thus possibly
triggering a cascade. In this framework, we identify three classes depending on
the way the fragility of a node is increased by the failure of a neighbour. At
the microscopic level, we illustrate with specific examples how the failure
spreading pattern varies with the node triggering the cascade, depending on its
position in the network and its degree. At the macroscopic level, systemic risk
is measured as the final fraction of failed nodes, and for each of
the three classes we derive a recursive equation to compute its value. The
phase diagram of this fraction as a function of the initial conditions thus allows
for a prediction of the systemic risk as well as a comparison of the three
different model classes. We identify which model classes lead to a
first-order phase transition in systemic risk, i.e. situations where small
changes in the initial conditions may lead to a global failure. Finally, we
generalize our framework to encompass stochastic contagion models. This
indicates the potential for further generalizations. Comment: 43 pages, 16 multipart figures
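The failure rule the abstract describes can be sketched generically; the function names and the uniform load-transfer parameter below are illustrative, not taken from the paper:

```python
def simulate_cascade(neighbors, threshold, load_transfer, seeds):
    """Propagate failures on a graph given as an adjacency dict.

    A node fails once its net fragility (fragility - threshold) grows
    above zero; each failure adds `load_transfer` to the fragility of
    its still-intact neighbours, possibly triggering further failures.
    Seed nodes fail unconditionally (the initial shock).
    """
    fragility = {v: 0.0 for v in neighbors}
    failed, frontier = set(), list(seeds)
    while frontier:
        v = frontier.pop()
        if v in failed:
            continue
        failed.add(v)
        for w in neighbors[v]:
            if w in failed:
                continue
            fragility[w] += load_transfer
            if fragility[w] - threshold[w] > 0:   # net fragility > 0
                frontier.append(w)
    # Systemic risk: final fraction of failed nodes.
    return len(failed) / len(neighbors)
```

On a path 0-1-2-3 with all thresholds at 0.5 and a load transfer of 1.0, seeding node 0 fails the whole graph (risk 1.0); raising the thresholds to 2.0 confines the cascade to the seed (risk 0.25).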
Probabilistic Inductive Classes of Graphs
Models of complex networks are generally defined as graph stochastic
processes in which edges and vertices are added or deleted over time to
simulate the evolution of networks. Here, we define a unifying framework -
probabilistic inductive classes of graphs - for formalizing and studying
evolution of complex networks. Our definition of probabilistic inductive class
of graphs (PICG) extends the standard notion of inductive class of graphs (ICG)
by imposing a probability space. A PICG is given by: (1) class B of initial
graphs, the basis of PICG, (2) class R of generating rules, each with
distinguished left element to which the rule is applied to obtain the right
element, (3) probability distribution specifying how the initial graph is
chosen from class B, (4) probability distribution specifying how the rules from
class R are applied, and, finally, (5) probability distribution specifying how
the left elements for every rule in class R are chosen. We point out that many
of the existing models of growing networks can be cast as PICGs. We show how
a well-known model of growing networks, the preferential attachment model,
can be studied as a PICG. As an illustration we present results regarding the
size, order, and degree sequence for PICG models of connected and 2-connected
graphs. Comment: 15 pages, 6 figures
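The five ingredients can be sketched as a small class, and preferential attachment then needs only a single rule whose left element (a vertex) is drawn with probability proportional to its degree. All names below are illustrative, not the paper's formalism:

```python
import random

class PICG:
    """(1) basis graphs, (2) generating rules, (3) how a basis graph is
    chosen, (4) how a rule is chosen, (5) how a rule's left element is
    chosen -- the five ingredients of a probabilistic inductive class."""

    def __init__(self, basis, rules, pick_basis, pick_rule, pick_left):
        self.basis, self.rules = basis, rules
        self.pick_basis, self.pick_rule, self.pick_left = (
            pick_basis, pick_rule, pick_left)

    def generate(self, steps):
        g = self.pick_basis(self.basis)      # graph as an adjacency dict
        for _ in range(steps):
            rule = self.pick_rule(self.rules)
            rule(g, self.pick_left(g))       # rewrite the chosen left element
        return g

# Preferential attachment cast as a PICG: one rule, degree-biased left element.
def attach_new_vertex(g, v):
    new = max(g) + 1
    g[new] = [v]
    g[v].append(new)

def degree_biased_vertex(g):
    verts = list(g)
    return random.choices(verts, weights=[len(g[v]) for v in verts])[0]

pa = PICG(
    basis=[{0: [1], 1: [0]}],                # basis: a single edge
    rules=[attach_new_vertex],
    pick_basis=lambda b: {v: list(nb) for v, nb in random.choice(b).items()},
    pick_rule=random.choice,
    pick_left=degree_biased_vertex,
)
```

Each generation step adds one vertex and one edge, so after n steps the graph has 2 + n vertices and 1 + n edges.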
A Unifying Model for Representing Time-Varying Graphs
Graph-based models form a fundamental aspect of data representation in Data
Sciences and play a key role in modeling complex networked systems. In
particular, recently there is an ever-increasing interest in modeling dynamic
complex networks, i.e. networks in which the topological structure (nodes and
edges) may vary over time. In this context, we propose a novel model for
representing finite discrete Time-Varying Graphs (TVGs), which are typically
used to model dynamic complex networked systems. We analyze the data structures
built from our proposed model and demonstrate that, for most practical cases,
the asymptotic memory complexity of our model is in the order of the
cardinality of the set of edges. Further, we show that our proposal is a
unifying model that can represent several previous (classes of) models for
dynamic networks found in the recent literature, which in general are unable to
represent each other. In contrast to previous models, our proposal is also able
to intrinsically model cyclic (i.e. periodic) behavior in dynamic networks.
These representation capabilities attest to the expressive power of our proposed
unifying model for TVGs. We thus believe our unifying model for TVGs is a step
forward in the theoretical foundations for data analysis of complex networked
systems. Comment: Also appears in the Proc. of the IEEE International Conference on
Data Science and Advanced Analytics (IEEE DSAA'2015)
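Assuming a representation along the lines sketched below (a set of temporal edges; the class and method names are mine, not the paper's), memory usage is proportional to the number of temporal edges:

```python
class TVG:
    """Sketch of a finite discrete time-varying graph stored as a set of
    temporal edges (u, t1, v, t2): node u at time t1 connects to node v
    at time t2.  Memory is O(|E|), the number of temporal edges, and
    edges with t2 > t1 naturally capture delays and periodic behaviour."""

    def __init__(self):
        self.edges = set()

    def add_edge(self, u, t1, v, t2):
        self.edges.add((u, t1, v, t2))

    def snapshot(self, t):
        """Static graph at instant t (edges with t1 == t2 == t)."""
        return {(u, v) for (u, t1, v, t2) in self.edges if t1 == t2 == t}

    def nodes(self):
        return ({u for (u, _, _, _) in self.edges}
                | {v for (_, _, v, _) in self.edges})
```

A snapshot sequence recovers the familiar "series of static graphs" view, while edges that cross time instants carry the information that snapshot models lose.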
Information Compression, Intelligence, Computing, and Mathematics
This paper presents evidence for the idea that much of artificial
intelligence, human perception and cognition, mainstream computing, and
mathematics, may be understood as compression of information via the matching
and unification of patterns. This is the basis for the "SP theory of
intelligence", outlined in the paper and fully described elsewhere. Relevant
evidence may be seen: in empirical support for the SP theory; in some
advantages of information compression (IC) in terms of biology and engineering;
in our use of shorthands and ordinary words in language; in how we merge
successive views of any one thing; in visual recognition; in binocular vision;
in visual adaptation; in how we learn lexical and grammatical structures in
language; and in perceptual constancies. IC via the matching and unification of
patterns may be seen in both computing and mathematics: in IC via equations; in
the matching and unification of names; in the reduction or removal of
redundancy from unary numbers; in the workings of Post's Canonical System and
the transition function in the Universal Turing Machine; in the way computers
retrieve information from memory; in systems like Prolog; and in the
query-by-example technique for information retrieval. The chunking-with-codes
technique for IC may be seen in the use of named functions to avoid repetition
of computer code. The schema-plus-correction technique may be seen in functions
with parameters and in the use of classes in object-oriented programming. And
the run-length coding technique may be seen in multiplication, in division, and
in several other devices in mathematics and computing. The SP theory resolves
the apparent paradox of "decompression by compression". And computing and
cognition as IC is compatible with the uses of redundancy in such things as
backup copies to safeguard data and understanding speech in a noisy
environment.
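For instance, the run-length coding technique mentioned above reduces redundancy by unifying a run of matched symbols into one symbol plus a count, much as 3 × 4 compresses 3 + 3 + 3 + 3. A minimal sketch:

```python
from itertools import groupby

def run_length_encode(seq):
    """Unify each run of identical symbols into a (symbol, count) pair."""
    return [(sym, len(list(run))) for sym, run in groupby(seq)]

def run_length_decode(pairs):
    """Expand (symbol, count) pairs back to the original sequence."""
    return [sym for sym, n in pairs for _ in range(n)]
```

The encoded form is smaller whenever the input contains runs, yet the original is recoverable exactly, since the counts preserve all the removed repetition.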
Unifying Requirements and Code: an Example
Requirements and code, in conventional software engineering wisdom, belong to
entirely different worlds. Is it possible to unify these two worlds? A unified
framework could help make software easier to change and reuse. To explore the
feasibility of such an approach, the case study reported here takes a classic
example from the requirements engineering literature and describes it using a
programming language framework to express both domain and machine properties.
The paper describes the solution, discusses its benefits and limitations, and
assesses its scalability. Comment: 13 pages; 7 figures; to appear in Ershov Informatics Conference, PSI,
Kazan, Russia (LNCS), 201
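The paper's own example and framework are not reproduced here, but the general idea of stating a requirement in the same language as the machine can be sketched as follows; everything below (the bounded-controller machine and its requirement) is illustrative:

```python
class Machine:
    """Machine specification: a controller that clamps a monitored value."""

    def __init__(self, low, high):
        self.low, self.high = low, high
        self.value = low

    def step(self, delta):
        # Machine property: the value is clamped into [low, high].
        self.value = max(self.low, min(self.high, self.value + delta))

def requirement_holds(machine, inputs):
    """Domain-level requirement expressed as executable code in the same
    language as the machine: after every input, the value stays in bounds."""
    for d in inputs:
        machine.step(d)
        if not machine.low <= machine.value <= machine.high:
            return False
    return True
```

Because the requirement is ordinary code, it can be checked, reused, and refactored with the same tools as the implementation, which is the kind of unification the paper explores.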