Complexity of Networks
Network or graph structures are ubiquitous in the study of complex systems.
Often, we are interested in complexity trends of these systems as they evolve
under some dynamic. An example might be looking at the complexity of a food web
as species enter an ecosystem via migration or speciation, and leave via
extinction.
In this paper, a complexity measure of networks is proposed based on the {\em
complexity is information content} paradigm. To apply this paradigm to any
object, one must fix two things: a representation language, in which strings of
symbols from some alphabet describe, or stand for, the objects being considered;
and a means of determining when two such descriptions refer to the same object.
With these two things set, the information content of an object can be computed
in principle from the number of equivalent descriptions describing a particular
object.
I propose a simple representation language for undirected graphs that can be
encoded as a bitstring, and equivalence is a topological equivalence. I also
present an algorithm for computing the complexity of an arbitrary undirected
network.
Comment: Accepted for Australian Conference on Artificial Life (ACAL05). To
appear in Advances in Natural Computation (World Scientific
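The "complexity is information content" recipe above can be sketched in code. The encoding and the equivalence count below are illustrative assumptions, not the paper's exact representation language: an undirected graph is written as the upper-triangle bits of its adjacency matrix, and the number of equivalent descriptions is taken as the number of distinct node labelings, n!/|Aut(G)|, found by brute force.

```python
import math
from itertools import combinations, permutations

def encode(n, edges):
    """Upper-triangle adjacency bitstring of an n-node undirected graph."""
    es = {frozenset(e) for e in edges}
    return "".join("1" if frozenset(p) in es else "0"
                   for p in combinations(range(n), 2))

def information_content(n, edges):
    """Description length minus log2(number of equivalent descriptions).

    Two bitstrings are equivalent descriptions when some relabeling of
    the nodes maps one onto the other; the number of distinct labelings
    is n! / |Aut(G)|.  Brute force over all n! permutations, so this is
    feasible only for small n."""
    es = {frozenset(e) for e in edges}
    aut = sum(
        1 for perm in permutations(range(n))
        if {frozenset((perm[a], perm[b])) for a, b in edges} == es
    )
    omega = math.factorial(n) // aut    # distinct equivalent descriptions
    return len(encode(n, edges)) - math.log2(omega)
```

On this encoding the complete and empty graphs have only one description each (omega = 1) and therefore maximal complexity, which is exactly the deficiency the follow-up paper below sets out to fix.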
Complexity of Networks (reprise)
Network or graph structures are ubiquitous in the study of complex systems.
Often, we are interested in complexity trends of these systems as they evolve
under some dynamic. An example might be looking at the complexity of a food web
as species enter an ecosystem via migration or speciation, and leave via
extinction.
In a previous paper, a complexity measure of networks was proposed based on
the {\em complexity is information content} paradigm. To apply this paradigm to
any object, one must fix two things: a representation language, in which
strings of symbols from some alphabet describe, or stand for, the objects being
considered; and a means of determining when two such descriptions refer to the
same object. With these two things set, the information content of an object
can be computed in principle from the number of equivalent descriptions
describing a particular object.
The previously proposed representation language had the deficiency that the
fully connected and empty networks were the most complex for a given number of
nodes. A variation of this measure, called zcomplexity, solved this problem by
applying a compression algorithm to the resulting bitstring representation.
Unfortunately, zcomplexity proved too computationally expensive to be
practical.
In this paper, I propose a new representation language that encodes the
number of links along with the number of nodes and a representation of the
link list. This measure, like zcomplexity, exhibits minimal complexity for
fully connected and empty networks, but remains as tractable as the original
measure.
Comment: Accepted in Complexit
Entropy measures for complex networks: Toward an information theory of complex topologies
The quantification of the complexity of networks is, today, a fundamental
problem in the physics of complex systems. A possible roadmap to solve the
problem is via extending key concepts of information theory to networks. In
this paper we propose how to define the Shannon entropy of a network ensemble
and how it relates to the Gibbs and von Neumann entropies of network ensembles.
The quantities we introduce here will play a crucial role for the formulation
of null models of networks through maximum-entropy arguments and will
contribute to inference problems emerging in the field of complex networks.Comment: (4 pages, 1 figure
Probabilistic Bounds on Complexity of Networks Computing Binary Classification Tasks
Complexity of feedforward networks computing binary classification tasks is investigated. To deal with the unmanageably large number of such tasks on domains of even moderate sizes, a probabilistic model characterizing the relevance of the classification tasks is introduced. Approximate measures of sparsity of networks computing randomly chosen functions are studied in terms of variational norms tailored to dictionaries of computational units. Probabilistic lower bounds on these norms are derived using the Chernoff-Hoeffding bound on sums of independent random variables, which need not be identically distributed. Consequences of the probabilistic results for the choice of dictionaries of computational units are discussed.
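The key tool named above, the Hoeffding bound for sums of independent but not necessarily identically distributed bounded variables, is easy to state and check numerically. The mixed uniform variables below are my own illustrative choice, not the paper's setting:

```python
import math
import random

def hoeffding_bound(t, ranges):
    """Two-sided Hoeffding bound for a sum S of independent variables
    with X_i in [a_i, b_i]:
        P(|S - E[S]| >= t) <= 2 exp(-2 t^2 / sum_i (b_i - a_i)^2).
    The variables need not be identically distributed."""
    denom = sum((b - a) ** 2 for a, b in ranges)
    return 2.0 * math.exp(-2.0 * t ** 2 / denom)

# Empirical check with independent, non-identically distributed uniforms.
random.seed(0)
ranges = [(0.0, 1.0)] * 50 + [(0.0, 2.0)] * 50
mean = sum((a + b) / 2.0 for a, b in ranges)       # E[S]
t, trials = 15.0, 20000
hits = sum(
    1 for _ in range(trials)
    if abs(sum(random.uniform(a, b) for a, b in ranges) - mean) >= t
)
empirical = hits / trials    # observed tail probability, below the bound
```

The simulated tail frequency stays below the analytic bound, as it must; the paper runs this argument in the other direction, using the bound to control variational norms of randomly chosen classification tasks.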
Governing Networks and Rule-Making in Cyberspace
The global network environment defies traditional regulatory theories and policymaking practices. At present, policymakers and private sector organizations are searching for appropriate regulatory strategies to encourage and channel the global information infrastructure (“GII”). Most attempts to define new rules for the development of the GII rely on disintegrating concepts of territory and sector, while ignoring the new network and technological borders that transcend national boundaries. The GII creates new models and sources for rules. Policy leadership requires a fresh approach to the governance of global networks. Instead of foundering on old concepts, the GII requires a new paradigm for governance that recognizes the complexity of networks, builds constructive relationships among the various participants (including governments, systems operators, information providers, and citizens), and promotes incentives for the attainment of various public policy objectives in the private sector.
Modelling EGFR signalling cascade using continuous membrane systems
The complexity of networks of biological signalling pathways
is such that the development of simplifying models is essential in trying to
understand the wide-ranging cellular responses they can generate. In this
paper a continuous variant of membrane systems is introduced and used
to model the epidermal growth factor receptor signalling network which
is known to play a key role in tumour cell proliferation, angiogenesis and
metastasis.
Ministerio de Ciencia y Tecnología TIC2002-04220-C03-0
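The continuous membrane-system variant evolves concentrations by deterministic rate laws. A minimal mass-action sketch of just the first step of the EGFR cascade, a ligand reversibly binding its receptor, integrated by forward Euler, gives the flavor; the rate constants and concentrations are illustrative placeholders, not values from the paper's model:

```python
# Hypothetical rate constants and initial concentrations, for
# illustration only -- not taken from the paper's EGFR model.
k_on, k_off = 0.003, 0.06        # binding / unbinding rates
egf, egfr, bound = 100.0, 50.0, 0.0
dt, steps = 0.01, 10000          # forward-Euler step and horizon

for _ in range(steps):
    rate = k_on * egf * egfr - k_off * bound   # net binding flux
    egf -= dt * rate
    egfr -= dt * rate
    bound += dt * rate

# After integration the system sits near detailed balance:
# k_on * egf * egfr ~= k_off * bound,
# with egf + bound and egfr + bound conserved throughout.
```

The full model in the paper couples many such reactions across membrane-delimited compartments; this fragment only shows how one reversible reaction is turned into update rules.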
Analysis and evaluation of the entropy indices of a static network structure
Although degree distribution entropy (DDE), SD structure entropy (SDSE), Wu structure entropy (WSE) and FB structure entropy (FBSE) are four static network structure entropy indices widely used to quantify the heterogeneity of a complex network, previous studies have paid little attention to their differing abilities to describe network structure. We calculate these four structure entropies for four benchmark networks and compare the results by measuring the ability of each index to characterize network heterogeneity. We find that SDSE and FBSE characterize network heterogeneity more accurately than WSE and DDE. We also find that existing benchmark networks fail to distinguish SDSE from FBSE because they cannot discriminate local and global network heterogeneity. We solve this problem by proposing an evolving caveman network that reveals the differences between the structure entropy indices by comparing their sensitivities during the network evolutionary process. Mathematical analysis and computational simulation both indicate that FBSE describes the global topology variation in the evolutionary process of a caveman network, and that the other three structure entropy indices reflect only local network heterogeneity. Our study offers an expansive view of the structural complexity of networks and expands our understanding of complex network behavior.
The authors would like to thank the financial support of the National Natural Science Foundation of China (71501153), the Natural Science Foundation of Shaanxi Province of China (2016JQ6072), and the Foundation of China Scholarship Council (201506965039, 201606965057).
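Of the four indices named above, only the degree distribution entropy has a single widely used form; the definition sketched below follows the standard Shannon entropy of the empirical degree distribution and is my assumption about DDE's exact formula, with the other three indices left out:

```python
import math
from collections import Counter

def degree_distribution_entropy(n, edges):
    """DDE: Shannon entropy (bits) of the empirical degree distribution
    p(k) = (#nodes of degree k) / n, over an n-node undirected graph."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    freq = Counter(deg.get(v, 0) for v in range(n))
    return -sum((c / n) * math.log2(c / n) for c in freq.values())
```

A regular graph (every node the same degree) gets DDE = 0, while a star, whose hub and leaves have different degrees, gets a positive value; this is the sense in which DDE reflects only local (degree-level) heterogeneity, the limitation the study demonstrates with its caveman networks.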