Community Structure Characterization
This entry discusses the problem of describing communities identified in a
complex network of interest, in a way that allows them to be interpreted. We
suppose the community structure has already been detected through one of the
many methods proposed in the literature. The question is then how to extract
valuable information from this first result, in order to allow human
interpretation. This requires subsequent processing, which we describe in the
rest of this entry.
Generating Robust and Efficient Networks Under Targeted Attacks
Much of our commerce and travel depends on the efficient operation of large
scale networks. Some of these, such as electric power grids, transportation
systems, and communication networks, must maintain their efficiency even after
several failures or malicious attacks. We outline a procedure that modifies
any given network to enhance its robustness, defined as the size of its
largest connected component after a succession of attacks, whilst keeping a
high efficiency, described in terms of the shortest paths among nodes. We also
show that the generated networks are very similar to networks optimized for
robustness in several respects, such as high assortativity and the presence
of an onion-like structure.
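The robustness criterion described above can be sketched in code. The snippet below is a minimal illustration, not the authors' procedure: it scores a network by the average size of its largest connected component while the highest-degree node is repeatedly removed, a common targeted-attack model (in the style of the onion-network literature). The dict-of-sets graph representation and the recomputed-degree attack order are assumptions.

```python
from collections import deque

def largest_component(adj, removed):
    # BFS over surviving nodes; return the size of the largest component.
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        queue, comp = deque([start]), {start}
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    comp.add(v)
                    queue.append(v)
        best = max(best, len(comp))
    return best

def robustness(adj):
    # R = (1/N) * sum over attack steps of the largest-component fraction,
    # removing the (recomputed) highest-degree node at each step.
    n = len(adj)
    removed = set()
    total = 0.0
    for _ in range(n):
        target = max((u for u in adj if u not in removed),
                     key=lambda u: sum(1 for v in adj[u] if v not in removed))
        removed.add(target)
        total += largest_component(adj, removed) / n
    return total / n
```

A modification procedure like the one in the abstract would then rewire edges (keeping degrees fixed) while accepting changes that increase this score.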
Evolving Clustered Random Networks
We propose a Markov chain simulation method to generate simple connected
random graphs with a specified degree sequence and level of clustering. The
networks generated by our algorithm are random in all other respects and can
thus serve as generic models for studying the impacts of degree distributions
and clustering on dynamical processes as well as null models for detecting
other structural properties in empirical networks.
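A simplified sketch of this idea: degree-preserving double-edge swaps, accepted only when they move the global clustering coefficient toward a target value. This hill-climbing version is an illustrative assumption, not the authors' Markov chain — in particular it does not enforce connectedness or uniform sampling; the function names and acceptance rule are mine.

```python
import random

def global_clustering(adj):
    # Transitivity: closed wedges / all wedges.
    closed = triples = 0
    for u in adj:
        nbrs = list(adj[u])
        k = len(nbrs)
        triples += k * (k - 1) // 2
        for i in range(k):
            for j in range(i + 1, k):
                if nbrs[j] in adj[nbrs[i]]:
                    closed += 1
    return closed / triples if triples else 0.0

def swap_toward_clustering(adj, target, steps=1000, seed=0):
    # Double-edge swap (a,b),(c,d) -> (a,d),(c,b) preserves all degrees.
    rng = random.Random(seed)
    edges = sorted({tuple(sorted((u, v))) for u in adj for v in adj[u]})
    for _ in range(steps):
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4 or d in adj[a] or b in adj[c]:
            continue  # would create a self-loop or multi-edge
        before = global_clustering(adj)
        adj[a].discard(b); adj[b].discard(a)
        adj[c].discard(d); adj[d].discard(c)
        adj[a].add(d); adj[d].add(a)
        adj[c].add(b); adj[b].add(c)
        if abs(global_clustering(adj) - target) <= abs(before - target):
            edges.remove((min(a, b), max(a, b)))
            edges.remove((min(c, d), max(c, d)))
            edges.append((min(a, d), max(a, d)))
            edges.append((min(c, b), max(c, b)))
        else:  # revert the swap
            adj[a].discard(d); adj[d].discard(a)
            adj[c].discard(b); adj[b].discard(c)
            adj[a].add(b); adj[b].add(a)
            adj[c].add(d); adj[d].add(c)
    return adj
```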
Searching for network modules
When analyzing complex networks, a key goal is to uncover their modular
structure, that is, to search for a family of modules: node subsets each
spanning a subnetwork more densely connected than the average. This work
proposes a novel type of objective function for graph clustering, in the form
of a multilinear polynomial whose coefficients are determined by the network
topology. It may be thought of as a potential function, to be maximized,
taking its values on fuzzy clusterings, i.e. families of fuzzy subsets of
nodes over which every node distributes a unit membership. When suitably
parametrized, this potential is shown to attain its maximum when every node
concentrates all of its unit membership on some module. The output is thus a
partition, while the original discrete optimization problem is turned into a
continuous version that admits alternative search strategies. Since an
instance of the problem is a pseudo-Boolean function assigning real-valued
cluster scores to node subsets, modularity maximization is employed to
exemplify a so-called quadratic form, in which the scores of singletons and
pairs fully determine the scores of larger clusters, so the resulting
multilinear polynomial potential function has degree 2. After considering
further quadratic instances, different from modularity and obtained by
interpreting the network topology in alternative ways, a greedy local-search
strategy for the continuous framework is compared analytically with an
existing greedy agglomerative procedure for the discrete case. Overlapping
communities are finally discussed in terms of multiple runs, i.e. several
local searches with different initializations.
Comment: 10 pages
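Modularity, the quadratic instance used as the running example above, can be computed directly from its standard definition Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j). The sketch below evaluates Q for a hard partition; the O(n²) double loop is for clarity, not efficiency, and the representation is an assumption.

```python
def modularity(adj, partition):
    # adj: dict node -> set of neighbours (undirected, simple graph)
    # partition: dict node -> community label
    m2 = sum(len(vs) for vs in adj.values())  # 2m: each edge counted twice
    q = 0.0
    for u in adj:
        for v in adj:
            if partition[u] != partition[v]:
                continue  # delta(c_u, c_v) = 0
            a_uv = 1.0 if v in adj[u] else 0.0
            q += a_uv - len(adj[u]) * len(adj[v]) / m2
    return q / m2
```

For two disjoint triangles partitioned into their two cliques, this gives Q = 0.5, the textbook value.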
Efficient and exact sampling of simple graphs with given arbitrary degree sequence
Uniform sampling from graphical realizations of a given degree sequence is a
fundamental component in simulation-based measurements of network observables,
with applications ranging from epidemics, through social networks to Internet
modeling. Existing graph sampling methods are either link-swap based
(Markov-Chain Monte Carlo algorithms) or stub-matching based (the Configuration
Model). Both types are ill-controlled, with typically unknown mixing times for
link-swap methods and uncontrolled rejections for the Configuration Model. Here
we propose an efficient, polynomial time algorithm that generates statistically
independent graph samples with a given, arbitrary, degree sequence. The
algorithm provides a weight associated with each sample, allowing the
observable to be measured either uniformly over the graph ensemble, or,
alternatively, with a desired distribution. Unlike other algorithms, this
method always produces a sample, without back-tracking or rejections. Using
central limit theorem-based reasoning, we argue that, for large N and for
degree sequences admitting many realizations, the sample weights are expected
to have a lognormal distribution. As examples, we apply our algorithm to
generate networks with degree sequences drawn from power-law distributions and
from binomial distributions.
Comment: 8 pages, 3 figures
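Two ingredients of such a rejection-free sequential construction can be illustrated independently of the paper's specific algorithm: the Erdős–Gallai test, which decides whether a degree sequence is graphical (the check that lets a sequential method avoid dead ends), and the weighted average through which per-sample weights recover ensemble expectations. Both snippets are generic sketches, not the authors' code.

```python
def is_graphical(degrees):
    # Erdős–Gallai: a sequence is graphical iff its sum is even and, for the
    # non-increasing ordering d1 >= ... >= dn and every k:
    #   sum_{i<=k} d_i <= k(k-1) + sum_{i>k} min(d_i, k)
    d = sorted(degrees, reverse=True)
    if sum(d) % 2:
        return False
    n = len(d)
    for k in range(1, n + 1):
        if sum(d[:k]) > k * (k - 1) + sum(min(x, k) for x in d[k:]):
            return False
    return True

def weighted_mean(values, weights):
    # <O> over the ensemble: sum(w_i * O_i) / sum(w_i), using the weight
    # the sampler attaches to each generated graph.
    return sum(w * v for v, w in zip(values, weights)) / sum(weights)
```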
Trust transitivity in social networks
Non-centralized recommendation-based decision making is a central feature of
several social and technological processes, such as market dynamics,
peer-to-peer file-sharing and the web of trust of digital certification. We
investigate the properties of trust propagation on networks, based on a simple
metric of trust transitivity. We investigate analytically the percolation
properties of trust transitivity in random networks with arbitrary degree
distribution, and compare with numerical realizations. We find that the
existence of a non-zero fraction of absolute trust (i.e. entirely confident
trust) is a requirement for the viability of global trust propagation in large
systems: The average pair-wise trust is marked by a discontinuous transition at
a specific fraction of absolute trust, below which it vanishes. Furthermore, we
perform an extensive analysis of the Pretty Good Privacy (PGP) web of trust, in
view of the concepts introduced. We compare different scenarios of trust
distribution: community- and authority-centered. We find that these scenarios
lead to sharply different patterns of trust propagation, due to the segregation
of authority hubs and densely-connected communities. While the
authority-centered scenario is more efficient, and leads to higher average
trust values, it favours weakly-connected "fringe" nodes, which are directly
trusted by authorities. The community-centered scheme, on the other hand,
favours nodes with intermediate degrees, to the detriment of the authorities
and their "fringe" peers.
Comment: 11 pages, 9 figures (with minor corrections)
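One common way to operationalize trust transitivity, shown here as an illustrative proxy rather than the paper's exact metric, is to score a pair by its best multiplicative path: trust along a path is the product of edge trust values in [0, 1], and pair trust is the maximum over paths. Because products of values at most 1 can only shrink, a max-product variant of Dijkstra's algorithm computes this efficiently.

```python
import heapq

def best_trust(graph, source):
    # graph: {u: {v: trust in [0, 1]}} (directed edge trusts)
    # Returns the best multiplicative path-trust from source to each node.
    best = {source: 1.0}
    heap = [(-1.0, source)]  # max-heap via negated trust
    while heap:
        neg, u = heapq.heappop(heap)
        t = -neg
        if t < best.get(u, 0.0):
            continue  # stale entry
        for v, w in graph[u].items():
            cand = t * w
            if cand > best.get(v, 0.0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return best
```

In this proxy, "absolute trust" corresponds to edges with weight exactly 1, the only ones that do not attenuate trust over long paths — consistent with the abstract's observation that a non-zero fraction of absolute trust is needed for global propagation.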
Worldwide food recall patterns over an eleven month period: A country perspective.
Background: Following the World Health Organization Forum in November 2007, the
Beijing Declaration recognized the importance of food safety along with the
rights of all individuals to a safe and adequate diet. The aim of this study is
to retrospectively analyze the patterns in food alerts and recalls by country,
to identify the principal hazard generators and gatekeepers of food safety in
the eleven months leading up to the Declaration.
Methods: The food recall data set was collected by the Laboratory of the
Government Chemist (LGC, UK) over the period from January to November 2007.
Statistics were computed with a focus on reporting patterns by the 117
countries. The complexity of the recorded interrelations was depicted as a
network constructed from structural properties contained in the data. The
analysed network properties included degrees, weighted degrees, modularity and
k-core decomposition. Network analyses of the reports, based on 'country making
report' (detector) and 'country reported on' (transgressor), revealed that the
network is organized around a dominant core.
Results: Ten countries were reported for sixty per cent of all faulty products
marketed, with the top 5 countries having received between 100 and 281 reports.
Further analysis of the dominant core revealed that, of the top five
transgressors, three made no reports (in the order China > Turkey > Iran). The
top ten detectors account for three quarters of reports, with three receiving
more than 300 (Italy: 406, Germany: 340, United Kingdom: 322).
Conclusion: Of the 117 countries studied, the vast majority of food reports are
made by 10 countries, with EU countries predominating. The majority of the
faulty foodstuffs originate in ten countries, with four major producers making
no reports. This pattern is very distant from that proposed by the Beijing
Declaration, which urges all countries to take responsibility for the provision
of safe and adequate diets for their nationals.
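Of the network measures listed in the Methods, k-core decomposition is the least standard and easy to sketch: the k-core is the maximal subgraph in which every node has degree at least k, obtained by repeatedly stripping nodes of degree below k. A minimal illustration (the dict-of-sets representation is an assumption):

```python
def k_core(adj, k):
    # Iteratively remove nodes of degree < k; return the surviving node set.
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) < k:
                for v in adj[u]:
                    adj[v].discard(u)
                del adj[u]
                changed = True
    return set(adj)
```

A "dominant core" like the one reported would show up as a small set of countries surviving to high k.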
Consensus clustering in complex networks
The community structure of complex networks reveals both their organization
and hidden relationships among their constituents. Most community detection
methods currently available are not deterministic, and their results typically
depend on the specific random seeds, initial conditions and tie-break rules
adopted for their execution. Consensus clustering is used in data analysis to
generate stable results out of a set of partitions delivered by stochastic
methods. Here we show that consensus clustering can be combined with any
existing method in a self-consistent way, enhancing considerably both the
stability and the accuracy of the resulting partitions. This framework is also
particularly suitable to monitor the evolution of community structure in
temporal networks. An application of consensus clustering to a large citation
network of physics papers demonstrates its capability to keep track of the
birth, death and diversification of topics.
Comment: 11 pages, 12 figures. Published in Scientific Reports
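The consensus-clustering idea can be sketched as follows: run a stochastic method several times, build a consensus matrix whose entries give the fraction of runs in which two nodes share a cluster, threshold it, and read communities off the thresholded graph. This one-shot version is a simplification — the full procedure iterates, reapplying the clustering method to the consensus matrix until it stabilizes; the threshold `tau` and the use of connected components here are assumptions.

```python
from collections import deque
from itertools import combinations

def consensus_matrix(partitions):
    # partitions: list of dicts node -> label, one per stochastic run.
    nodes = sorted(partitions[0])
    counts = {}
    for p in partitions:
        for i, j in combinations(nodes, 2):
            if p[i] == p[j]:
                counts[(i, j)] = counts.get((i, j), 0) + 1
    runs = len(partitions)
    return {pair: c / runs for pair, c in counts.items()}

def consensus_partition(partitions, tau=0.5):
    # Keep pairs co-assigned in more than a fraction tau of runs, then
    # take connected components of the thresholded co-assignment graph.
    nodes = sorted(partitions[0])
    adj = {u: set() for u in nodes}
    for (i, j), w in consensus_matrix(partitions).items():
        if w > tau:
            adj[i].add(j)
            adj[j].add(i)
    labels, label = {}, 0
    for u in nodes:
        if u in labels:
            continue
        labels[u] = label
        queue = deque([u])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in labels:
                    labels[y] = label
                    queue.append(y)
        label += 1
    return labels
```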