Fundamental Structural Constraint of Random Scale-Free Networks
We study the structural constraint of random scale-free networks that
determines possible combinations of the degree exponent and the upper
cutoff in the thermodynamic limit. We employ the framework of
graphicality transitions proposed by Del Genio and co-workers [Phys. Rev.
Lett. 107, 178701 (2011)], while making it more rigorous and applicable
to general values of the upper cutoff kc. Using the graphicality criterion, we show that the
upper cutoff must be lower than … for …, whereas
any upper cutoff is allowed for …. This result is also numerically
verified by both random and deterministic sampling of degree sequences.
Comment: 5 pages, 4 figures (7 eps files), 2 tables; published version
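The graphicality criterion at work here is, in essence, the classical Erdős–Gallai condition: a degree sequence is realizable as a simple graph if and only if its sum is even and a family of inequalities holds for every prefix of the sorted sequence. A minimal sketch of that test (a direct O(n²) implementation, not the paper's own code):

```python
def is_graphical(degrees):
    """Erdős–Gallai test: can this degree sequence be realized as a simple graph?"""
    seq = sorted(degrees, reverse=True)
    n = len(seq)
    if sum(seq) % 2 != 0:          # handshake lemma: degree sum must be even
        return False
    if any(d < 0 or d >= n for d in seq):
        return False
    for k in range(1, n + 1):      # Erdős–Gallai inequalities, one per prefix
        lhs = sum(seq[:k])
        rhs = k * (k - 1) + sum(min(d, k) for d in seq[k:])
        if lhs > rhs:
            return False
    return True
```

For example, (3, 3, 3, 3) is graphical (the complete graph K4), while (3, 3, 1, 1) is not.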
All scale-free networks are sparse
We study the realizability of scale-free networks with a given degree
sequence, showing that the fraction of realizable sequences undergoes two
first-order transitions at the values 0 and 2 of the power-law exponent. We
substantiate this finding by analytical reasoning and by a numerical method,
proposed here, based on extreme value arguments, which can be applied to any
given degree distribution. Our results reveal a fundamental reason why large
scale-free networks without constraints on minimum and maximum degree must be
sparse.
Comment: 4 pages, 2 figures
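The extreme value argument can be illustrated numerically: for a degree distribution with tail p(k) ~ k^(-gamma), the largest of n samples grows like n^(1/(gamma-1)). A hedged sketch using inverse-transform sampling (the sample sizes and exponent are illustrative, not the paper's):

```python
import random

def sample_powerlaw(n, gamma, kmin=1.0, rng=random):
    # inverse-transform sampling of the continuous power law p(k) ~ k^(-gamma), k >= kmin
    return [kmin * rng.random() ** (-1.0 / (gamma - 1.0)) for _ in range(n)]

random.seed(0)
gamma = 2.5        # the natural cutoff then scales as n^(1/(gamma-1)) = n^(2/3)
max_small = max(sample_powerlaw(10**3, gamma))
max_large = max(sample_powerlaw(10**5, gamma))
```

With gamma = 2.5 the natural cutoff n^(2/3) is roughly 10^2 and 2×10^3 for the two sizes; individual maxima fluctuate strongly around those scales (Fréchet statistics), which is why the extreme value treatment is needed.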
Graphicality: why is there not such a word?
The concept of graphicality first appeared in the work of Edgar Allan Poe. Taking its title from Poe's painterly metaphor, this paper seeks to understand how graphicality may inform aspects of design thinking that have been neglected. We explore the current use, origins and aspects of graphicality, and contextualise it in some real-world scenarios to reaffirm how we live in a graphic age, and how graphicality must be better understood in the way we comprehend other displays of human ability, such as musicality. Poe provides us with a starting point for relating the physical and mental domains of image interpretation. Graphicality is shown to work on a continuum between subjectivity and objectivity, not as something to be measured but as something to be appreciated for how it enhances understanding and knowledge. This has implications for many academic disciplines, specifically in how it enhances our appreciation of the graphic in graphic design.
Approximate entropy of network parameters
We study the notion of approximate entropy within the framework of network
theory. Approximate entropy is an uncertainty measure originally proposed in
the context of dynamical systems and time series. We first define a purely
structural entropy obtained by computing the approximate entropy of the
so-called slide sequence. This is a surrogate of the degree sequence,
suggested by the frequency partition of a graph. We examine this quantity for
standard scale-free and Erdős–Rényi networks. Using classical results
of Pincus, we show that our entropy measure converges with network size to a
certain binary Shannon entropy. In a second step, with specific attention to
networks generated by dynamical processes, we investigate the approximate entropy
of horizontal visibility graphs. Visibility graphs make it possible to associate
the notion of temporal correlations with a network in a natural way, thereby
endowing the measure with a dynamical character. We show that approximate entropy
distinguishes visibility graphs generated by processes of different complexity,
further establishing these networks as probes for the study of dynamical systems.
Applications to certain biological data arising in cancer genomics are finally
considered in the light of both approaches.
Comment: 11 pages, 5 EPS figures
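Approximate entropy itself is straightforward to compute from Pincus's definition: compare how often length-m templates of the series stay close (within a tolerance r) against the same count for length-(m+1) templates. A self-contained sketch (the slide-sequence and visibility-graph machinery of the paper is not reproduced here):

```python
import math

def approx_entropy(series, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a sequence, following Pincus."""
    x = list(series)
    N = len(x)

    def phi(m):
        templates = [x[i:i + m] for i in range(N - m + 1)]
        total = 0.0
        for a in templates:
            # fraction of templates within Chebyshev distance r of template a
            count = sum(1 for b in templates
                        if max(abs(u - v) for u, v in zip(a, b)) <= r)
            total += math.log(count / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)
```

A constant sequence has ApEn exactly zero, and a chaotic series scores higher than a periodic one of the same length; this ordering is the discriminating power exploited for visibility graphs.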
Illuminating meaningful diversity in complex feature spaces through adaptive grid-based genetic algorithms
In many fields there exist problems for which multiple solutions of suitably high performance may be found across distinct regions of the search space. Steering the search towards including these distinct solutions is important not only for understanding these spaces but also for avoiding local optima. This is the goal of a class of genetic algorithms called illumination algorithms. In Chapter 2, we demonstrate the use of an illumination algorithm in the exploration of networks sharing only a given set of structural features (valid networks). This method produces a population of valid networks that are more diverse than those produced using state-of-the-art methods; however, it was found to be too inefficient to be usable in real-world problems. Additionally, setting an appropriate resolution of the search requires some amount of prior knowledge of the space of solutions.

Addressing this problem is the focus of Chapter 3, in which we develop three extensions to the method: (a) an exact method of mutation whereby only valid networks are explored, (b) an adaptive mechanism for setting the resolution of the search, and (c) a principle for tuning mutation parameters to the search's resolution. We show that with these additions our method is able to increase the diversity of solutions found in significantly fewer iterations.

Finally, in Chapter 4 we expand our method for use in more general problem spaces and benchmark it against the state of the art. In all tested landscapes, we show that our method is able to identify more meaningful niches in the same number of iterations. We conclude by highlighting the limits of our framework and discuss further directions.
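The grid-based illumination approach described above follows the MAP-Elites family of algorithms: discretise a feature (descriptor) space into cells, and keep in each cell the best solution found with that descriptor. A minimal generic sketch, with a toy fitness, descriptor, and mutation operator standing in for the thesis's network-valued versions:

```python
import random

def map_elites(fitness, descriptor, random_solution, mutate,
               bins=10, iterations=2000, rng=None):
    """Minimal MAP-Elites-style loop: per grid cell of the descriptor
    space, keep the best ("elite") solution seen with that descriptor."""
    rng = rng or random.Random(0)
    archive = {}  # cell -> (fitness, solution)

    def cell_of(sol):
        # descriptor values in [0, 1] are discretised into `bins` cells per axis
        return tuple(min(int(v * bins), bins - 1) for v in descriptor(sol))

    def consider(sol):
        c, f = cell_of(sol), fitness(sol)
        if c not in archive or f > archive[c][0]:
            archive[c] = (f, sol)

    for _ in range(50):                 # seed the archive with random solutions
        consider(random_solution(rng))
    for _ in range(iterations):         # illuminate: mutate randomly chosen elites
        _, parent = rng.choice(list(archive.values()))
        consider(mutate(parent, rng))
    return archive

# toy problem: maximise -(x^2 + y^2) while covering the descriptor space [0,1]^2
clip = lambda v: min(1.0, max(0.0, v))
fitness = lambda s: -(s[0] ** 2 + s[1] ** 2)
descriptor = lambda s: s                      # the solution is its own descriptor
random_solution = lambda rng: (rng.random(), rng.random())
mutate = lambda s, rng: (clip(s[0] + rng.gauss(0, 0.1)),
                         clip(s[1] + rng.gauss(0, 0.1)))

archive = map_elites(fitness, descriptor, random_solution, mutate)
```

The returned archive is the "illuminated" map: a diverse set of high-performing solutions, one per occupied cell, rather than a single optimum. The fixed `bins` resolution here is exactly the parameter the adaptive mechanism of Chapter 3 sets automatically.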
The phase transition in inhomogeneous random graphs
We introduce a very general model of an inhomogeneous random graph with
independence between the edges, which scales so that the number of edges is
linear in the number of vertices. This scaling corresponds to the p=c/n scaling
for G(n,p) used to study the phase transition; also, it seems to be a property
of many large real-world graphs. Our model includes as special cases many
models previously studied.
We show that under one very weak assumption (that the expected number of
edges is `what it should be'), many properties of the model can be determined,
in particular the critical point of the phase transition, and the size of the
giant component above the transition. We do this by relating our random graphs
to branching processes, which are much easier to analyze.
We also consider other properties of the model, showing, for example, that
when there is a giant component, it is `stable': for a typical random graph, no
matter how we add or delete o(n) edges, the size of the giant component does
not change by more than o(n).
Comment: 135 pages; revised and expanded slightly. To appear in Random Structures and Algorithms
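In the simplest homogeneous special case G(n, p = c/n), the branching-process argument gives the giant-component fraction as the solution of s = 1 − exp(−c s). A sketch comparing that fixed point with a direct simulation (sizes and seed are illustrative):

```python
import math
import random

def largest_component_fraction(n, c, rng):
    """Sample G(n, p=c/n) and return |largest component| / n (union-find)."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    p = c / n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ra, rb = find(i), find(j)
                if ra != rb:
                    parent[ra] = rb
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

def giant_fraction_theory(c, iters=100):
    """Fixed point s = 1 - exp(-c s) from the branching-process survival probability."""
    s = 1.0
    for _ in range(iters):
        s = 1.0 - math.exp(-c * s)
    return s

rng = random.Random(42)
sim = largest_component_fraction(1000, c=2.0, rng=rng)
theory = giant_fraction_theory(2.0)   # approx. 0.797 for c = 2
```

Below the critical point c = 1 the fixed-point iteration collapses to s = 0, matching the absence of a giant component; the general inhomogeneous model replaces this scalar equation with a multi-type branching process.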
On the use of generating functions for topics in clustered networks
In this thesis we relax the locally tree-like assumption of configuration model
random networks to examine the properties of clustering, and the effects
thereof, on bond percolation. We introduce an algorithmic enumeration
method to evaluate the probability that a vertex remains unattached to the giant
connected component during percolation. The properties of the non-giant,
finite components of clustered networks are also examined, along with the
degree correlations between subgraphs. In a second avenue of research, we
investigate the role of clustering on 2-strain epidemic processes under various
disease interaction schedules. We then examine an …-generation epidemic by
performing repeated percolation events.
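Under the locally tree-like assumption being relaxed here, the standard generating-function treatment of bond percolation on a configuration model reads: u = G1(1 − T + Tu) for the probability that an edge does not lead to the giant component, and S = 1 − G0(1 − T + Tu) for the giant-component fraction, where T is the bond occupation probability. A sketch for the Poisson-degree case, where G0(x) = G1(x) = exp(c(x − 1)):

```python
import math

def giant_under_bond_percolation(c, T, iters=200):
    """Giant-component fraction for bond percolation (occupation T) on a
    configuration model with Poisson degrees of mean c, via the
    generating-function fixed point u = G1(1 - T + T u)."""
    G = lambda x: math.exp(c * (x - 1.0))   # G0 = G1 for Poisson degrees
    u = 0.0
    for _ in range(iters):
        u = G(1.0 - T + T * u)
    return 1.0 - G(1.0 - T + T * u)

S_full = giant_under_bond_percolation(c=3.0, T=1.0)   # no bonds removed
S_half = giant_under_bond_percolation(c=3.0, T=0.5)   # half the bonds occupied
```

For mean degree 3 this gives a giant component of about 0.94 of the network at T = 1 and about 0.58 at T = 0.5, vanishing below the percolation threshold T = 1/c. Clustering breaks the independence assumptions behind these formulas, which is what the enumeration method in the thesis addresses.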
Inférence et réseaux complexes (Inference and complex networks)
Honour roll (Tableau d'honneur) of the Faculté des études supérieures et postdoctorales, 2018-2019.

Modern science is often concerned with complex objects of inquiry: intricate webs of social interactions, pandemics, power grids, ecological niches under climatological pressure, etc. When the goal is to gain insights into the function and mechanism of these complex systems, a possible approach is to map their structure using a collection of nodes (the parts of the systems) connected by edges (their interactions). The resulting complex networks capture the structural essence of these systems. Years of successes show that the network abstraction often suffices to understand a plethora of complex phenomena. It can be argued that a principled and rigorous approach to data analysis is chief among the challenges faced by network science today. With this in mind, the goal of this thesis is to tackle a number of important problems at the intersection of network science and statistical inference, of two types: the problem of estimation and the testing of hypotheses.

Most of the thesis is devoted to estimation problems. We begin with a thorough analysis of a well-known generative model (the stochastic block model), introduced 40 years ago to identify patterns and regularities in the structure of real networks. The main original contributions of this part are (a) the unification of the majority of known regularity detection methods under the stochastic block model, and (b) a thorough characterization of its consistency in the finite-size regime. Together, these two contributions put regularity detection methods on firmer statistical foundations. We then turn to a completely different estimation problem: the reconstruction of the past of complex networks from a single snapshot. The unifying theme is our statistical treatment of this problem, again based on generative modeling. Our major results are: the inference framework itself; an efficient history reconstruction method; and the discovery of a phase transition in the recoverability of history, driven by inequalities (the more unequal, the harder the reconstruction problem).

We conclude with a short section in which we investigate hypothesis testing in complex systems. This epilogue is framed in the broader mathematical context of simplicial complexes, a natural generalization of complex networks. We obtain a random model for these objects, together with an efficient sampling algorithm. We finish by showing how these tools can be used to test hypotheses about the structure of real systems, using their homology groups.
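The stochastic block model at the heart of the estimation part has a very simple generative form: fix block memberships, then draw each edge independently with a probability that depends only on the endpoints' blocks. A minimal sampler sketch (the two-block assortative parameters are illustrative, not taken from the thesis):

```python
import random

def sample_sbm(sizes, P, rng):
    """Sample a simple undirected graph from a stochastic block model.

    sizes: list of block sizes; P[r][s]: edge probability between blocks r and s.
    Returns (block labels per node, edge list).
    """
    labels = [r for r, size in enumerate(sizes) for _ in range(size)]
    n = len(labels)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < P[labels[i]][labels[j]]]
    return labels, edges

rng = random.Random(1)
labels, edges = sample_sbm([50, 50], [[0.2, 0.02], [0.02, 0.2]], rng)
within = sum(1 for i, j in edges if labels[i] == labels[j])
between = len(edges) - within
```

In this assortative example, within-block edges (roughly 490 in expectation) far outnumber between-block edges (roughly 50); recovering such planted regularities from the sampled graph alone is exactly the detection problem the unified methods address.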