
    Fundamental Structural Constraint of Random Scale-Free Networks

    We study the structural constraint of random scale-free networks that determines the possible combinations of the degree exponent $\gamma$ and the upper cutoff $k_c$ in the thermodynamic limit. We employ the framework of graphicality transitions proposed by Del Genio and co-workers [Phys. Rev. Lett. 107, 178701 (2011)], while making it more rigorous and applicable to general values of $k_c$. Using the graphicality criterion, we show that the upper cutoff must satisfy $k_c < N^{1/\gamma}$ for $\gamma < 2$, whereas any upper cutoff is allowed for $\gamma > 2$. This result is also verified numerically by both random and deterministic sampling of degree sequences. Comment: 5 pages, 4 figures (7 eps files), 2 tables; published version
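    The graphicality criterion referenced above is, in essence, the Erdős–Gallai condition on degree sequences. Below is a minimal sketch, not the authors' code: it samples a power-law degree sequence (the exponent and cutoff values are illustrative assumptions) and tests whether it is graphical.

```python
# Sketch: Erdos-Gallai graphicality test of a sampled power-law degree sequence.
# Values of n, gamma, and k_c are illustrative, not taken from the paper.
import random

def erdos_gallai_graphical(degrees):
    """Return True if the degree sequence is graphical (Erdos-Gallai criterion)."""
    seq = sorted(degrees, reverse=True)
    if sum(seq) % 2 != 0:          # total degree must be even
        return False
    n = len(seq)
    for k in range(1, n + 1):
        lhs = sum(seq[:k])
        rhs = k * (k - 1) + sum(min(d, k) for d in seq[k:])
        if lhs > rhs:
            return False
    return True

def sample_power_law_sequence(n, gamma, k_min=1, k_c=None):
    """Sample n degrees with P(k) ~ k^(-gamma) on [k_min, k_c]."""
    k_c = k_c or n                 # vary the cutoff to probe the constraint
    ks = list(range(k_min, k_c + 1))
    weights = [k ** (-gamma) for k in ks]
    return random.choices(ks, weights=weights, k=n)

seq = sample_power_law_sequence(n=1000, gamma=1.8, k_c=200)
print(erdos_gallai_graphical(seq))
```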

    All scale-free networks are sparse

    We study the realizability of scale-free networks with a given degree sequence, showing that the fraction of realizable sequences undergoes two first-order transitions at the values 0 and 2 of the power-law exponent. We substantiate this finding by analytical reasoning and by a numerical method, proposed here, based on extreme value arguments, which can be applied to any given degree distribution. Our results reveal a fundamental reason why large scale-free networks without constraints on minimum and maximum degree must be sparse. Comment: 4 pages, 2 figures
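    The kind of numerical experiment described above can be illustrated by a Monte Carlo estimate of the fraction of graphical sequences as the exponent varies. This is a hedged reconstruction, not the authors' method: it uses networkx's built-in graphicality test, and the sequence sizes and exponents are illustrative.

```python
# Sketch: estimate the fraction of realizable (graphical) power-law degree
# sequences at several exponents gamma. Parameters are illustrative.
import random
import networkx as nx

def powerlaw_sequence(n, gamma, k_min=1, k_max=None):
    k_max = k_max or n - 1
    ks = list(range(k_min, k_max + 1))
    weights = [k ** (-gamma) for k in ks]
    return random.choices(ks, weights=weights, k=n)

def graphical_fraction(n, gamma, trials=200):
    hits = 0
    for _ in range(trials):
        seq = powerlaw_sequence(n, gamma)
        if sum(seq) % 2:           # fix parity so the test probes structure
            seq[0] += 1
        hits += nx.is_graphical(seq)
    return hits / trials

for gamma in (0.5, 1.5, 2.5):
    print(gamma, graphical_fraction(500, gamma))
```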

    Graphicality: why is there not such a word?

    The concept of graphicality first appeared in the work of Edgar Allan Poe. Taking its title from Poe’s painterly metaphor, this paper seeks to understand how graphicality may inform aspects of design thinking that have been neglected. We explore the current use, origins, and aspects of graphicality, and contextualise it in some real-world scenarios to reaffirm that we live in a graphic age, and that graphicality deserves the same kind of understanding we bring to other displays of human ability, such as musicality. Poe provides a starting point for relating the physical and mental domains of image interpretation. Graphicality is shown to work on a continuum between subjectivity and objectivity, not as something to be measured but as something to be appreciated for how it enhances understanding and knowledge. This has implications for many academic disciplines, and specifically for how we appreciate the graphic in graphic design.

    Approximate entropy of network parameters

    We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy, obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erd\H{o}s-R\'enyi networks. Using classical results of Pincus, we show that our entropy measure converges with network size to a certain binary Shannon entropy. In a second step, with specific attention to networks generated by dynamical processes, we investigate the approximate entropy of horizontal visibility graphs. Visibility graphs naturally associate a notion of temporal correlation with a network, thereby giving the measure a dynamical character. We show that approximate entropy distinguishes visibility graphs generated by processes of different complexity. This result further establishes these graphs as probes for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches. Comment: 11 pages, 5 EPS figures
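    The underlying statistic is Pincus's ApEn. A minimal sketch follows, assuming the standard definition with Chebyshev distance and substituting a toy degree sequence for the paper's slide sequence (which is not reconstructed here).

```python
# Sketch: approximate entropy ApEn(m, r) of a sequence (Pincus, 1991),
# applied to a toy degree sequence in place of the paper's slide sequence.
import math

def approx_entropy(u, m=2, r=0.5):
    """ApEn(m, r) = phi(m) - phi(m+1) with Chebyshev-distance matching."""
    n = len(u)
    def phi(m):
        count = n - m + 1
        windows = [u[i:i + m] for i in range(count)]
        total = 0.0
        for x in windows:
            # fraction of windows within Chebyshev distance r of x (includes x)
            similar = sum(max(abs(a - b) for a, b in zip(x, y)) <= r
                          for y in windows)
            total += math.log(similar / count)
        return total / count
    return phi(m) - phi(m + 1)

degrees = sorted([3, 1, 2, 2, 4, 1, 3, 2, 5, 1, 2, 3])  # toy degree sequence
print(approx_entropy(degrees, m=2, r=0.5))
```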

    The phase transition in inhomogeneous random graphs

    We introduce a very general model of an inhomogeneous random graph with independence between the edges, which scales so that the number of edges is linear in the number of vertices. This scaling corresponds to the p = c/n scaling for G(n,p) used to study the phase transition; it also seems to be a property of many large real-world graphs. Our model includes as special cases many models previously studied. We show that under one very weak assumption (that the expected number of edges is `what it should be'), many properties of the model can be determined, in particular the critical point of the phase transition and the size of the giant component above the transition. We do this by relating our random graphs to branching processes, which are much easier to analyze. We also consider other properties of the model, showing, for example, that when there is a giant component, it is `stable': for a typical random graph, no matter how we add or delete o(n) edges, the size of the giant component does not change by more than o(n). Comment: 135 pages; revised and expanded slightly. To appear in Random Structures and Algorithms
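    In the homogeneous special case G(n, c/n), the branching-process connection reduces the giant-component fraction to the survival probability of a Poisson(c) branching process, the largest solution of rho = 1 - exp(-c*rho). The sketch below (illustrative, not from the paper) shows the transition at c = 1 emerge from plain fixed-point iteration.

```python
# Sketch: giant-component fraction of G(n, c/n) via the branching-process
# fixed point rho = 1 - exp(-c * rho). Transition at c = 1.
import math

def giant_fraction(c, iters=1000):
    rho = 1.0                      # start above the nontrivial fixed point
    for _ in range(iters):
        rho = 1.0 - math.exp(-c * rho)
    return rho

for c in (0.5, 1.0, 1.5, 2.0):
    print(f"c = {c}: giant component fraction ~ {giant_fraction(c):.4f}")
```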

    On the use of generating functions for topics in clustered networks

    In this thesis we relax the locally tree-like assumption of configuration-model random networks to examine the properties of clustering, and their effects, on bond percolation. We introduce an algorithmic enumeration method to evaluate the probability that a vertex remains unattached to the giant connected component during percolation. The properties of the non-giant, finite components of clustered networks are also examined, along with the degree correlations between subgraphs. In a second avenue of research, we investigate the role of clustering in 2-strain epidemic processes under various disease interaction schedules. We then examine a multi-generation epidemic by performing repeated percolation events.
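    The generating-function machinery the thesis builds on can be sketched in the unclustered, locally tree-like baseline case (Newman-style bond percolation on a configuration model); the clustered corrections the thesis develops are not included here, and the Poisson degree distribution and parameter values are illustrative assumptions.

```python
# Sketch: bond percolation on a configuration model with Poisson(c) degrees,
# where G0(x) = G1(x) = exp(c*(x - 1)). Solve the self-consistency
# u = 1 - phi + phi * G1(u), then the giant-cluster fraction S = 1 - G0(u).
import math

def percolation_giant(c, phi, iters=500):
    """Giant-cluster fraction under bond percolation with occupation prob phi."""
    u = 0.0
    for _ in range(iters):
        u = 1.0 - phi + phi * math.exp(c * (u - 1.0))
    return 1.0 - math.exp(c * (u - 1.0))

# Critical occupation probability is phi_c = 1/c = 1/3 here.
for phi in (0.2, 0.4, 0.8):
    print(f"phi = {phi}: S = {percolation_giant(3.0, phi):.4f}")
```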

    Inference and Complex Networks (Inférence et réseaux complexes)

    Honour Roll (Tableau d'honneur) of the Faculté des études supérieures et postdoctorales, 2018-2019. Modern science is often concerned with complex objects of inquiry: intricate webs of social interactions, pandemics, power grids, ecological niches under climatological pressure, etc. When the goal is to gain insights into the function and mechanism of these complex systems, a possible approach is to map their structure using a collection of nodes (the parts of the systems) connected by edges (their interactions). The resulting complex networks capture the structural essence of these systems. Years of successes show that the network abstraction often suffices to understand a plethora of complex phenomena. It can be argued that a principled and rigorous approach to data analysis is chief among the challenges faced by network science today.
    With this in mind, the goal of this thesis is to tackle a number of important problems at the intersection of network science and statistical inference, of two types: estimation problems and hypothesis testing. Most of the thesis is devoted to estimation problems. We begin with a thorough analysis of a well-known generative model (the stochastic block model), introduced 40 years ago to identify patterns and regularities in the structure of real networks. The main original contributions of this part are (a) the unification of the majority of known regularity detection methods under the stochastic block model, and (b) a thorough characterization of its consistency in the finite-size regime. Together, these two contributions put regularity detection methods on firmer statistical foundations. We then turn to a completely different estimation problem: the reconstruction of the past of complex networks from a single snapshot. The unifying theme is our statistical treatment of this problem, again based on generative modeling. Our major results are: the inference framework itself; an efficient history reconstruction method; and the discovery of a phase transition in the recoverability of history, driven by inequality (the more unequal the network, the harder the reconstruction problem). We conclude with a short section where we investigate hypothesis testing in complex systems. This epilogue is framed in the broader mathematical context of simplicial complexes, a natural generalization of complex networks. We obtain a random model for these objects, along with an efficient sampling algorithm. We finish by showing how these tools can be used to test hypotheses about the structure of real systems, via a property inaccessible in the network representation: their homology groups.
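    The stochastic block model at the heart of the first part is straightforward to sample from. A minimal sketch follows; the block sizes and affinity matrix are illustrative assumptions, not values from the thesis.

```python
# Sketch: sampling a simple, undirected stochastic block model, where
# affinity[r][s] is the probability of an edge between blocks r and s.
import random

def sample_sbm(block_sizes, affinity):
    """Return (block labels, edge list) of one SBM draw."""
    labels = [r for r, size in enumerate(block_sizes) for _ in range(size)]
    n = len(labels)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < affinity[labels[i]][labels[j]]:
                edges.append((i, j))
    return labels, edges

labels, edges = sample_sbm([50, 50], [[0.10, 0.01],
                                      [0.01, 0.10]])
print(len(edges))  # dense within blocks, sparse across: community structure
```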