
    Adjustable reach in a network centrality based on current flows

    Centrality, which quantifies the "importance" of individual nodes, is among the most essential concepts in modern network theory. Most prominent centrality measures can be expressed as an aggregation of influence flows between pairs of nodes. As there are many ways in which influence can be defined, many different centrality measures are in use. Parametrized centralities allow further flexibility and utility by tuning the centrality calculation to the regime most appropriate for a given network. Here, we identify two categories of centrality parameters. Reach parameters control the attenuation of influence flows between distant nodes. Grasp parameters control the centrality's potential to send influence flows along multiple, often nongeodesic paths. Combining these categories with Borgatti's centrality types [S. P. Borgatti, Social Networks 27, 55-71 (2005)], we arrive at a novel classification system for parametrized centralities. Using this classification, we identify the notable absence of any centrality measure that is radial, reach parametrized, and based on acyclic, conservative flows of influence. We therefore introduce the ground-current centrality, which is a measure of precisely this type. Because of its unique position in the taxonomy, the ground-current centrality has significant advantages over similar centralities. We demonstrate that, compared to other conserved-flow centralities, it has a simpler mathematical description. Compared to other reach centralities, it robustly preserves an intuitive rank ordering across a wide range of network architectures. We also show that it produces a consistent distribution of centrality values among the nodes, neither trivially spread equally across nodes (delocalization) nor overly focused on a few nodes (localization). Other reach centralities exhibit both of these behaviors, on regular networks and hub networks, respectively.
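    As a rough illustration of the conserved-flow idea behind this family of measures (not the paper's exact ground-current centrality, whose definition appears in the paper itself), the sketch below computes a generic current-flow throughflow score: unit current is injected at each node in turn and extracted at a fixed, arbitrarily chosen ground node, with node potentials obtained from the pseudoinverse of the graph Laplacian. The function name, the ground choice, and the aggregation rule are all illustrative assumptions.

```python
import numpy as np
import networkx as nx

def current_throughflow(G, ground):
    """Illustrative conserved-flow centrality (not the paper's exact
    ground-current measure): inject unit current at each node in turn,
    extract it at a fixed ground node, and accumulate the absolute
    current passing through every intermediate node."""
    nodes = list(G.nodes())
    idx = {u: i for i, u in enumerate(nodes)}
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lplus = np.linalg.pinv(L)        # pseudoinverse of the graph Laplacian
    g = idx[ground]
    score = np.zeros(len(nodes))
    for s in range(len(nodes)):
        if s == g:
            continue
        b = np.zeros(len(nodes))
        b[s], b[g] = 1.0, -1.0       # unit current in at s, out at ground
        v = Lplus @ b                # node potentials (unit edge conductances)
        for u in nodes:
            i = idx[u]
            if i in (s, g):
                continue
            # Kirchhoff's law: the current through an interior node is half
            # the sum of absolute currents on its incident edges.
            score[i] += 0.5 * sum(abs(v[i] - v[idx[w]]) for w in G.neighbors(u))
    return dict(zip(nodes, score))

G = nx.karate_club_graph()
scores = current_throughflow(G, ground=0)
```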

    Message-Passing Methods for Complex Contagions

    Message-passing methods provide a powerful approach for calculating the expected size of cascades either on random networks (e.g., drawn from a configuration-model ensemble or its generalizations) asymptotically as the number $N$ of nodes becomes infinite, or on specific finite-size networks. We review the message-passing approach and show how to derive it for configuration-model networks using the methods of Dhar et al. (1997) and Gleeson (2008). Using this approach, we explain for such networks how to determine an analytical expression for a "cascade condition", which determines whether a global cascade will occur. We extend this approach to the message-passing methods for specific finite-size networks (Shrestha and Moore, 2014; Lokhov et al., 2015), and we derive a generalized cascade condition. Throughout this chapter, we illustrate these ideas using the Watts threshold model.
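    To make the cascade condition concrete, here is a minimal sketch for the Watts threshold model on a configuration-model network with a Poisson degree distribution: a global cascade is possible when the expected number of vulnerable neighbors reached along a random edge, $\sum_k k(k-1) p_k \rho_k / \langle k \rangle$, exceeds one, where $\rho_k$ is the probability that a degree-$k$ node is activated by a single active neighbor. The threshold value and degree cutoff below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson

def cascade_condition(z, phi, kmax=200):
    """Watts first-order cascade condition on a configuration-model network
    with a Poisson(z) degree distribution and a uniform threshold phi.
    A global cascade is possible when sum_k k(k-1) p_k rho_k / <k> > 1,
    where rho_k = 1 if one active neighbour suffices (1/k >= phi)."""
    k = np.arange(1, kmax)
    pk = poisson.pmf(k, z)
    rho = (1.0 / k >= phi).astype(float)  # vulnerability of degree-k nodes
    return np.sum(k * (k - 1) * pk * rho) / z

# Cascades occur in a window of mean degree: too sparse and the vulnerable
# cluster does not percolate; too dense and single neighbours cannot
# activate anyone.
for z in (0.5, 2.0, 6.0, 10.0):
    print(z, cascade_condition(z, phi=0.18) > 1.0)
```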

    Predicting the epidemic threshold of the susceptible-infected-recovered model

    Researchers have developed several theoretical methods for predicting epidemic thresholds, including the mean-field-like (MFL) method, the quenched mean-field (QMF) method, and the dynamical message-passing (DMP) method. When these methods are applied to predict the epidemic threshold, they often produce differing results, and their relative levels of accuracy are still unknown. We systematically analyze these two issues, the relationships among the differing predictions and their levels of accuracy, by studying the susceptible-infected-recovered (SIR) model on uncorrelated configuration networks and a group of 56 real-world networks. On uncorrelated configuration networks, the MFL and DMP methods yield identical predictions that are larger and more accurate than the prediction generated by the QMF method. On the 56 real-world networks, the epidemic threshold obtained by the DMP method is closer to the actual epidemic threshold because it incorporates the full network topology and some dynamical correlations. We find that in some scenarios, such as networks with positive degree-degree correlations, with an eigenvector localized on the high $k$-core nodes, or with a high level of clustering, the epidemic threshold predicted by the MFL method, which uses the degree distribution as its only input, performs better than the other two methods. We also find that the performance of all three predictions varies irregularly with modularity.
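    As a concrete companion, the sketch below evaluates the three threshold predictions for a given network in the bond-percolation (transmissibility) formulation of SIR: MFL from the degree distribution alone, QMF from the leading adjacency eigenvalue, and a DMP-style estimate from the leading non-backtracking eigenvalue, computed via the Ihara-Bass companion matrix. The test graph and its parameters are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def sir_thresholds(G):
    """Three standard predictions for the SIR transmissibility threshold.
    MFL uses only the degree distribution; QMF uses the adjacency spectrum;
    the message-passing (DMP-style) prediction uses the non-backtracking
    spectrum, obtained here from the 2N x 2N Ihara-Bass companion matrix."""
    k = np.array([d for _, d in G.degree()], dtype=float)
    mfl = k.mean() / (np.mean(k**2) - k.mean())   # <k> / (<k^2> - <k>)
    A = nx.to_numpy_array(G)
    qmf = 1.0 / np.linalg.eigvalsh(A)[-1]         # 1 / leading eigenvalue of A
    n = len(k)
    M = np.block([[A, np.eye(n) - np.diag(k)],    # non-backtracking companion
                  [np.eye(n), np.zeros((n, n))]])
    nb = 1.0 / np.max(np.linalg.eigvals(M).real)
    return {"MFL": mfl, "QMF": qmf, "non-backtracking": nb}

G = nx.barabasi_albert_graph(500, 3, seed=42)
print(sir_thresholds(G))
```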

    Centrality metrics and localization in core-periphery networks

    Two concepts of centrality have been defined in complex networks. The first considers the centrality of a node, and many different metrics for it have been defined (e.g., eigenvector centrality, PageRank, and non-backtracking centrality). The second is related to the large-scale organization of the network: the core-periphery structure, composed of a dense core and an outlying, loosely connected periphery. In this paper we investigate the relation between these two concepts. We consider networks generated via the Stochastic Block Model, or its degree-corrected version, with a strong core-periphery structure, and we investigate the centrality properties of the core nodes and the ability of several centrality metrics to identify them. We find that the three measures with the best performance are the marginals obtained with belief propagation, PageRank, and degree centrality, while non-backtracking and eigenvector centrality (or MINRES, which is equivalent to the latter in the large-network limit) perform worse on the investigated networks.
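    A minimal sketch of this kind of experiment, under assumed block sizes and connection probabilities (the paper's exact setup and the belief-propagation marginals are omitted): generate a core-periphery SBM and measure how well degree, PageRank, and eigenvector centrality recover the planted core.

```python
import networkx as nx

# Core-periphery SBM: a dense core (block 0) and a sparse periphery (block 1).
sizes = [50, 450]
probs = [[0.30, 0.05],
         [0.05, 0.01]]
G = nx.stochastic_block_model(sizes, probs, seed=7)

# Rank nodes by three of the compared metrics and report the fraction of
# the top-50 that fall inside the planted core.
core = set(range(sizes[0]))
for name, scores in [
    ("degree",      dict(G.degree())),
    ("pagerank",    nx.pagerank(G)),
    ("eigenvector", nx.eigenvector_centrality_numpy(G)),
]:
    top = sorted(scores, key=scores.get, reverse=True)[: len(core)]
    print(name, len(core & set(top)) / len(core))
```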

    Network centrality: an introduction

    Centrality is a key property of complex networks that influences the behavior of dynamical processes, like synchronization and epidemic spreading, and can bring important information about the organization of complex systems, like our brain and society. There are many metrics to quantify node centrality in networks. Here, we review the main centrality measures and discuss their main features and limitations. The influence of network centrality on epidemic spreading and synchronization is also pointed out in this chapter. Moreover, we present the application of centrality measures to understand the function of complex systems, including biological and cortical networks. Finally, we discuss some perspectives and challenges in generalizing centrality measures to multilayer and temporal networks. (Book chapter in "From Nonlinear Dynamics to Complex Systems: A Mathematical Modeling Approach", Springer.)
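    As a small companion to this review, the snippet below computes several of the measures it covers on a standard test graph and shows that their top-ranked nodes need not coincide; the choice of graph is an illustrative assumption.

```python
import networkx as nx

G = nx.karate_club_graph()
metrics = {
    "degree":      nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness":   nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G),
    "pagerank":    nx.pagerank(G),
}
# Different measures can disagree on which nodes are most central.
for name, c in metrics.items():
    print(name, sorted(c, key=c.get, reverse=True)[:3])
```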

    Super-resolution community detection for layer-aggregated multilayer networks

    Applied network science often involves preprocessing network data before applying a network-analysis method, and there is typically a theoretical disconnect between these steps. For example, it is common to aggregate time-varying network data into windows prior to analysis, and the tradeoffs of this preprocessing are not well understood. Focusing on the problem of detecting small communities in multilayer networks, we study the effects of layer aggregation by developing random-matrix theory for modularity matrices associated with layer-aggregated networks with $N$ nodes and $L$ layers, which are drawn from an ensemble of Erdős–Rényi networks. We study phase transitions in which eigenvectors localize onto communities (allowing their detection) and which occur for a given community provided its size surpasses a detectability limit $K^*$. When layers are aggregated via summation, we obtain $K^* \propto \mathcal{O}(\sqrt{NL}/T)$, where $T$ is the number of layers across which the community persists. Interestingly, if $T$ is allowed to vary with $L$, then summation-based layer aggregation enhances small-community detection even if the community persists across a vanishing fraction of layers, provided that $T/L$ decays more slowly than $\mathcal{O}(L^{-1/2})$. Moreover, we find that thresholding the summation can in some cases cause $K^*$ to decay exponentially, decreasing by orders of magnitude in a phenomenon we call super-resolution community detection. That is, layer aggregation with thresholding is a nonlinear data filter that enables detection of communities that are otherwise too small to detect. Importantly, different thresholds generally enhance the detectability of communities having different properties, illustrating that community detection can be obscured if one analyzes network data using a single threshold.
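    A toy version of the aggregation experiment, with all parameters (N, L, T, the community size, and the threshold) chosen purely for illustration: plant one community in the first T of L Erdős–Rényi layers, aggregate by summation and by thresholded summation, and compare how strongly the leading eigenvector of the modularity matrix localizes on the community via its inverse participation ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, T, K = 400, 60, 20, 30       # nodes, layers, persistence, community size
p_in, p_out = 0.30, 0.05

# L Erdos-Renyi layers; a K-node community is planted in the first T layers.
A_sum = np.zeros((N, N))
for layer in range(L):
    A = (rng.random((N, N)) < p_out).astype(float)
    if layer < T:
        A[:K, :K] = (rng.random((K, K)) < p_in).astype(float)
    A = np.triu(A, 1)
    A += A.T                        # undirected, no self-loops
    A_sum += A

def leading_localization(A):
    """Modularity matrix B = A - k k^T / 2m; return the inverse participation
    ratio of its leading eigenvector (larger IPR => more localized)."""
    k = A.sum(axis=1)
    B = A - np.outer(k, k) / k.sum()
    w, V = np.linalg.eigh(B)
    return np.sum(V[:, -1] ** 4)

print("summed:     ", leading_localization(A_sum))
# Thresholding the summed network can enhance detectability further:
# community edges appear in ~8 layers on average, background edges in ~3.
A_thr = (A_sum >= 6).astype(float)  # keep edges present in >= 6 layers
print("thresholded:", leading_localization(A_thr))
```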