
    Dismantling sparse random graphs

    We consider the number of vertices that must be removed from a graph G in order that the remaining subgraph has no component with more than k vertices. Our principal observation is that, if G is a sparse random graph or a random regular graph on n vertices with n tending to infinity, then the number in question is essentially the same for all values of k such that k tends to infinity but k = o(n). Comment: 7 pages
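The quantity studied above can be made concrete with a small sketch. This is not the paper's probabilistic analysis; it is only a naive greedy heuristic (remove a highest-residual-degree vertex until no component exceeds k) over an assumed adjacency-dict graph representation, useful for seeing what "the number of vertices that must be removed" means:

```python
from collections import deque

def component_sizes(adj, removed):
    """Sizes of the connected components of the graph minus `removed` (BFS)."""
    seen = set(removed)
    sizes = []
    for s in adj:
        if s in seen:
            continue
        queue, size = deque([s]), 0
        seen.add(s)
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        sizes.append(size)
    return sizes

def greedy_dismantle(adj, k):
    """Greedily remove highest-residual-degree vertices until every
    remaining component has at most k vertices; return the removed set."""
    removed = set()
    while max(component_sizes(adj, removed), default=0) > k:
        v = max((u for u in adj if u not in removed),
                key=lambda u: sum(1 for w in adj[u] if w not in removed))
        removed.add(v)
    return removed
```

On a star graph with center 0 and three leaves, `greedy_dismantle(star, 1)` removes only the center, leaving three singleton components.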

    Fast and simple decycling and dismantling of networks

    Decycling and dismantling of complex networks underlie many important applications in network science. Recently these two closely related problems were tackled by several heuristic algorithms: simple but considerably sub-optimal ones on the one hand, and time-consuming message-passing ones that evaluate single-node marginal probabilities on the other. In this paper we propose a simple and extremely fast algorithm, CoreHD, which recursively removes nodes of the highest degree from the 2-core of the network. CoreHD performs much better than all existing simple algorithms. When applied to real-world networks, it achieves solutions as good as those obtained by the state-of-the-art iterative message-passing algorithms at greatly reduced computational cost, suggesting that CoreHD should be the algorithm of choice for many practical purposes.
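The CoreHD rule described above is short enough to sketch directly: strip the graph down to its 2-core (iteratively delete vertices of residual degree below 2), remove a highest-degree vertex of that core, and repeat until the 2-core is empty. A minimal sketch over an assumed adjacency-dict representation (tie-breaking among equal-degree vertices is arbitrary here, whereas implementations of the paper's algorithm may randomize it):

```python
def two_core(adj, removed):
    """Vertices of the 2-core of the graph minus `removed`:
    iteratively strip vertices of residual degree < 2."""
    alive = {v for v in adj if v not in removed}
    deg = {v: sum(1 for w in adj[v] if w in alive) for v in alive}
    stack = [v for v in alive if deg[v] < 2]
    while stack:
        v = stack.pop()
        if v not in alive:
            continue
        alive.discard(v)
        for w in adj[v]:
            if w in alive:
                deg[w] -= 1
                if deg[w] < 2:
                    stack.append(w)
    return alive

def corehd(adj):
    """CoreHD sketch: repeatedly remove a highest-degree vertex of the
    current 2-core until the 2-core is empty (i.e. the rest is acyclic)."""
    removed = set()
    core = two_core(adj, removed)
    while core:
        v = max(core, key=lambda u: sum(1 for w in adj[u] if w in core))
        removed.add(v)
        core = two_core(adj, removed)
    return removed
```

A triangle has itself as 2-core, so CoreHD deletes exactly one vertex; a tree has an empty 2-core and nothing is removed.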

    Generalized Network Dismantling

    Finding the set of nodes which, when removed or (de)activated, can stop the spread of (dis)information, contain an epidemic, or disrupt the functioning of a corrupt/criminal organization is still one of the key challenges in network science. In this paper, we introduce the generalized network dismantling problem, which aims to find the set of nodes that, when removed from a network, fragments it into subcritical network components at minimum cost. For unit costs, our formulation becomes equivalent to the standard network dismantling problem. Our non-unit-cost generalization allows for the inclusion of topological cost functions related to node centrality, as well as non-topological features such as the price, protection level or even social value of a node. To solve this optimization problem, we propose a method based on the spectral properties of a novel node-weighted Laplacian operator. The proposed method is applicable to large-scale networks with millions of nodes. It outperforms current state-of-the-art methods and opens new directions in understanding the vulnerability and robustness of complex systems. Comment: 6 pages, 5 figures
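The cost-aware objective above can be illustrated with a deliberately simple baseline. Note the paper's actual method is spectral, built on a node-weighted Laplacian; the greedy degree-to-cost ratio rule below is only a hypothetical stand-in that shows how non-unit costs change which nodes are worth removing (an assumed adjacency-dict graph and a per-node `cost` dict):

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        queue, size = deque([s]), 0
        seen.add(s)
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

def cost_dismantle(adj, cost, target):
    """Greedy baseline for generalized dismantling: repeatedly remove the
    vertex with the best residual-degree / cost ratio until the largest
    component has at most `target` vertices.  Returns (removed set, total cost).
    With unit costs this reduces to a plain high-degree dismantling heuristic."""
    removed, total = set(), 0
    while largest_component(adj, removed) > target:
        v = max((u for u in adj if u not in removed),
                key=lambda u: sum(1 for w in adj[u] if w not in removed) / cost[u])
        removed.add(v)
        total += cost[v]
    return removed, total
```

With unit costs a star graph is dismantled by removing its center; if the center is made expensive (e.g. cost 10 vs. 1 for each leaf), the greedy rule instead removes the three cheap leaves for a total cost of 3.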

    Underestimated cost of targeted attacks on complex networks

    The robustness of complex networks under targeted attacks is deeply connected to the resilience of complex systems, i.e., the ability to make appropriate responses to the attacks. In this article, we investigate the state-of-the-art targeted node attack algorithms and demonstrate that they become very inefficient when the cost of the attack is taken into consideration. We make the explicit assumption that the cost of removing a node is proportional to the number of adjacent links that are removed, i.e., higher-degree nodes have higher cost. Finally, for the case when it is possible to attack links, we propose a simple and efficient edge removal strategy named Hierarchical Power Iterative Normalized cut (HPI-Ncut). The results on real and artificial networks show that the HPI-Ncut algorithm outperforms all the node removal and link removal attack algorithms when the cost of the attack is taken into consideration. In addition, we show that on sparse networks the complexity of this hierarchical power iteration edge removal algorithm is only O(n\log^{2+\epsilon}(n)). Comment: 14 pages, 7 figures
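The cost model assumed above (removing a node costs the number of adjacent links removed) is easy to compute for any attack sequence. A minimal sketch, not the paper's HPI-Ncut algorithm, over an assumed adjacency-dict graph; links to already-removed neighbours were already paid for, so summing over a complete attack yields exactly the edge count, whatever the order:

```python
def attack_cost(adj, order):
    """Total degree-proportional cost of removing nodes in the given order:
    each removal costs the number of its links still present at that moment."""
    removed, total = set(), 0
    for v in order:
        total += sum(1 for w in adj[v] if w not in removed)
        removed.add(v)
    return total
```

On the path 0-1-2, attacking the central node alone costs 2, while attacking all three nodes in any order costs 2 (the number of edges), which is why degree-targeted attacks look far less efficient once cost is accounted for.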

    Statistical analysis of articulation points in configuration model networks

    An articulation point (AP) in a network is a node whose deletion would split the network component on which it resides into two or more components. APs are vulnerable spots that play an important role in network collapse processes, which may result from node failures, attacks or epidemics. Therefore, the abundance and properties of APs affect the resilience of the network to these collapse scenarios. We present analytical results for the statistical properties of APs in configuration model networks. In order to quantify their abundance, we calculate the probability P(i \in {\rm AP}) that a random node, i, in a configuration model network with degree distribution P(K=k) is an AP. We also obtain the conditional probability P(i \in {\rm AP}|k) that a random node of degree k is an AP, and find that high-degree nodes are more likely to be APs than low-degree nodes. Using Bayes' theorem, we obtain the conditional degree distribution, P(K=k|{\rm AP}), over the set of APs and compare it to P(K=k). We propose a new centrality measure based on APs: each node can be characterized by its articulation rank, r, which is the number of components that would be added to the network upon deletion of that node. For nodes which are not APs the articulation rank is r=0, while for APs r \ge 1. We obtain a closed-form expression for the distribution of articulation ranks, P(R=r). Configuration model networks often exhibit a coexistence between a giant component and finite components. To examine the distinct properties of APs on the giant and on the finite components, we calculate the probabilities presented above separately for the giant and the finite components. We apply these results to ensembles of configuration model networks with Poisson, exponential and power-law degree distributions. The implications of these results are discussed in the context of common attack scenarios and network dismantling processes. Comment: 53 pages, 16 figures. arXiv admin note: text overlap with arXiv:1804.0333
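The articulation-rank definition above (the number of components added by deleting a node, with r = 0 for non-APs) can be checked directly by counting components before and after deletion. A brute-force sketch over an assumed adjacency-dict graph; production code would use a linear-time articulation-point algorithm (Tarjan's) rather than one BFS sweep per node:

```python
from collections import deque

def n_components(adj, removed=frozenset()):
    """Number of connected components of the graph minus `removed` (BFS)."""
    seen, comps = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        comps += 1
        queue = deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return comps

def articulation_rank(adj, v):
    """r = number of components *added* by deleting v; v is an AP iff r >= 1."""
    return max(n_components(adj, {v}) - n_components(adj), 0)
```

On the path 0-1-2 the middle node has rank 1 and the endpoints rank 0; the hub of a star with three leaves has rank 2, matching the observation that high-degree nodes are more likely to be APs.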

    Network dismantling

    We study the network dismantling problem, which consists in determining a minimal set of vertices whose removal leaves the network broken into connected components of sub-extensive size. For a large class of random graphs, this problem is tightly connected to the decycling problem (the removal of vertices leaving the graph acyclic). Exploiting this connection and recent work on epidemic spreading, we present precise predictions for the minimal size of a dismantling set in a large random graph with a prescribed (light-tailed) degree distribution. Building on the statistical mechanics perspective, we propose a three-stage Min-Sum algorithm for efficiently dismantling networks, including heavy-tailed ones for which the dismantling and decycling problems are not equivalent. We also provide further insights into the dismantling problem, concluding that it is an intrinsically collective problem and that optimal dismantling sets cannot be viewed as a collection of individually well-performing nodes. Comment: Source code and data can be found at https://github.com/abraunst/decycle
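The decycling connection above rests on a simple characterization: a graph is acyclic exactly when its edge count equals its vertex count minus its number of components. A minimal sketch (not the paper's Min-Sum algorithm) for verifying whether a candidate vertex set is a decycling set, over an assumed adjacency-dict graph:

```python
from collections import deque

def n_components(adj, removed=frozenset()):
    """Number of connected components of the graph minus `removed` (BFS)."""
    seen, comps = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        comps += 1
        queue = deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return comps

def is_decycling_set(adj, removed=frozenset()):
    """True iff deleting `removed` leaves a forest:
    a graph is acyclic iff #edges == #nodes - #components."""
    alive = [v for v in adj if v not in removed]
    edges = sum(1 for v in alive for w in adj[v] if w not in removed) // 2
    return edges == len(alive) - n_components(adj, removed)
```

A triangle fails the test with the empty set but passes once any single vertex is removed; for light-tailed random graphs the abstract's point is that such a decycling set is essentially also a dismantling set.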

    Using a random road graph model to understand road networks robustness to link failures

    Disruptions to the transport system have a greater impact on society and the economy now than ever before due to the increased interconnectivity and interdependency of the economic sectors. The ability of transport systems to maintain functionality despite various disturbances (i.e. robustness) is hence of tremendous importance and has been the focus of research seeking to support transport planning, design and management. These approaches and findings may nevertheless be valid only for the specific networks studied. The present study attempts to find universal insights into road network robustness by exploring the correlation between different network attributes and network robustness to single, multiple, random and targeted link failures. For this purpose, the common properties of road graphs were identified through a literature review. On this basis, the GREREC model was developed to randomly generate a variety of abstract networks presenting the topological and operational characteristics of real road networks, on which a robustness analysis was performed. This analysis quantifies the difference between the link criticality rankings when only single-link failures are considered as opposed to when multiple-link failures are considered, and the difference between the impact of targeted and random attacks. The influence of the network attributes on the network robustness and on these two differences is shown and discussed. Finally, this analysis is also performed on a set of real road networks to validate the results obtained with the artificial networks.
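The single-link-failure criticality ranking mentioned above can be sketched generically: fail each link in turn and measure the drop in the size of the largest connected component. This is only an illustrative connectivity-based proxy over an assumed adjacency-dict graph, not the GREREC model or the study's operational metrics:

```python
from collections import deque

def largest_component(adj, dead_edge=None):
    """Largest-component size, optionally with one undirected edge failed."""
    blocked = {dead_edge, dead_edge[::-1]} if dead_edge else set()
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        queue, size = deque([s]), 0
        seen.add(s)
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if (u, w) in blocked or w in seen:
                    continue
                seen.add(w)
                queue.append(w)
        best = max(best, size)
    return best

def link_criticality(adj):
    """Rank each link by the drop in largest-component size when it alone
    fails: a simple single-link-failure criticality ranking."""
    base = largest_component(adj)
    edges = {(u, v) for u in adj for v in adj[u] if u < v}
    impact = {e: base - largest_component(adj, e) for e in edges}
    return sorted(impact.items(), key=lambda kv: -kv[1])
```

On the path 0-1-2-3 the middle link (1, 2) tops the ranking with a drop of 2, while the end links drop the largest component by only 1; comparing such rankings against multiple-link-failure rankings is exactly the kind of difference the study quantifies.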