
    Fast and simple decycling and dismantling of networks

    Decycling and dismantling of complex networks underlie many important applications in network science. Recently these two closely related problems were tackled by several heuristic algorithms: simple and considerably sub-optimal ones on the one hand, and time-consuming message-passing algorithms that evaluate single-node marginal probabilities on the other. In this paper we propose a simple and extremely fast algorithm, CoreHD, which recursively removes nodes of the highest degree from the 2-core of the network. CoreHD performs much better than all existing simple algorithms. When applied to real-world networks, it achieves solutions as good as those obtained by state-of-the-art iterative message-passing algorithms at greatly reduced computational cost, suggesting that CoreHD should be the algorithm of choice for many practical purposes.
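
    A minimal sketch of the CoreHD idea in Python with networkx (my own illustrative reimplementation, not the authors' code; it recomputes the 2-core from scratch at every step instead of maintaining it incrementally, so it is far slower than the algorithm described above):

    import networkx as nx

    def corehd_decycle(G):
        """Remove highest-degree nodes from the 2-core until no 2-core remains."""
        G = G.copy()
        removed = []
        core = nx.k_core(G, k=2)
        while core.number_of_nodes() > 0:
            # highest-degree node inside the current 2-core
            v = max(core.degree, key=lambda nd: nd[1])[0]
            G.remove_node(v)
            removed.append(v)
            core = nx.k_core(G, k=2)
        return removed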

    Compressing networks with super nodes

    Community detection is a commonly used technique for identifying groups in a network based on similarities in connectivity patterns. To facilitate community detection in large networks, we recast the network to be partitioned into a smaller network of 'super nodes', each super node comprising one or more nodes in the original network. To define the seeds of our super nodes, we apply the 'CoreHD' ranking from dismantling and decycling. We test our approach through the analysis of two common methods for community detection: modularity maximization with the Louvain algorithm and maximum likelihood optimization for fitting a stochastic block model. Our results highlight that applying community detection to the compressed network of super nodes is significantly faster while successfully producing partitions that are more aligned with the local network connectivity, more stable across multiple (stochastic) runs within and between community detection algorithms, and in good agreement with the results obtained using the full network.
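
    A rough sketch of the contraction step in Python with networkx (the node-to-super-node assignment is assumed to be given, and CoreHD seeding is omitted; greedy modularity maximization stands in here for the Louvain and SBM pipelines mentioned above): each original edge either disappears inside a super node or adds weight between two super nodes, and community detection then runs on the much smaller weighted graph.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def contract(G, assignment):
        """Build the super-node graph from a node -> super-node id mapping."""
        H = nx.Graph()
        H.add_nodes_from(set(assignment.values()))
        for u, v in G.edges():
            su, sv = assignment[u], assignment[v]
            if su == sv:
                continue  # edge absorbed inside a super node
            w = H[su][sv]["weight"] + 1 if H.has_edge(su, sv) else 1
            H.add_edge(su, sv, weight=w)
        return H

    # e.g. communities of super nodes, to be expanded back to the original nodes:
    # super_partition = greedy_modularity_communities(contract(G, assignment), weight="weight")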

    Network dismantling

    We study the network dismantling problem, which consists in determining a minimal set of vertices whose removal leaves the network broken into connected components of sub-extensive size. For a large class of random graphs, this problem is tightly connected to the decycling problem (the removal of vertices leaving the graph acyclic). Exploiting this connection and recent works on epidemic spreading, we present precise predictions for the minimal size of a dismantling set in a large random graph with a prescribed (light-tailed) degree distribution. Building on the statistical mechanics perspective, we propose a three-stage Min-Sum algorithm for efficiently dismantling networks, including heavy-tailed ones for which the dismantling and decycling problems are not equivalent. We also provide further insights into the dismantling problem, concluding that it is an intrinsically collective problem and that optimal dismantling sets cannot be viewed as a collection of individually well-performing nodes. Comment: Source code and data can be found at https://github.com/abraunst/decycle
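
    As a concrete illustration of the problem statement only (not of the three-stage Min-Sum algorithm itself), a candidate removal set can be checked against a chosen component-size threshold like this, assuming networkx and an undirected graph:

    import networkx as nx

    def dismantles(G, removal_set, max_component_size):
        """True if removing the set leaves only components below the threshold."""
        H = G.copy()
        H.remove_nodes_from(removal_set)
        return all(len(c) <= max_component_size for c in nx.connected_components(H))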

    Community Detection Boosts Network Dismantling on Real-World Networks

    Network dismantling techniques have gained increasing interest in recent years, driven by the need to protect and strengthen critical infrastructure systems in our society. We show that communities play a critical role in dismantling, given their inherent property of separating a network into strongly and weakly connected parts. The process of community-based dismantling depends on several design factors, including the choice of community detection method, community cut strategy, and inter-community node selection. We formalize the problem of community attacks on networks, identify critical design decisions for such methods, and perform a comprehensive empirical evaluation with respect to effectiveness and efficiency criteria on a set of more than 40 community-based network dismantling methods. We compare our results to state-of-the-art network dismantling methods, including collective influence, articulation points, and network decycling. We show that community-based network dismantling significantly outperforms existing techniques in terms of solution quality and computation time in the vast majority of real-world networks, while existing techniques excel mainly on model networks (ER, BA). We additionally show that the scalability of community-based dismantling opens new doors towards the efficient analysis of large real-world networks. We acknowledge financial support from FEDER/Ministerio de Ciencia, Innovación y Universidades Agencia Estatal de Investigación/ SuMaEco Project (RTI2018-095441-B-C22) and the María de Maeztu Program for Units of Excellence in R&D (No. MDM-2017-0711). D.R.-R. also acknowledges the Fellowship No. BES-2016-076264 under the FPI program of MINECO, Spain. Peer reviewed.
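
    One possible instantiation of the inter-community node selection step, sketched in Python with networkx (the community detection method and the ranking rule are my assumptions, not the specific designs evaluated in the paper): detect communities, then rank nodes by how many of their edges cross community boundaries and remove the top-ranked nodes first.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def inter_community_ranking(G):
        """Rank nodes by the number of incident edges that leave their community."""
        communities = greedy_modularity_communities(G)
        label = {v: i for i, members in enumerate(communities) for v in members}
        score = {v: sum(1 for u in G[v] if label[u] != label[v]) for v in G}
        return sorted(score, key=score.get, reverse=True)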

    Generalized Network Dismantling

    Finding the set of nodes which, when removed or (de)activated, can stop the spread of (dis)information, contain an epidemic, or disrupt the functioning of a corrupt/criminal organization remains one of the key challenges in network science. In this paper, we introduce the generalized network dismantling problem, which aims to find the set of nodes that, when removed from a network, results in the fragmentation of the network into subcritical components at minimum cost. For unit costs, our formulation becomes equivalent to the standard network dismantling problem. Our non-unit cost generalization allows for the inclusion of topological cost functions related to node centrality and non-topological features such as the price, protection level or even social value of a node. In order to solve this optimization problem, we propose a method based on the spectral properties of a novel node-weighted Laplacian operator. The proposed method is applicable to large-scale networks with millions of nodes. It outperforms current state-of-the-art methods and opens new directions in understanding the vulnerability and robustness of complex systems. Comment: 6 pages, 5 figures
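
    A generic spectral-bisection sketch in Python with numpy and networkx; the paper's node-weighted Laplacian is more elaborate, and folding the removal cost of the two endpoints into each edge weight as (cost_u + cost_v)/2 is purely my assumption for illustration:

    import numpy as np
    import networkx as nx

    def cost_aware_spectral_cut(G, cost):
        """Split G into two sides using the Fiedler vector of a cost-weighted Laplacian."""
        nodes = list(G)
        idx = {v: i for i, v in enumerate(nodes)}
        L = np.zeros((len(nodes), len(nodes)))
        for u, v in G.edges():
            w = 0.5 * (cost[u] + cost[v])   # assumed way of mixing node costs in
            i, j = idx[u], idx[v]
            L[i, j] -= w
            L[j, i] -= w
            L[i, i] += w
            L[j, j] += w
        # Fiedler vector: eigenvector of the second-smallest eigenvalue
        fiedler = np.linalg.eigh(L)[1][:, 1]
        side_a = [v for v in nodes if fiedler[idx[v]] >= 0]
        side_b = [v for v in nodes if fiedler[idx[v]] < 0]
        return side_a, side_b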

    Network higher-order structure dismantling

    Diverse higher-order structures, foundational for supporting a network's "meta-functions", play a vital role in structure, functionality, and the emergence of complex dynamics. Nevertheless, the problem of dismantling them has been consistently overlooked. In this paper, we introduce the concept of dismantling higher-order structures, with the objective of not only disrupting network connectivity but also eradicating all higher-order structures in each branch, thereby ensuring thorough functional paralysis. Given the diversity and unknown specifics of higher-order structures, identifying and targeting them individually is neither practical nor even feasible. Fortunately, their close association with k-cores arises from their high internal connectivity. Thus, we transform the measurement of higher-order structures into measurements on k-cores of the corresponding orders. Furthermore, we propose the Belief Propagation-guided High-order Dismantling (BPDH) algorithm, which minimizes dismantling costs while achieving maximal disruption to connectivity and higher-order structures, ultimately converting the network into a forest. BPDH reveals the explosive vulnerability of network higher-order structures, counterintuitively showing decreasing dismantling costs with increasing structural complexity. Our findings offer a novel approach for dismantling malignant networks, emphasizing the substantial challenges inherent in safeguarding against such malicious attacks. Comment: 14 pages, 5 figures, 2 tables
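
    A small sketch of the k-core proxy mentioned above, in Python with networkx: rather than enumerating higher-order structures directly, one can measure how many nodes survive in each k-core. This illustrates only the measurement idea, not the BPDH algorithm itself.

    import networkx as nx

    def kcore_profile(G):
        """Number of nodes with core number >= k, for every k up to the maximum."""
        core_number = nx.core_number(G)
        max_k = max(core_number.values())
        return {k: sum(1 for c in core_number.values() if c >= k)
                for k in range(1, max_k + 1)}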

    Statistical analysis of articulation points in configuration model networks

    An articulation point (AP) in a network is a node whose deletion would split the network component on which it resides into two or more components. APs are vulnerable spots that play an important role in network collapse processes, which may result from node failures, attacks or epidemics. Therefore, the abundance and properties of APs affect the resilience of the network to these collapse scenarios. We present analytical results for the statistical properties of APs in configuration model networks. In order to quantify their abundance, we calculate the probability $P(i \in {\rm AP})$ that a random node, i, in a configuration model network with degree distribution $P(K=k)$ is an AP. We also obtain the conditional probability $P(i \in {\rm AP}|k)$ that a random node of degree k is an AP, and find that high-degree nodes are more likely to be APs than low-degree nodes. Using Bayes' theorem, we obtain the conditional degree distribution, $P(K=k|{\rm AP})$, over the set of APs and compare it to $P(K=k)$. We propose a new centrality measure based on APs: each node can be characterized by its articulation rank, r, which is the number of components that would be added to the network upon deletion of that node. For nodes which are not APs the articulation rank is $r=0$, while for APs $r \ge 1$. We obtain a closed-form expression for the distribution of articulation ranks, $P(R=r)$. Configuration model networks often exhibit a coexistence between a giant component and finite components. To examine the distinct properties of APs on the giant and on the finite components, we calculate the probabilities presented above separately for the giant and the finite components. We apply these results to ensembles of configuration model networks with Poisson, exponential and power-law degree distributions. The implications of these results are discussed in the context of common attack scenarios and network dismantling processes. Comment: 53 pages, 16 figures. arXiv admin note: text overlap with arXiv:1804.0333
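
    The articulation rank r defined above can be computed directly by brute force in Python with networkx (an illustration of the definition only; the paper derives closed-form distributions rather than computing r node by node like this):

    import networkx as nx

    def articulation_rank(G, v):
        """Number of connected components added when node v is deleted (0 for non-APs)."""
        before = nx.number_connected_components(G)
        H = G.copy()
        H.remove_node(v)
        after = nx.number_connected_components(H)
        return max(after - before, 0)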

    Underestimated cost of targeted attacks on complex networks

    The robustness of complex networks under targeted attacks is deeply connected to the resilience of complex systems, i.e., the ability to make appropriate responses to the attacks. In this article, we investigate state-of-the-art targeted node attack algorithms and demonstrate that they become very inefficient when the cost of the attack is taken into consideration. We make the explicit assumption that the cost of removing a node is proportional to the number of adjacent links that are removed, i.e., higher-degree nodes have higher cost. Finally, for the case when it is possible to attack links, we propose a simple and efficient edge removal strategy named Hierarchical Power Iterative Normalized cut (HPI-Ncut). The results on real and artificial networks show that the HPI-Ncut algorithm outperforms all node removal and link removal attack algorithms when the cost of the attack is taken into consideration. In addition, we show that on sparse networks the complexity of this hierarchical power-iteration edge removal algorithm is only $O(n\log^{2+\epsilon}(n))$. Comment: 14 pages, 7 figures
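
    The cost convention assumed above can be made explicit in a few lines of Python with networkx: each removed node is charged one unit per incident link still present at removal time, so attack strategies are compared by the total number of links destroyed rather than by the number of nodes removed.

    import networkx as nx

    def degree_based_attack_cost(G, nodes_in_order):
        """Total cost of a node-removal sequence, one unit per incident link removed."""
        H = G.copy()
        total = 0
        for v in nodes_in_order:
            total += H.degree(v)   # pay for each link deleted along with v
            H.remove_node(v)
        return total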