
    Bicomponents and the robustness of networks to failure

    A common definition of a robust connection between two nodes in a network such as a communication network is that there should be at least two independent paths connecting them, so that the failure of no single node in the network causes them to become disconnected. This definition leads us naturally to consider bicomponents, subnetworks in which every node has a robust connection of this kind to every other. Here we study bicomponents in both real and model networks using a combination of exact analytic techniques and numerical methods. We show that standard network models predict there to be essentially no small bicomponents in most networks, but there may be a giant bicomponent, whose presence coincides with the presence of the ordinary giant component, and we find that real networks seem by and large to follow this pattern, although there are some interesting exceptions. We study the size of the giant bicomponent as nodes in the network fail, using a specially developed computer algorithm based on data trees, and find in some cases that our networks are quite robust to failure, with large bicomponents persisting until almost all vertices have been removed. Comment: 5 pages, 1 figure, 1 table
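
    A minimal sketch of the underlying idea, assuming networkx: enumerate the bicomponents (biconnected components) of a graph and watch how the largest one shrinks as nodes are removed at random. The random graph and removal order below are illustrative assumptions, not the paper's exact analytic treatment or its data-tree algorithm.

```python
# Minimal sketch (assumes networkx): enumerate bicomponents and track the
# largest one as nodes are removed at random. Illustrative only; the paper
# uses exact analytics plus a specialised data-tree algorithm.
import random
import networkx as nx

def largest_bicomponent_size(G):
    """Size (in nodes) of the largest biconnected component of G."""
    comps = nx.biconnected_components(G)          # yields sets of nodes
    return max((len(c) for c in comps), default=0)

G = nx.erdos_renyi_graph(n=1000, p=0.005, seed=1)  # toy network
nodes = list(G.nodes())
random.seed(1)
random.shuffle(nodes)

for frac_removed in (0.0, 0.25, 0.5, 0.75):
    H = G.copy()
    H.remove_nodes_from(nodes[: int(frac_removed * len(nodes))])
    print(f"{frac_removed:.2f} removed -> giant bicomponent ~ "
          f"{largest_bicomponent_size(H)} nodes")
```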

    Super edge-connectivity and matching preclusion of data center networks

    Edge-connectivity is a classic measure of the reliability of a network in the presence of edge failures. $k$-restricted edge-connectivity is one of the refined indicators of the fault tolerance of large networks. Matching preclusion and conditional matching preclusion are two important measures of the robustness of networks in edge-fault scenarios. In this paper, we show that the DCell network $D_{k,n}$ is super-$\lambda$ for $k\geq 2$ and $n\geq 2$, super-$\lambda_2$ for $k\geq 3$ and $n\geq 2$, or $k=2$ and $n=2$, and super-$\lambda_3$ for $k\geq 4$ and $n\geq 3$. Moreover, as an application of $k$-restricted edge-connectivity, we study the matching preclusion number and conditional matching preclusion number, and characterize the corresponding optimal solutions of $D_{k,n}$. In particular, we show that $D_{1,n}$ is isomorphic to the $(n,k)$-star graph $S_{n+1,2}$ for $n\geq 2$. Comment: 20 pages, 1 figure
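
    A brief sketch, assuming networkx, of how edge-connectivity and a brute-force matching preclusion number could be probed numerically. A small hypercube stands in as an arbitrary regular test graph; the DCell network $D_{k,n}$ itself is not constructed here, and the paper's results are analytic, not computational.

```python
# Sketch (assumes networkx): probe edge-connectivity and a brute-force
# matching preclusion number on a small stand-in graph (a 3-cube), not the
# DCell network analysed in the paper.
from itertools import combinations
import networkx as nx

G = nx.hypercube_graph(3)          # 8 nodes, 3-regular stand-in graph
print("edge connectivity lambda =", nx.edge_connectivity(G))

def matching_preclusion_number(G, max_set_size=3):
    """Smallest number of edges whose removal leaves no perfect matching
    (brute force; only feasible for tiny graphs)."""
    for r in range(1, max_set_size + 1):
        for edges in combinations(G.edges(), r):
            H = G.copy()
            H.remove_edges_from(edges)
            M = nx.max_weight_matching(H, maxcardinality=True)
            if 2 * len(M) < H.number_of_nodes():   # no perfect matching left
                return r
    return None

print("matching preclusion number (brute force) =",
      matching_preclusion_number(G))
```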

    Dynamic Effects Increasing Network Vulnerability to Cascading Failures

    We study cascading failures in networks using a dynamical flow model based on simple conservation and distribution laws to investigate the impact of transient dynamics caused by the rebalancing of loads after an initial network failure (triggering event). We find that taking the flow dynamics into account may imply reduced network robustness compared to previous static overload failure models. This is due to transient oscillations or overshooting in the loads as the flow dynamics adjusts to the new (remaining) network structure. We obtain upper and lower limits to network robustness, and show that two time scales $\tau$ and $\tau_0$, defined by the network dynamics, must be considered before network robustness or vulnerability can be addressed accurately. The robustness of networks exhibiting cascading failures is generally determined by a complex interplay between the network topology and the flow dynamics, where the ratio $\chi=\tau/\tau_0$ determines their relative roles. Comment: 4 pages LaTeX, 4 figures
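
    For orientation, a toy sketch of the static overload cascade that the paper uses as its point of comparison, assuming networkx: loads are taken as (unnormalized) betweenness, capacities as $(1+\alpha)$ times the initial load, and overloaded nodes fail in rounds after a triggering removal. The transient flow dynamics and the overshoot that the paper shows can further reduce robustness are deliberately not modelled here.

```python
# Toy static overload cascade (Motter-Lai-style baseline; assumes networkx).
# The paper's point is that transient flow dynamics can make networks less
# robust than this static picture suggests; the dynamics is not modelled here.
import networkx as nx

def static_cascade(G, trigger, alpha=0.2):
    load0 = nx.betweenness_centrality(G, normalized=False)
    capacity = {v: (1 + alpha) * load0[v] for v in G}
    H = G.copy()
    H.remove_node(trigger)                      # triggering event
    failed = True
    while failed and H.number_of_nodes() > 0:
        load = nx.betweenness_centrality(H, normalized=False)
        overloaded = [v for v in H if load[v] > capacity[v]]
        failed = bool(overloaded)
        H.remove_nodes_from(overloaded)         # loads rebalance next round
    return H.number_of_nodes()

G = nx.barabasi_albert_graph(200, 2, seed=0)
hub = max(G.degree, key=lambda d: d[1])[0]      # fail the largest hub first
print("surviving nodes after cascade:", static_cascade(G, hub))
```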

    Threshold for the Outbreak of Cascading Failures in Degree-degree Uncorrelated Networks

    In complex networks, the failure of one or a few nodes may cause cascading failures. When this dynamical process stops in a steady state, the size of the giant component formed by the remaining un-failed nodes can be used to measure the severity of the cascading failures, which is critically important for estimating the robustness of networks. In this paper, we present a cascading overload failure model with a local load-sharing mechanism, and then explore the threshold of node capacity at which large-scale cascading failures occur and the un-failed nodes in the steady state can no longer connect to each other to form a large connected sub-network. We derive this threshold theoretically for degree-degree uncorrelated networks and validate the effectiveness of the method in simulation. The threshold provides guidance for improving network robustness under limited capacity resources when creating a network and assigning load, and is therefore useful and important for analyzing the robustness of networks. Comment: 11 pages, 4 figures
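
    A brief sketch of a local load-sharing cascade, assuming networkx: when a node fails, its load is split equally among its surviving neighbours, newly overloaded nodes fail in turn, and robustness is read off from the giant component of the un-failed nodes. The degree-proportional initial load and the capacity rule below are illustrative assumptions, not the paper's exact model or its analytic threshold.

```python
# Sketch (assumes networkx): cascade with local load sharing. A failed node's
# load is split equally among its surviving neighbours; nodes whose load
# exceeds capacity fail in turn. Robustness is measured by the giant
# component of the surviving nodes. Parameters are illustrative.
import networkx as nx

def local_sharing_cascade(G, seed_node, beta=0.5):
    load = {v: G.degree(v) for v in G}                 # initial load ~ degree
    capacity = {v: (1 + beta) * load[v] for v in G}    # limited capacity
    alive = set(G.nodes())
    queue = [seed_node]
    while queue:
        v = queue.pop()
        if v not in alive:
            continue
        alive.discard(v)
        nbrs = [u for u in G.neighbors(v) if u in alive]
        if not nbrs:
            continue
        share = load[v] / len(nbrs)                    # local load sharing
        for u in nbrs:
            load[u] += share
            if load[u] > capacity[u]:
                queue.append(u)
    H = G.subgraph(alive)
    giant = max(nx.connected_components(H), default=set(), key=len)
    return len(giant)

G = nx.random_regular_graph(4, 500, seed=2)
print("giant component of un-failed nodes:", local_sharing_cascade(G, 0))
```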

    Optimization of Robustness of Complex Networks

    Networks with a given degree distribution may be very resilient to one type of failure or attack but not to another. The goal of this work is to determine network design guidelines which maximize the robustness of networks to both random failure and intentional attack while keeping the cost of the network (which we take to be the average number of links per node) constant. We find optimal parameters for: (i) scale-free networks having degree distributions with a single power-law regime, (ii) networks having degree distributions with two power-law regimes, and (iii) networks described by degree distributions containing two peaks. Of these various kinds of distributions we find that the optimal network design is one in which all but one of the nodes have the same degree, $k_1$ (close to the average number of links per node), and one node is of very large degree, $k_2 \sim N^{2/3}$, where $N$ is the number of nodes in the network. Comment: Accepted for publication in European Physical Journal
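
    A rough illustration of that design, assuming networkx: a near-bimodal network in which all but one node have degree roughly $k_1$ and a single hub has degree roughly $N^{2/3}$, compared under random failure and highest-degree-first attack by the surviving giant component. The construction and parameters are assumptions for illustration, not the paper's optimisation.

```python
# Rough illustration (assumes networkx): a near-bimodal network with one hub
# of degree ~ N^(2/3) and all other nodes of degree k1, tested under random
# failure and highest-degree-first attack. Not the paper's optimisation.
import random
import networkx as nx

N, k1 = 1000, 3
k2 = int(N ** (2 / 3))                      # single very-high-degree hub
deg_seq = [k1] * (N - 1) + [k2]
if sum(deg_seq) % 2:                        # degree sum must be even
    deg_seq[0] += 1
G = nx.Graph(nx.configuration_model(deg_seq, seed=3))   # drop multi-edges
G.remove_edges_from(nx.selfloop_edges(G))

def giant_after_removal(G, nodes_to_remove):
    H = G.copy()
    H.remove_nodes_from(nodes_to_remove)
    if H.number_of_nodes() == 0:
        return 0
    return len(max(nx.connected_components(H), key=len))

f = 0.3                                     # remove 30% of the nodes
random.seed(3)
rand_removed = random.sample(list(G.nodes()), int(f * N))
attack_removed = [v for v, _ in sorted(G.degree, key=lambda d: -d[1])][: int(f * N)]
print("random failure, giant:", giant_after_removal(G, rand_removed))
print("targeted attack, giant:", giant_after_removal(G, attack_removed))
```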