5 research outputs found

    The Robustness of Scale-free Networks Under Edge Attacks with the Quantitative Analysis

    Previous studies on the invulnerability of scale-free networks under edge attacks concluded that scale-free networks are fragile under selective attacks. However, those studies relied on qualitative methods with imprecise definitions of robustness. This paper therefore employs a quantitative method to analyze the invulnerability of scale-free networks, using four scale-free networks as the experimental group and four random networks as the control group. The experimental results show that some scale-free networks are robust under selective edge attacks, in contrast to previous studies. The paper then analyzes the difference between these results and earlier findings and suggests reasonable explanations.
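    A minimal sketch of the kind of experiment the abstract describes: remove edges from a scale-free network in a selective order and track the decay of global network efficiency. The attack order used here (descending product of the endpoints' initial degrees) and the network sizes are assumptions for illustration, not details taken from the paper.

        # Hypothetical sketch: selective edge attack on a scale-free network,
        # recording global efficiency after each removal.
        import networkx as nx

        G = nx.barabasi_albert_graph(200, 2, seed=1)            # scale-free test network
        edges = sorted(G.edges(),
                       key=lambda e: G.degree(e[0]) * G.degree(e[1]),
                       reverse=True)                             # assumed "selective" attack order

        efficiency_curve = []
        for u, v in edges:
            G.remove_edge(u, v)
            efficiency_curve.append(nx.global_efficiency(G))     # efficiency after each removal

        print(efficiency_curve[:10])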

    Modelling Multi-Trait Scale-free Networks by Optimization

    Recently, a paper in Nature (Papadopoulos, 2012) revived an old debate on the origin of the scale-free property of complex networks: whether or not the property arises from optimization. Because real-world complex networks often exhibit multiple traits, any explanation of the scale-free property should be able to account for the other traits as well. This paper proposes a framework that models multi-trait scale-free networks by optimization and uses three examples to demonstrate its effectiveness. The results suggest that optimization is a more general explanation because it accounts not only for the origin of the scale-free property but also for the origin of the other traits in a uniform way. The paper thus provides a universal method for obtaining idealized networks for research on topics such as epidemic spreading and synchronization on complex networks.

    A quantitative method for determining the robustness of complex networks

    Most current studies estimate the invulnerability of complex networks with a qualitative method that analyzes the decay rate of network efficiency, and this imprecision leads to confusion over the invulnerability of various types of complex networks. By normalizing network efficiency and defining a baseline, this paper defines the invulnerability index as the integral of the difference between the normalized network-efficiency curve and the baseline. This quantitative method seeks to establish a benchmark for the robustness and fragility of networks and to measure network invulnerability under both edge and node attacks. To validate the reliability of the proposed method, three small-world networks were selected as test beds. The simulation results indicate that the proposed invulnerability index can effectively and accurately quantify network resilience, and the index should provide a valuable reference for determining network invulnerability in future research.
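    A minimal sketch of the index defined above: normalize the efficiency curve to its initial value and integrate the gap between the curve and a baseline over the attack process. The linear baseline (1 minus the fraction of edges removed), the attack order, and the test network are assumptions for illustration; the paper's exact baseline and test beds are not specified in the abstract.

        # Hypothetical sketch of an invulnerability index: integral of the
        # difference between the normalized efficiency curve and a baseline.
        import numpy as np
        import networkx as nx

        def invulnerability_index(G, attack_order):
            G = G.copy()                                          # leave the input graph intact
            e0 = nx.global_efficiency(G)
            curve = []
            for u, v in attack_order:
                G.remove_edge(u, v)
                curve.append(nx.global_efficiency(G) / e0)        # normalized efficiency
            f = np.linspace(0.0, 1.0, len(curve))                 # fraction of edges removed
            baseline = 1.0 - f                                    # assumed linear baseline
            return np.trapz(np.array(curve) - baseline, f)        # signed area between curves

        G = nx.watts_strogatz_graph(100, 4, 0.1, seed=1)          # small-world test network
        order = sorted(G.edges(),
                       key=lambda e: G.degree(e[0]) * G.degree(e[1]),
                       reverse=True)
        print(invulnerability_index(G, order))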

    A simple model clarifies the complicated relationships of complex networks

    Real-world networks such as the Internet and the WWW share many common traits. Hundreds of models have been proposed to characterize these traits and so aid the understanding of such networks. Because different models use very different mechanisms, it is widely believed that these traits arise from different causes. However, we find that a simple model based on optimisation can produce many of them, including scale-free, small-world, ultra-small-world, delta-distribution, compact, fractal, regular and random networks. Moreover, a revised version of the model generates networks with community structure. With this model and its revised versions, the complicated relationships among complex networks are illustrated. The model brings a new, universal perspective to the understanding of complex networks and provides a universal method for modelling them from the viewpoint of optimisation.

    Ranking the Importance of Nodes of Complex Networks by the Equivalence Classes Approach

    Identifying the importance of nodes in complex networks is of interest to research on social networks, biological networks and related fields. Researchers have proposed several measures and algorithms, such as betweenness, PageRank and HITS, to identify node importance. However, these measures are based on different properties of nodes and often conflict with one another, so a reasonable, fair standard is needed for evaluating and comparing them. This paper develops a framework to serve as such a standard for ranking node importance. Four intuitive rules are suggested to measure node importance, and an equivalence-classes approach is employed to resolve conflicts and aggregate the results of the rules. To compare the algorithms quantitatively, performance indicators are also proposed based on a similarity measure. Three widely used real-world networks serve as test beds. The experimental results illustrate the feasibility of this framework and show that both PageRank and HITS perform well, though with bias, on the tested networks. Furthermore, the paper uses the proposed approach to analyze the structure of the Internet and extracts the densely linked kernel of the Internet.
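    A minimal sketch of the comparison task the abstract describes: compute node-importance rankings with betweenness, PageRank and HITS and measure how much they agree. The paper's four rules and equivalence-class aggregation are not detailed in the abstract, so Kendall's tau on a standard test graph is used here as a stand-in similarity measure; the choice of graph and measure are assumptions.

        # Hypothetical sketch: pairwise agreement between node-importance rankings.
        import networkx as nx
        from scipy.stats import kendalltau

        G = nx.karate_club_graph()                                # assumed test network
        nodes = list(G)

        scores = {
            "betweenness": nx.betweenness_centrality(G),
            "pagerank": nx.pagerank(G),
            "hits_authority": nx.hits(G)[1],                      # nx.hits returns (hubs, authorities)
        }

        names = list(scores)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                tau, _ = kendalltau([scores[a][n] for n in nodes],
                                    [scores[b][n] for n in nodes])
                print(f"{a} vs {b}: tau = {tau:.2f}")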