
    Beyond Node Degree: Evaluating AS Topology Models

    This is the accepted version of 'Beyond Node Degree: Evaluating AS Topology Models', archived originally at arXiv:0807.2023v1 [cs.NI] 13 July 2008. Many models have been proposed to generate Internet Autonomous System (AS) topologies, most of which make structural assumptions about the AS graph. In this paper we compare AS topology generation models with several observed AS topologies. In contrast to most previous works, we avoid making assumptions about which topological properties are important to characterize the AS topology. Our analysis shows that, although they match degree-based properties, the existing AS topology generation models fail to capture the complexity of the local interconnection structure between ASes. Furthermore, we use BGP data from multiple vantage points to show that additional measurement locations significantly affect local structure properties, such as clustering and node centrality. Degree-based properties, however, are not notably affected by additional measurement locations. These observations are particularly valid in the core. The shortcomings of AS topology generation models stem from an underestimation of the complexity of the connectivity in the core, caused by inappropriate use of BGP data.
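
    As a rough illustration of the kind of comparison described above, the sketch below contrasts a degree-based property (average degree) with local-structure properties (clustering and betweenness centrality) on a synthetic graph. It is a minimal example using networkx; the Barabasi-Albert generator merely stands in for an AS topology model and is not one of the models evaluated in the paper.

        import networkx as nx

        # Placeholder for a graph produced by an AS topology generation model
        # (Barabasi-Albert is illustrative only, not one of the paper's models).
        G = nx.barabasi_albert_graph(n=2000, m=2, seed=42)

        # Degree-based property: the kind of property models typically match.
        degrees = [d for _, d in G.degree()]
        print("average degree:", sum(degrees) / len(degrees))

        # Local interconnection structure: clustering and node centrality,
        # the properties the paper reports as poorly captured and as sensitive
        # to the number of BGP vantage points.
        print("average clustering:", nx.average_clustering(G))
        bc = nx.betweenness_centrality(G, k=200, seed=42)  # sampled approximation
        print("max betweenness centrality:", max(bc.values()))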

    Understanding Internet topology: principles, models, and validation

    Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that for the Internet, an improved understanding of its physical infrastructure is possible by viewing the physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers in the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, by relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that our approach provides insight into the origin of high variability in measured or inferred router-level maps; 2) demonstrate that it easily accommodates the incorporation of additional objectives of network design (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.
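
    The sketch below is a toy rendering of the "annotated graph" view described above: links carry bandwidth annotations and each router is checked against a technology constraint on the capacity it can terminate. All names and numbers are made up for illustration and are not drawn from Abilene, Rocketfuel, or the paper.

        import networkx as nx

        # Router-level connectivity as an annotated graph: edges carry bandwidth,
        # and routers are subject to an assumed per-router capacity limit
        # (a stand-in for "router technology" constraints). Values are illustrative.
        G = nx.Graph()
        G.add_edge("r1", "r2", bandwidth_gbps=10)
        G.add_edge("r1", "r3", bandwidth_gbps=10)
        G.add_edge("r2", "r3", bandwidth_gbps=40)

        ROUTER_CAPACITY_GBPS = 60  # hypothetical technology constraint

        for router in G.nodes():
            terminated = sum(G[router][nbr]["bandwidth_gbps"] for nbr in G[router])
            print(f"{router}: terminates {terminated} Gbps, "
                  f"feasible={terminated <= ROUTER_CAPACITY_GBPS}")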

    Compact Routing on Internet-Like Graphs

    The Thorup-Zwick (TZ) routing scheme is the first generic stretch-3 routing scheme delivering a nearly optimal local memory upper bound. Using both direct analysis and simulation, we calculate the stretch distribution of this routing scheme on random graphs with power-law node degree distributions, $P_k \sim k^{-\gamma}$. We find that the average stretch is very low and virtually independent of $\gamma$. In particular, for the Internet interdomain graph, $\gamma \sim 2.1$, the average stretch is around 1.1, with up to 70% of paths being shortest. As the network grows, the average stretch slowly decreases. The routing table is very small, too. It is well below its upper bounds, and its size is around 50 records for $10^4$-node networks. Furthermore, we find that both the average shortest path length (i.e. distance) $\bar{d}$ and the width of the distance distribution $\sigma$ observed in the real Internet inter-AS graph have values that are very close to the minimums of the average stretch in the $\bar{d}$- and $\sigma$-directions. This leads us to the discovery of a unique critical quasi-stationary point of the average TZ stretch as a function of $\bar{d}$ and $\sigma$. The Internet distance distribution is located in a close neighborhood of this point. This observation suggests the analytical structure of the average stretch function may be an indirect indicator of some hidden optimization criteria influencing the Internet's interdomain topology evolution. Comment: 29 pages, 16 figures.
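
    The sketch below computes the two quantities the abstract relates to the critical point of the average TZ stretch, the average distance $\bar{d}$ and the width $\sigma$ of the distance distribution, on a synthetic graph. It does not implement the Thorup-Zwick scheme itself, and the Barabasi-Albert generator does not reproduce $\gamma \sim 2.1$; it is only a placeholder for a power-law topology.

        import networkx as nx
        import numpy as np

        # Placeholder power-law-ish topology (not gamma ~ 2.1, illustration only).
        G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)

        # Collect all pairwise shortest-path distances.
        dists = []
        for _, lengths in nx.all_pairs_shortest_path_length(G):
            dists.extend(l for l in lengths.values() if l > 0)
        dists = np.array(dists)

        d_bar = dists.mean()   # average distance, \bar{d}
        sigma = dists.std()    # width of the distance distribution, \sigma
        print(f"d_bar = {d_bar:.2f}, sigma = {sigma:.2f}")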

    Topological characteristics of IP networks

    Topological analysis of the Internet is needed for developments in network planning, optimal routing algorithms, failure detection measures, and understanding business models. Accurate measurement, inference and modelling techniques are fundamental to Internet topology research. A requirement towards achieving such goals is the measurement of network topologies at different levels of granularity. In this work, I start by studying techniques for inferring, modelling, and generating Internet topologies at both the router and administrative levels. I also compare the mathematical models that are used to characterise various topologies and the generation tools based on them. Many topological models have been proposed to generate Internet Autonomous System (AS) topologies. I use an extensive set of measures and innovative methodologies to compare AS topology generation models with several observed AS topologies. This analysis shows that the existing AS topology generation models fail to capture important characteristics, such as the complexity of the local interconnection structure between ASes. Furthermore, I use routing data from multiple vantage points to show that using additional measurement points significantly affects our observations about local structural properties, such as clustering and node centrality. Degree-based properties, however, are not notably affected by additional measurement locations. The shortcomings of AS topology generation models stem from an underestimation of the complexity of the connectivity in the Internet and from biases of measurement techniques. An increasing number of synthetic topology generators are available, each claiming to produce representative Internet topologies. Every generator has its own parameters, allowing the user to generate topologies with different characteristics. However, there exist no clear guidelines on tuning the values of these parameters in order to obtain a topology with specific characteristics. I propose a method which allows optimal parameters of a model to be estimated for a given target topology. The optimisation is performed using the weighted spectral distribution metric, which simultaneously takes into account many properties of a graph. In order to understand the dynamics of the Internet, I study the evolution of the AS topology over a period of seven years. To understand the structural changes in the topology, I use the weighted spectral distribution, as this metric reveals differences in the hierarchical structure of two graphs. The results indicate that the Internet is changing from a strongly customer-provider oriented, disassortative network to a soft-hierarchical, peering-oriented, assortative network. This change is indicative of evolving business relationships amongst organisations.
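
    As a sketch of how such a comparison could be set up, the snippet below computes a weighted spectral distribution from the eigenvalues of the normalized Laplacian, following the commonly cited definition in which each eigenvalue bin is weighted by $(1-\lambda)^N$, and uses it to compare two synthetic graphs. The binning and normalisation details here are assumptions and may differ from the exact definition used in the thesis.

        import networkx as nx
        import numpy as np

        def weighted_spectral_distribution(G, N=4, bins=50):
            # Eigenvalues of the normalized Laplacian lie in [0, 2].
            lam = np.linalg.eigvalsh(nx.normalized_laplacian_matrix(G).toarray())
            hist, edges = np.histogram(lam, bins=bins, range=(0.0, 2.0), density=True)
            centers = (edges[:-1] + edges[1:]) / 2.0
            return ((1.0 - centers) ** N) * hist  # weighted spectrum per bin

        # Distance between the weighted spectra of two (here synthetic) topologies.
        G1 = nx.barabasi_albert_graph(500, 2, seed=1)
        G2 = nx.barabasi_albert_graph(500, 3, seed=1)
        w1 = weighted_spectral_distribution(G1)
        w2 = weighted_spectral_distribution(G2)
        print("WSD distance:", float(np.sum((w1 - w2) ** 2)))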

    Analysis of Social Network Measures with Respect to Structural Properties of Networks

    Social Network Analysis (SNA), the study of social interactions within a group, spans many different fields of study, ranging from psychology to biology to information sciences. Over the past half century, many analysts outside of the social science field have taken SNA concepts and theories and have applied them to an array of networks in the hope of formulating mathematical descriptions of the relations within the network of interest. More than 50 measures of networks have been identified across these fields; however, little research has examined the findings of these measures for possible relationships. This thesis tests a set of widely accepted SNA measures for correlation and redundancies with respect to the most accepted network structural properties: size, clustering coefficients, and scale-free parameters. The goal of the thesis is to investigate the SNA measures' ability to discriminate and identify different actors in a network. As a result, the study not only identifies high correlation amongst many of the measures, it also aids analysts in identifying which measure best suits a network with specific structural properties, and the measure's efficiency for a given analysis goal.
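
    The snippet below illustrates the kind of redundancy check described above: it computes several widely used centrality measures on a single synthetic network and reports their pairwise Spearman correlations. The choice of measures, network generator, and correlation statistic is illustrative and not taken from the thesis.

        import networkx as nx
        from scipy.stats import spearmanr

        # Synthetic network with tunable size and clustering (illustrative only).
        G = nx.powerlaw_cluster_graph(n=500, m=3, p=0.1, seed=7)

        measures = {
            "degree": nx.degree_centrality(G),
            "betweenness": nx.betweenness_centrality(G),
            "closeness": nx.closeness_centrality(G),
            "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
        }

        nodes = list(G.nodes())
        names = list(measures)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                rho, _ = spearmanr([measures[a][n] for n in nodes],
                                   [measures[b][n] for n in nodes])
                print(f"{a} vs {b}: Spearman rho = {rho:.2f}")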

    Treewidth and Hyperbolicity of the Internet

    We study the measurement of the Internet according to two graph parameters: treewidth and hyperbolicity. Both tell how far from a tree a graph is. They are computed from snapshots of the Internet released by CAIDA, DIMES, AQUALAB, UCLA, Rocketfuel and Strasbourg University, at the AS or at the router level. On the one hand, the treewidth of the Internet appears to be quite large, making it far from a tree in that respect and reflecting a high degree of connectivity. This proves the existence of a well-linked core in the Internet. On the other hand, the hyperbolicity (as a graph parameter) appears to be very low, reflecting a tree-like structure with respect to distances. Additionally, we compute the treewidth and hyperbolicity obtained for classical Internet models and compare them with the snapshots.
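
    For readers who want to reproduce the flavour of these measurements, the sketch below computes a heuristic upper bound on treewidth with networkx and a sampled lower bound on Gromov hyperbolicity via the four-point condition. Both are rough approximations on a synthetic graph, not the methods or datasets used in the paper.

        import random
        import networkx as nx
        from networkx.algorithms.approximation import treewidth_min_degree

        G = nx.barabasi_albert_graph(200, 2, seed=3)  # placeholder for a snapshot

        # Heuristic upper bound on treewidth (exact computation is NP-hard).
        width, _ = treewidth_min_degree(G)
        print("treewidth upper bound (min-degree heuristic):", width)

        # Gromov hyperbolicity via sampled four-point conditions: for a quadruple,
        # delta = (largest - second largest of the three pairwise distance sums) / 2.
        # Sampling quadruples yields only a lower bound on the true delta.
        dist = dict(nx.all_pairs_shortest_path_length(G))
        nodes = list(G.nodes())
        rng = random.Random(0)
        delta = 0.0
        for _ in range(20000):
            x, y, u, v = rng.sample(nodes, 4)
            sums = sorted([dist[x][y] + dist[u][v],
                           dist[x][u] + dist[y][v],
                           dist[x][v] + dist[y][u]])
            delta = max(delta, (sums[2] - sums[1]) / 2.0)
        print("hyperbolicity lower bound (sampled):", delta)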

    A CLUSTERING-BASED SELECTIVE PROBING FRAMEWORK TO SUPPORT INTERNET QUALITY OF SERVICE ROUTING

    The advent of multimedia applications has triggered widespread interest in QoS support. Two Internet-based QoS frameworks have been proposed: Integrated Services (IntServ) and Differentiated Services (DiffServ). IntServ supports service guarantees on a per-flow basis. The framework, however, is not scalable, because routers have to maintain a large amount of state information for each supported flow. DiffServ was proposed as an alternative solution to address the lack of scalability of the IntServ framework. DiffServ uses class-based service differentiation to achieve aggregate support for QoS requirements. This approach eliminates the need to maintain per-flow state on a hop-by-hop basis and considerably reduces the overhead routers incur in forwarding traffic.

    Both the IntServ and DiffServ frameworks focus on packet scheduling. As such, they decouple routing from QoS provisioning. This typically results in inefficient routes, thereby limiting the ability of the network to support QoS requirements and to manage resources efficiently. The goal of this thesis is to address this shortcoming. We propose a scalable QoS routing framework to identify and select paths that are very likely to meet the QoS requirements of the underlying applications. The tenet of our approach is to seamlessly integrate routing into the DiffServ framework to extend its ability to support QoS requirements. Scalability is achieved using selective probing and clustering to reduce signaling and router overhead.

    The major contributions of this thesis are as follows. First, we propose a scalable routing architecture that supports QoS requirements. The architecture seamlessly integrates the QoS traffic requirements of the underlying applications into a DiffServ framework. Second, we propose a new delay-based clustering method, referred to as d-median. The proposed clustering method groups Internet nodes into clusters, whereby nodes in the same cluster exhibit equivalent delay characteristics. Each cluster is represented by an anchor node. Anchors use selective probing to estimate QoS parameters and select appropriate paths for traffic forwarding. A thorough study to evaluate the performance of the proposed d-median clustering algorithm is conducted. The results of the study show that, for power-law graphs such as the Internet, the d-median clustering approach outperforms the set-covering method commonly proposed in the literature. The study shows that widely used clustering methods, such as set covering or k-median, are inadequate to capture the balance between cluster sizes and the number of clusters. The results also show that the proposed clustering method, applied to power-law graphs, is robust to changes in the size and delay distribution of the network. Finally, the results suggest that the delay bound input parameter of the d-median scheme should be no less than 1 and no more than 4 times the average delay per hop of the network. This is mostly due to the weak hierarchy of the Internet resulting from its power-law structure and the prevalence of the small-world property.
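
    As a loose illustration of delay-based clustering with anchors, the sketch below greedily picks anchors and assigns every node within a delay bound d to the picked anchor's cluster. This is a hypothetical stand-in, not the d-median algorithm developed in the thesis; the toy topology, link delays, and the choice of d (twice the per-hop delay, within the 1x-4x range suggested above) are all illustrative.

        import networkx as nx

        def greedy_delay_clusters(G, d, weight="delay"):
            # Hypothetical greedy scheme: repeatedly pick an uncovered node as an
            # anchor and assign to it every uncovered node within delay bound d.
            clusters, uncovered = {}, set(G.nodes())
            while uncovered:
                anchor = max(uncovered, key=G.degree)  # prefer well-connected anchors
                reach = nx.single_source_dijkstra_path_length(
                    G, anchor, cutoff=d, weight=weight)
                members = set(reach) & uncovered
                clusters[anchor] = members
                uncovered -= members
            return clusters

        # Toy topology with uniform per-link delays (milliseconds, illustrative).
        G = nx.barabasi_albert_graph(100, 2, seed=5)
        for u, v in G.edges():
            G[u][v]["delay"] = 5.0
        clusters = greedy_delay_clusters(G, d=2 * 5.0)  # d = 2x the per-hop delay
        print("number of clusters:", len(clusters))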

    On the Validity of Flow-level TCP Network Models for Grid and Cloud Simulations

    Researchers in the area of distributed computing conduct many of their experiments in simulation. While packet-level simulation is widely used to study network protocols, it can be too costly to simulate network communications for large-scale systems and applications. The alternative is to simulate the network based on less costly flow-level models. Surprisingly, in the literature, validation of these flow-level models is at best a mere verification for a few simple cases. Consequently, although distributed computing simulators are often used, their ability to produce scientifically meaningful results is in doubt. In this work we focus on the validation of state-of-the-art flow-level network models of TCP communication, via comparison to packet-level simulation. While it is straightforward to show cases in which previously proposed models lead to good results, we instead systematically seek cases that lead to invalid results. Careful analysis of these cases reveals fundamental flaws and also suggests improvements. One contribution of this work is that these improvements lead to a new model that, while far from being perfect, improves upon all previously proposed models. A more important contribution, perhaps, is provided by the pitfalls and unexpected behaviors encountered in this work, leading to a number of enlightening lessons. In particular, this work shows that model validation cannot be achieved solely by exhibiting (possibly many) "good cases." Confidence in the quality of a model can only be strengthened through an invalidation approach that attempts to prove the model wrong.
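
    To make the distinction concrete, the sketch below shows the bare idea behind a flow-level model: instead of simulating individual packets, bandwidth is shared among flows on the links they traverse, here with simple max-min fairness via progressive filling. Validated flow-level TCP models, including the one discussed in the paper, add RTT dependence and protocol-specific corrections; this is only the baseline notion, not the model evaluated above.

        def max_min_share(flows, capacity):
            """Progressive-filling max-min fair sharing.
            flows: {flow_id: [link, ...]}, capacity: {link: capacity in Mb/s}."""
            rate = {f: 0.0 for f in flows}
            active = set(flows)
            cap = dict(capacity)
            while active:
                # Smallest equal increment any link can still offer its active flows.
                share = min(cap[l] / sum(1 for f in active if l in flows[f])
                            for f in active for l in flows[f])
                for f in active:
                    rate[f] += share
                for l in cap:
                    cap[l] -= share * sum(1 for f in active if l in flows[f])
                saturated = {l for l in cap if cap[l] <= 1e-9}
                # Flows crossing a saturated link are frozen at their current rate.
                active = {f for f in active if not saturated & set(flows[f])}
            return rate

        flows = {"A": ["L1"], "B": ["L1", "L2"], "C": ["L2"]}
        capacity = {"L1": 100.0, "L2": 60.0}
        print(max_min_share(flows, capacity))  # {'A': 70.0, 'B': 30.0, 'C': 30.0}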