
    Topological characteristics of IP networks

    Topological analysis of the Internet is needed for developments in network planning, optimal routing algorithms, failure detection measures, and understanding business models. Accurate measurement, inference, and modelling techniques are fundamental to Internet topology research. A prerequisite for achieving these goals is the measurement of network topologies at different levels of granularity. In this work, I start by studying techniques for inferring, modelling, and generating Internet topologies at both the router and administrative levels. I also compare the mathematical models that are used to characterise various topologies and the generation tools based on them. Many topological models have been proposed to generate Internet Autonomous System (AS) topologies. I use an extensive set of measures and innovative methodologies to compare AS topology generation models with several observed AS topologies. This analysis shows that the existing AS topology generation models fail to capture important characteristics, such as the complexity of the local interconnection structure between ASes. Furthermore, I use routing data from multiple vantage points to show that using additional measurement points significantly affects our observations of local structural properties, such as clustering and node centrality. Degree-based properties, however, are not notably affected by additional measurement locations. The shortcomings of AS topology generation models stem from an underestimation of the complexity of Internet connectivity and from biases in the measurement techniques. An increasing number of synthetic topology generators are available, each claiming to produce representative Internet topologies. Every generator has its own parameters, allowing the user to generate topologies with different characteristics. However, there are no clear guidelines on tuning the values of these parameters to obtain a topology with specific characteristics. I propose a method which allows the optimal parameters of a model to be estimated for a given target topology. The optimisation is performed using the weighted spectral distribution metric, which simultaneously takes into account many of the properties of a graph. In order to understand the dynamics of the Internet, I study the evolution of the AS topology over a period of seven years. To understand the structural changes in the topology, I use the weighted spectral distribution, as this metric reveals differences in the hierarchical structure of two graphs. The results indicate that the Internet is changing from a strongly customer-provider oriented, disassortative network to a soft-hierarchical, peering-oriented, assortative network. This change is indicative of evolving business relationships amongst organisations.
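
    The weighted spectral distribution mentioned above can be made concrete with a short sketch. The Python snippet below is a minimal, illustrative formulation only: the bin count, the exponent N = 4, and the toy graphs are assumptions, and networkx/numpy stand in for whatever tooling the thesis actually used.

```python
# A minimal sketch of a weighted spectral distribution (WSD) comparison
# between an observed and a generated topology. Graph choices, bin count,
# and N = 4 are assumptions for illustration.
import networkx as nx
import numpy as np

def wsd_distance(g1, g2, n=4, bins=50):
    """Compare two graphs via the weighted spectral distribution.

    Eigenvalues of the normalised Laplacian lie in [0, 2]; each bin of the
    spectral density is weighted by (1 - lambda)^n, which emphasises
    eigenvalues away from 1 and hence local structure.
    """
    def spectrum(g):
        lap = nx.normalized_laplacian_matrix(g).toarray()
        return np.linalg.eigvalsh(lap)  # symmetric matrix, real eigenvalues

    edges = np.linspace(0.0, 2.0, bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    f1, _ = np.histogram(spectrum(g1), bins=edges)
    f2, _ = np.histogram(spectrum(g2), bins=edges)
    f1 = f1 / f1.sum()  # normalise counts to a spectral density
    f2 = f2 / f2.sum()
    return np.sum((1.0 - centres) ** n * (f1 - f2) ** 2)

if __name__ == "__main__":
    # Toy stand-ins for an observed AS graph and a generated topology.
    observed = nx.barabasi_albert_graph(500, 2, seed=1)
    generated = nx.erdos_renyi_graph(500, 0.008, seed=1)
    print(f"WSD distance: {wsd_distance(observed, generated):.6f}")
```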

    Exploring DSCP modification pathologies in the internet

    This work is funded by the European Union's Horizon 2020 research and innovation programme under grant agreement no. 644399 (MONROE) through the Open Call and grant agreement no. 644334 (NEAT). The views expressed are solely those of the author(s). The European Commission is not responsible for any use that may be made of that information. Peer reviewed. Publisher PDF.

    Compact routing for the future internet

    The Internet relies on its inter-domain routing system to allow data transfer between any two endpoints regardless of where they are located. This routing system currently uses a shortest-path routing algorithm (modified by local policy constraints) called the Border Gateway Protocol. The massive growth of the Internet has led to large routing tables that will continue to grow. This presents a serious engineering challenge for router designers in the long term, rendering state (routing table) growth at this pace unsustainable. There are various short-term engineering solutions that may slow the growth of the inter-domain routing tables, at the expense of increasing the complexity of the network. In addition, some of these require manual configuration or introduce additional points of failure within the network. These solutions may give an incremental, constant-factor improvement. However, we know from previous work that all shortest-path routing algorithms require forwarding state that grows linearly with the size of the network in the worst case. Rather than attempt to sustain inter-domain routing through a shortest-path routing algorithm, compact routing algorithms exist that guarantee worst-case sub-linear state requirements at all nodes by allowing an upper bound on path length relative to the theoretical shortest path, known as path stretch. Previous work has shown the promise of these algorithms when applied to synthetic graphs with properties similar to the known Internet graph, but they have not been studied in depth on Internet topologies derived from real data. In this dissertation, I demonstrate the consistently strong performance of these compact routing algorithms for inter-domain routing by performing a longitudinal study of two compact routing algorithms on the Internet Autonomous System (AS) graph over time. I then show, using the k-cores graph decomposition algorithm, that the structurally important nodes in the AS graph are highly stable over time. This property makes these nodes suitable for use as the "landmark" nodes used by the most stable of the compact routing algorithms evaluated, and the use of these nodes shows similarly strong routing performance. Finally, I present a decentralised compact routing algorithm for dynamic graphs, and present its state requirements and message overheads on AS graphs using realistic simulation inputs. To allow the continued long-term growth of Internet routing state, an alternative routing architecture may be required. The compact routing algorithms presented in this dissertation offer promise for a scalable future Internet routing system.
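
    As a rough illustration of the k-cores step described above, the sketch below computes the innermost core of an AS graph with networkx and measures its stability between two snapshots. The snapshot file names and edge-list format are hypothetical; this is not the dissertation's actual pipeline.

```python
# A minimal sketch of using k-core decomposition to pick stable "landmark"
# candidates. The snapshot files and edge-list format are assumptions;
# networkx provides the core-number computation.
import networkx as nx

def max_core_nodes(path):
    """Return the set of nodes in the innermost k-core of an AS graph
    stored as a whitespace-separated edge list (one AS link per line)."""
    g = nx.read_edgelist(path)
    g.remove_edges_from(nx.selfloop_edges(g))  # core_number rejects self-loops
    core = nx.core_number(g)
    k_max = max(core.values())
    return {n for n, k in core.items() if k == k_max}

if __name__ == "__main__":
    # Hypothetical yearly AS-graph snapshots; measure how stable the
    # innermost core is between consecutive years (Jaccard similarity).
    snapshots = ["as_graph_2006.txt", "as_graph_2007.txt"]
    cores = [max_core_nodes(p) for p in snapshots]
    jaccard = len(cores[0] & cores[1]) / len(cores[0] | cores[1])
    print(f"innermost-core overlap between snapshots: {jaccard:.2f}")
```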

    Effective Wide-Area Network Performance Monitoring and Diagnosis from End Systems.

    The quality of all network application services running on today's Internet heavily depends on the performance assurance offered by Internet Service Providers (ISPs). Because of their wide-area coverage, large network providers in the core of the Internet are instrumental in determining the network properties of their transit services, especially given the increasing deployment of real-time, latency-sensitive network applications. The end-to-end performance of distributed applications and network services is susceptible to network disruptions in ISP networks. Given the scale and complexity of the Internet, failures and performance problems can occur in different ISP networks, so it is important to efficiently identify and proactively respond to potential problems before they cause serious damage. Existing work on monitoring and diagnosing network disruptions is ISP-centric, relying on each ISP to set up monitors and perform diagnosis within its own network. This approach is limited, as ISPs are unwilling to reveal such data to the public. My dissertation research developed a lightweight active monitoring system to monitor, diagnose, and react to network disruptions using only end hosts, which can help customers assess compliance with their service-level agreements (SLAs). This thesis studies research problems from three indispensable aspects: efficient monitoring, accurate diagnosis, and effective mitigation. This is an essential step towards accountability and fairness on the Internet. To fully understand the limitations of relying on ISP data, this thesis first demonstrates the great impact of monitor selection on monitoring quality and on the interpretation of results. Motivated by the limitations of the ISP-centric approach, this thesis demonstrates two techniques that diagnose two types of fine-grained causes accurately and scalably by exploring information across the routing and data planes, as well as sharing information among multiple locations collaboratively. Finally, we demonstrate the usefulness of the monitoring and diagnosis results with two mitigation applications. The first application is short-term prevention, which avoids choosing problematic routes by exploiting their predictability from history. The second application scalably compares multiple ISPs across four important performance metrics, namely reachability, loss rate, latency, and path diversity, entirely from end systems and without any ISP cooperation. Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64770/1/wingying_1.pd
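
    To make the end-system ISP comparison concrete, here is a minimal sketch of aggregating probe results into per-ISP reachability, loss rate, and median latency. The probe records, addresses, and ISP names are hypothetical, and path diversity is omitted because it requires traceroute-level path data; this is not the dissertation's actual pipeline.

```python
# A minimal sketch of comparing ISPs from end-host probe results alone.
# Each record is (isp, destination, rtt_ms or None when no reply arrived);
# the data below is fabricated for illustration.
from collections import defaultdict
from statistics import median

probes = [
    ("isp_a", "198.51.100.7", 23.4),
    ("isp_a", "198.51.100.7", None),   # lost probe
    ("isp_a", "203.0.113.9", 41.0),
    ("isp_b", "198.51.100.7", 19.8),
    ("isp_b", "203.0.113.9", None),
    ("isp_b", "203.0.113.9", None),    # destination unreachable via isp_b
]

by_isp = defaultdict(list)
for isp, dst, rtt in probes:
    by_isp[isp].append((dst, rtt))

for isp, results in sorted(by_isp.items()):
    rtts = [r for _, r in results if r is not None]
    # Reachability: fraction of destinations that answered at least once.
    dests = {d for d, _ in results}
    reached = {d for d, r in results if r is not None}
    reachability = len(reached) / len(dests)
    loss_rate = 1 - len(rtts) / len(results)
    latency = median(rtts) if rtts else float("nan")
    print(f"{isp}: reachability={reachability:.2f} "
          f"loss={loss_rate:.2f} median_rtt={latency:.1f}ms")
```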

    Improving Network Performance Through Endpoint Diagnosis And Multipath Communications

    Components of networks, and by extension the Internet, can fail. It is therefore important to find the points of failure and resolve existing issues as quickly as possible. Resolution, however, takes time, and it is important to maintain high quality of service (QoS) for existing clients while it is in progress. In this work, our goal is to provide clients with means of avoiding failures when possible, to maintain high QoS, while enabling them to assist in the diagnosis process to speed up the time to recovery. Fixing a failure relies on first detecting that there is one and then identifying where it occurred so as to be able to remedy it. We take a two-step approach in our solution. First, we identify the entity (client, server, network) responsible for the failure. Next, if a failure is identified as network-related, additional algorithms are triggered to detect the device responsible. To achieve the first step, we revisit the question: how much can you infer about a failure using TCP statistics collected at one of the endpoints in a connection? Using an agent that captures TCP statistics at one of the endpoints, we devise a classification algorithm that identifies the root cause of failures. Using insights derived from this classification algorithm, we identify dominant TCP metrics that indicate where and why problems occur. When a failure is identified as a network-related problem, the second step is triggered, where the algorithm uses additional information collected from "failed" connections to identify the device that caused the failure. Failures are also disruptive to users' performance, and resolution may take time, so it is important to shield clients from their effects as much as possible. One option for avoiding problems resulting from failures is to rely on multiple paths, since they are unlikely to go bad at the same time. The use of multiple paths involves both selecting paths (routing) and using them effectively. The second part of this thesis explores the efficacy of multipath communication in such situations. Multipath communication is expected to have monetary implications for ISPs and content providers. Our solution therefore aims to minimise such costs to the content providers while significantly improving user performance.
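
    The endpoint classification step can be illustrated with a small sketch. The decision rules below, written over a handful of per-connection TCP metrics, are hypothetical stand-ins for the thesis's actual classifier: the metric set and thresholds are assumptions chosen only to show the shape of the approach.

```python
# A minimal sketch of endpoint-based root-cause classification: decision
# rules over per-connection TCP statistics that attribute a degraded
# transfer to the client, the server, or the network. Metric names and
# thresholds are illustrative assumptions, not the thesis's actual rules.
from dataclasses import dataclass

@dataclass
class TcpStats:
    retrans_rate: float         # retransmitted segments / segments sent
    srtt_ms: float              # smoothed round-trip time
    rwnd_limited_frac: float    # fraction of time limited by receiver window
    sndbuf_limited_frac: float  # fraction of time limited by local send buffer

def classify_failure(s: TcpStats) -> str:
    """Attribute a degraded connection to a responsible entity."""
    if s.rwnd_limited_frac > 0.5:
        # Receiver keeps advertising a small window: the remote endpoint
        # (or its application) is not draining data fast enough.
        return "server"
    if s.sndbuf_limited_frac > 0.5:
        # The local stack cannot keep the pipe full: client-side cause.
        return "client"
    if s.retrans_rate > 0.05 or s.srtt_ms > 500:
        # Heavy loss or inflated delay with neither endpoint limited
        # points into the network.
        return "network"
    return "unknown"

if __name__ == "__main__":
    sample = TcpStats(retrans_rate=0.12, srtt_ms=180,
                      rwnd_limited_frac=0.1, sndbuf_limited_frac=0.05)
    print(classify_failure(sample))  # -> "network"
```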