
    Beyond Node Degree: Evaluating AS Topology Models

    This is the accepted version of 'Beyond Node Degree: Evaluating AS Topology Models', archived originally at arXiv:0807.2023v1 [cs.NI], 13 July 2008.
    Many models have been proposed to generate Internet Autonomous System (AS) topologies, most of which make structural assumptions about the AS graph. In this paper we compare AS topology generation models with several observed AS topologies. In contrast to most previous work, we avoid making assumptions about which topological properties are important for characterizing the AS topology. Our analysis shows that, although they match degree-based properties, existing AS topology generation models fail to capture the complexity of the local interconnection structure between ASs. Furthermore, we use BGP data from multiple vantage points to show that additional measurement locations significantly affect local structure properties, such as clustering and node centrality, whereas degree-based properties are not notably affected. These observations are particularly pronounced in the core. The shortcomings of AS topology generation models stem from an underestimation of the complexity of connectivity in the core, caused by inappropriate use of BGP data.
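    A minimal sketch of the kind of comparison the abstract describes, assuming networkx; the two toy graphs stand in for an observed AS topology and a model-generated one, and the metric set (clustering, assortativity) is illustrative, not the paper's exact list.

```python
# Sketch: compare degree-based metrics vs local-structure metrics of two
# graphs. The toy graphs below are placeholders for an observed AS graph
# and a generated one; metric choice is illustrative only.
import networkx as nx

def summarize(g: nx.Graph) -> dict:
    degrees = [d for _, d in g.degree()]
    return {
        "mean_degree": sum(degrees) / len(degrees),
        "max_degree": max(degrees),
        # Local interconnection structure, which degree-matching models
        # often fail to reproduce:
        "avg_clustering": nx.average_clustering(g),
        "assortativity": nx.degree_assortativity_coefficient(g),
    }

observed = nx.barabasi_albert_graph(1000, 2, seed=1)   # placeholder "measured" graph
generated = nx.gnm_random_graph(1000, observed.number_of_edges(), seed=2)

for name, g in [("observed", observed), ("generated", generated)]:
    print(name, summarize(g))
```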

    Quantitative analysis of incorrectly-configured bogon-filter detection

    Copyright © 2008 IEEE.
    Newly announced IP addresses (from previously unused IP blocks) are often unreachable. It is common for network operators to filter out address space that is known to be unallocated (“bogon” addresses). However, as allocated address space changes over time, these bogons may become legitimately announced prefixes. Unfortunately, some ISPs still do not configure their bogon filters from the lists published by the Regional Internet Registries (RIRs); instead, they maintain filters manually. It would therefore be desirable to test whether filters block legitimate address space before it is allocated to ISPs and/or end users. Previous work presented a methodology that aims at detecting such wrongly configured filters, so that ISPs can be contacted and asked to update them. This paper extends that methodology by providing a more formal algorithm for finding such filters, and quantitatively assesses the methodology's performance.
    Jon Arnold, Olaf Maennel, Ashley Flavel, Jeremy McMahon, Matthew Roughan
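    A minimal sketch of the filtering check at the heart of the problem, using Python's ipaddress module; the bogon entries are hypothetical stand-ins for the RIR-published lists the abstract mentions, not a real filter.

```python
# Sketch: test whether an announced prefix overlaps a bogon list.
# The entries below are illustrative placeholders; real filters should
# track the RIR-published lists, which change as space is allocated.
import ipaddress

BOGONS = [
    ipaddress.ip_network("10.0.0.0/8"),      # RFC 1918 private space
    ipaddress.ip_network("192.0.2.0/24"),    # documentation prefix
]

def blocked_by_stale_filter(prefix: str) -> bool:
    """Return True if `prefix` overlaps any entry in the (possibly
    outdated) bogon list, i.e. a legitimate announcement could be dropped."""
    net = ipaddress.ip_network(prefix)
    return any(net.overlaps(bogon) for bogon in BOGONS)

# A newly allocated block is wrongly filtered if an operator's manually
# maintained list still contains it:
print(blocked_by_stale_filter("10.1.0.0/16"))     # True: caught by stale entry
print(blocked_by_stale_filter("203.0.113.0/24"))  # False: passes this filter
```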

    CleanBGP: Verifying the consistency of BGP data

    Copyright © 2008 IEEE.
    BGP data contains artifacts introduced by the measurement infrastructure that can substantially affect analysis. This is especially important in operational systems, where "crying wolf" will result in operators ignoring alarms. In this paper, we investigate the causes of measurement artifacts in BGP data, cross-checking and using properties of the data to infer the presence of an artifact and minimize its impact. We have developed a prototype tool, CleanBGP, which detects and corrects the effects of artifacts in BGP data and which we believe should be used prior to any analysis of such data. CleanBGP provides the user with an understanding of the artifacts present and a mechanism to remove their effects, so that the limitations of results can be fully quantified.
    Ashley Flavel, Olaf Maennel, Belinda Chiera, Matthew Roughan and Nigel Bean
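    A minimal sketch of one artifact heuristic in the spirit of the paper: flagging bursts of updates from a single peer that usually indicate a session reset and table re-transfer rather than real routing change. The window and threshold are assumed parameters, not CleanBGP's actual rule.

```python
# Sketch: flag suspected BGP session resets as update bursts from one
# peer. Threshold and window are illustrative, not the tool's rule.
from collections import deque

def suspected_resets(timestamps, window=60.0, burst=10_000):
    """Given sorted update timestamps (seconds) from a single peer,
    return window start times where the update count exceeds `burst`,
    a typical signature of a table transfer after a session reset."""
    flagged, recent = [], deque()
    for t in timestamps:
        recent.append(t)
        while recent and recent[0] < t - window:   # drop old updates
            recent.popleft()
        if len(recent) > burst:
            flagged.append(recent[0])
            recent.clear()                          # skip rest of this burst
    return flagged
```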

    Bigfoot, Sasquatch, the Yeti and other missing links: what we don't know about the AS graph

    Copyright © 2008 ACM.
    The study of the Internet's high-level structure has intrigued scientists for some time. The AS-graph (showing interconnections between Autonomous Systems) has been measured, studied, modelled and discussed in many papers over the last decade. However, the quality of the measurement data has always been in question. It is by now well known that most measurements of the AS-graph are missing some set of links. Many efforts have been undertaken to correct this, primarily by increasing the set of measurements, but the issue remains: how much is enough? When will we know that we have enough measurements to be sure we can see all (or almost all) of the links? This paper aims to address the problem of estimating how many links are missing from our measurements. We use techniques pioneered in biostatistics and epidemiology for estimating the size of populations (for instance, of fish or disease carriers). It is rarely possible to observe entire populations, so sampling techniques are used. We extend these techniques to the domain of the AS-graph. The key difference between our work and the biological literature is that not all links are the same, so we build a stratified model and specify an EM algorithm for estimating its parameters. Our estimates suggest that a very significant number of links (many thousands) are missing from standard route-monitor measurements of the AS-graph. Finally, we use the model to derive the number of monitors that would be needed to see a complete AS-graph with high probability. We estimate that 700 route monitors would see 99.9% of links.
    Randy Bush, Olaf Maennel, Matthew Roughan, Steve Uhlig
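    A minimal sketch of the simplest capture-recapture estimator underlying this style of analysis, treating two route monitors as the two "samples"; the paper's actual model is stratified and fit with EM, which this two-sample version deliberately omits. The link sets are hypothetical.

```python
# Sketch: two-sample capture-recapture (Chapman's bias-corrected
# Lincoln-Petersen) estimate of total AS-graph links from two monitors.
def lincoln_petersen(links_a: set, links_b: set) -> float:
    """Estimate total population size from two overlapping samples."""
    overlap = len(links_a & links_b)
    return (len(links_a) + 1) * (len(links_b) + 1) / (overlap + 1) - 1

# Hypothetical views from two monitors; real inputs would be the AS-AS
# edges seen in each monitor's BGP data.
monitor_a = {("AS1", "AS2"), ("AS2", "AS3"), ("AS3", "AS4")}
monitor_b = {("AS2", "AS3"), ("AS3", "AS4"), ("AS4", "AS5")}
print(lincoln_petersen(monitor_a, monitor_b))  # crude estimate of total links
```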

    Filter-Based RFD: Can We Stabilize Network Without Sacrificing Reachability Too Much?


    Internet optometry: Assessing the broken glasses in internet reachability

    Reachability is thought of as the most basic service provided by today's Internet. Unfortunately, this does not imply that the community has a deep understanding of it. Researchers and operators rely on two views of reachability, control/routing-plane and data-plane measurements, but both types of measurement suffer from biases and limitations. In this paper, we illustrate some of these biases and show how to design controlled experiments that allow us to "see" through the limitations of previous measurement techniques. For example, we discover the extent of default routing and its impact on reachability, which explains some of the unexpected results of previous studies that compared control- and data-plane measurements. However, not all limitations of the visibility given by routing and probing tools can be compensated for by methodological improvements. We show that some of these limitations can be addressed when carefully designing an experiment; for example, not seeing the reverse path taken by a probe can be partly compensated for by our methodology, called dual probing. Compensating for other biases through more measurements, however, may not always be possible. Therefore, calibration of expectations and checks of assumptions are critical when conducting measurements that aim to draw conclusions about topological properties of the Internet.
    Randy Bush, Olaf Maennel, Matthew Roughan, Steve Uhlig
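    A minimal sketch of the control- vs data-plane comparison this line of work builds on, with hypothetical prefix sets; real inputs would come from BGP tables and active probes, and the naive cross-classification below ignores exactly the biases (default routing among them) the paper documents.

```python
# Sketch: cross-classify prefixes by control-plane visibility (a route
# exists in a BGP table) vs data-plane reachability (a probe was
# answered). Hypothetical inputs; note that default routing can make a
# probe succeed toward a prefix with no visible route.
control_plane = {"192.0.2.0/24", "198.51.100.0/24"}   # routes seen in BGP
data_plane = {"192.0.2.0/24", "203.0.113.0/24"}       # probes answered

print("consistent:        ", control_plane & data_plane)
print("route, no response:", control_plane - data_plane)
print("response, no route:", data_plane - control_plane)  # hints at default routing
```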

    Modeling BGP table fluctuations

    In this paper we develop a mathematical model to capture BGP table fluctuations, providing the necessary foundations to study short- and long-term routing table growth. We reason that this growth is operationally critical for network administrators, who need to gauge the amount of memory to install in routers, and is a potential deciding factor in determining when the Internet community will run out of IPv4 address space. We demonstrate that a model using a simple arrival process with heavy-tailed service times is sufficient to reproduce BGP dynamics, including the “spiky” characteristics of the original trace data. We derive our model using a classification technique that separates newly added or removed prefixes, short-term spikes, and long-term stable prefixes. We develop a model of non-stable prefixes and show that their magnitudes and durations have properties similar to those observed in recorded BGP traces.
    Ashley Flavel, Matthew Roughan, Nigel Bean and Olaf Maennel
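    A minimal simulation sketch of the model class the abstract describes: prefixes arrive in a simple (here, Poisson) process and persist for heavy-tailed durations. The rate and tail index are assumed values for illustration, not parameters fitted in the paper.

```python
# Sketch: simulate routing-table size under Poisson prefix arrivals
# with heavy-tailed (Pareto) lifetimes. Parameters are illustrative.
import heapq
import random

random.seed(0)
RATE = 5.0    # prefix arrivals per time unit (assumed)
ALPHA = 1.5   # Pareto tail index; alpha < 2 gives heavy tails (assumed)

t, departures = 0.0, []
for _ in range(20_000):
    t += random.expovariate(RATE)             # next arrival time
    lifetime = random.paretovariate(ALPHA)    # heavy-tailed stay in the table
    heapq.heappush(departures, t + lifetime)
    while departures and departures[0] <= t:  # expire withdrawn prefixes
        heapq.heappop(departures)

print(f"table size at t={t:.1f}: {len(departures)} active prefixes")
```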

    On the predictive power of shortest-path weight inference

    Copyright © 2008 ACM.
    Reverse engineering of the Internet is a valuable activity. Apart from providing scientific insight, the resulting datasets are invaluable in providing realistic network scenarios for other researchers. The Rocketfuel project attempted this process, but it is surprising how little effort has been made to validate its results. This paper concentrates on validating a particular inference methodology used to obtain link weights on a network. There is a basic difficulty in assessing the accuracy of such inferences: a non-unique set of link weights may produce the same routing, so simple measures of accuracy (even where ground-truth data are available) do not capture the usefulness of a set of inferred weights. We propose a methodology based on predictive power to assess the quality of the weight inference. We used this to test Rocketfuel's algorithm; our tests suggest that it is reasonably good, particularly on certain topologies, though it has limitations when its underlying assumptions are incorrect.
    Andrew Coyle, Miro Kraetzl, Olaf Maennel and Matthew Roughan
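    A minimal sketch of a predictive-power check in the spirit of the abstract, assuming networkx: do inferred weights reproduce the shortest paths chosen under the true weights? The noisy "inferred" weights below are a placeholder for an actual inference algorithm such as Rocketfuel's.

```python
# Sketch: score inferred link weights by how often they reproduce the
# same shortest paths as the true weights, rather than by raw weight
# error. Perturbed weights stand in for an inference algorithm.
import random
import networkx as nx

random.seed(0)
g = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=1)
for u, v in g.edges():
    g[u][v]["true_w"] = random.uniform(1, 10)
    # Hypothetical "inferred" weights: true weights plus noise.
    g[u][v]["inferred_w"] = g[u][v]["true_w"] * random.uniform(0.8, 1.2)

pairs = [(u, v) for u in g for v in g if u < v]
agree = sum(
    nx.shortest_path(g, u, v, weight="true_w")
    == nx.shortest_path(g, u, v, weight="inferred_w")
    for u, v in pairs
)
print(f"paths reproduced: {agree}/{len(pairs)}")
```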

    AutoNetkit: simplifying large scale, open-source network experimentation

    We present a methodology that brings simplicity to large and complex test labs through abstraction. The networking community has appreciated the value of large-scale test labs for exploring complex network interactions, as seen in projects such as PlanetLab, GENI, DETER, Emulab, and SecSI. Virtualization has enabled the creation of many more such labs. However, one problem remains: it is time-consuming, tedious and error-prone to set up and configure large-scale test networks. Separate devices need to be configured in a coordinated way, even in a virtual lab. AutoNetkit, an open-source tool, uses abstractions and defaults to achieve both the configuration and the deployment of such large-scale virtual labs. This allows researchers and operators to explore new protocols, create complex models of networks and predict the consequences of configuration changes. Our abstractions also open up discussion of the broader configuration-management problem: abstractions that currently configure networks in a test lab could, in the future, be employed in configuration-management tools for real networks.
    Simon Knight, Askar Jaboldinov, Olaf Maennel, Iain Phillips and Matthew Roughan
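    A minimal sketch of the abstraction idea the abstract describes: derive coordinated per-device configuration from a single network-level description plus defaults. This is an illustration of the concept, not AutoNetkit's actual API, templates, or output format.

```python
# Sketch: render per-router config from one network graph plus
# network-wide defaults, so devices stay coordinated by construction.
# Names, subnets and the config dialect are illustrative only.
import networkx as nx

net = nx.Graph()
net.add_edge("r1", "r2", subnet="10.0.12.0/30")
net.add_edge("r2", "r3", subnet="10.0.23.0/30")
DEFAULTS = {"igp": "ospf", "area": 0}   # assumed network-wide defaults

def render_config(router: str) -> str:
    """Generate a toy config stanza for one router from the graph."""
    lines = [f"hostname {router}", f"router {DEFAULTS['igp']}"]
    for neigh in net.neighbors(router):
        subnet = net[router][neigh]["subnet"]
        lines.append(f"  network {subnet} area {DEFAULTS['area']}")
    return "\n".join(lines)

for r in net.nodes():
    print(render_config(r), end="\n\n")
```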
