
    Per-hop Internet Measurement Protocols

    Accurately measuring per-hop packet dynamics on an Internet path is difficult: currently available techniques suffer from well-known limitations, largely because no protocol support exists for measuring an Internet path on a per-hop basis. This thesis classifies these common weaknesses and describes a protocol for per-hop measurement of Internet packet dynamics, the IP Measurement Protocol (IPMP). With IPMP, a specially formed probe packet collects information on its own dynamics from intermediate routers as it is forwarded: the IP address of the interface that received the packet, a timestamp recording when the packet was received, and a counter recording the arrival order of echo packets belonging to the same flow. Probing a path with IPMP directly reveals the path's topology and allows direct measurement of per-hop behaviours such as queueing delay, jitter, reordering, and loss. This is useful in many operational situations, as well as to researchers characterising Internet behaviour. IPMP's design goals of being tightly constrained and easy to implement are tested by building implementations in hardware and software. The implementations presented in this thesis show that an IPMP measurement probe can be processed in hardware without delaying the packet, and in software with little overhead. The thesis presents IPMP-based techniques for measuring per-hop packet delay, jitter, loss, reordering, and capacity that are more robust, require fewer probes, and are potentially more accurate and convenient than corresponding techniques that do not use IPMP.
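The per-hop record described above (interface address, receive timestamp, flow counter) can be illustrated with a small simulation. Field names and the three-hop path are invented for this sketch; the actual IPMP wire format is defined in the thesis, not here:

```python
from dataclasses import dataclass, field

@dataclass
class PathRecord:
    """One per-hop record appended by an IPMP-aware router (names illustrative)."""
    address: str       # IP address of the interface that received the probe
    timestamp_ns: int  # arrival time recorded by the router
    flow_counter: int  # arrival order among echo packets of the same flow

@dataclass
class IpmpProbe:
    records: list = field(default_factory=list)

    def stamp(self, address, timestamp_ns, flow_counter):
        # Each router on the path appends a record as it forwards the probe.
        self.records.append(PathRecord(address, timestamp_ns, flow_counter))

# Simulate one probe traversing a three-hop path.
probe = IpmpProbe()
probe.stamp("192.0.2.1", 1_000_000, 7)
probe.stamp("198.51.100.9", 1_450_000, 7)
probe.stamp("203.0.113.3", 2_100_000, 7)

# The path topology falls out of the recorded addresses, and per-hop
# delay from consecutive timestamp differences.
hops = [r.address for r in probe.records]
per_hop_delay_ns = [b.timestamp_ns - a.timestamp_ns
                    for a, b in zip(probe.records, probe.records[1:])]
print(hops)              # ['192.0.2.1', '198.51.100.9', '203.0.113.3']
print(per_hop_delay_ns)  # [450000, 650000]
```

A single probe thus yields both topology and per-hop timing, which is why fewer probes are needed than with techniques that infer hops indirectly.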

    Algorithms and Requirements for Measuring Network Bandwidth


    Informing protocol design through crowdsourcing measurements

    International Mention in the doctoral degree.
    Middleboxes such as proxies, firewalls and NATs play an important role in the modern Internet ecosystem. On one hand, they perform advanced functions, e.g. traffic shaping, security, or enhancing application performance. On the other hand, they turn the Internet into a hostile ecosystem for innovation, as they limit deviation from deployed protocols. When designing a new protocol, it is therefore essential to first understand its interaction with the elements of the path. The emerging area of crowdsourcing can help shed light on this issue: it allows us to reach large and diverse sets of users, devices and networks on which to perform Internet measurements. In this thesis, we show how to make informed protocol design choices by expanding the traditional crowdsourcing focus from the human element to large-scale crowdsourced measurement platforms. We consider specific use cases, namely pervasive encryption in the modern Internet, TCP Fast Open and ECN++. These use cases advance the global understanding of whether wide adoption of encryption is possible in today's Internet, or whether the adoption of encryption is necessary to guarantee the proper functioning of HTTP/2. We target ECN, and particularly ECN++, given its succession of deployment problems, and we measure ECN deployment over mobile as well as fixed networks. In the process, we discovered some bad news for the base ECN protocol: more than half the mobile carriers we tested wipe the ECN field at the first upstream hop. This thesis also reports the good news that, wherever ECN gets through, we found no deployment problems for the ECN++ enhancement. The thesis includes the results of further in-depth tests checking whether servers that claim to support ECN actually respond correctly to explicit congestion feedback, uncovering some surprising congestion behaviour unrelated to ECN.
This thesis also explores possible causes of the ossification of the modern Internet that hinder innovation. Network Address Translators (NATs) are commonplace in today's Internet: it is fair to say that most residential and mobile users are connected through one or more NATs. Like any technology, NAT has upsides and downsides. Probably its most acknowledged downside is that it introduces additional difficulties for applications such as peer-to-peer applications and gaming to function properly. This is partly due to the nature of NAT itself, but also to the diversity of behaviors among the NAT implementations deployed in the Internet. Understanding the properties of the currently deployed NAT base provides useful input for application and protocol developers about what to expect when deploying new applications in the Internet. We develop NATwatcher, a tool to test NAT boxes using a crowdsourcing-based measurement methodology. We also perform large-scale active measurement campaigns to detect CGNs in fixed broadband networks using NAT Revelio, a tool we have developed and validated. Revelio enables us to determine, actively and from within residential networks, the type of upstream network address translation: NAT at the home gateway (customer-grade NAT) or NAT in the ISP (Carrier-Grade NAT). We deploy Revelio in the FCC Measuring Broadband America testbed operated by SamKnows and in the RIPE Atlas testbed. A part of this thesis focuses on characterizing CGNs in Mobile Network Operators (MNOs). We develop a measurement tool, CGNWatcher, that executes a number of active tests to fully characterize CGN deployments in MNOs. CGNWatcher systematically tests more than 30 behavioural requirements for NATs defined by the Internet Engineering Task Force (IETF), as well as multiple CGN behavioural metrics.
We deploy CGNWatcher in MONROE and perform large measurement campaigns to characterize the real CGN deployments of the MNOs serving the MONROE nodes. Using the tools described above, we run a large measurement campaign, recruiting over 6,000 users from 65 countries and over 280 ISPs. We validate our results with the ISPs at the IP level against the ground truth we collected. To the best of our knowledge, this represents the largest active measurement study of (confirmed) NAT or CGN deployments at the IP level in fixed and mobile networks to date. As part of the thesis, we also characterize roaming across Europe. The goal was to understand whether an MNO changes CGN while roaming; to this end, we run a series of measurements that identify the roaming setup, infer the network configuration of the 16 MNOs we measure, and quantify end-user performance for the roaming configurations we detect. We build a unique roaming measurement platform deployed in six countries across Europe. Using this platform, we measure different aspects of international roaming in 3G and 4G networks, including mobile network configuration, performance characteristics, and content discrimination. We find that operators adopt common approaches to implementing roaming, resulting in additional latency penalties of 60 ms or more, depending on geographical distance. Considering content accessibility, roaming poses additional constraints that lead to only minimal deviations when accessing content from the home country. However, geographical restrictions in the visited country make the picture more complicated and less intuitive.
The results included in this thesis provide useful input for application and protocol designers, ISPs and researchers who aim to make their applications and protocols work across the modern Internet.
Doctoral Programme in Telematics Engineering, Universidad Carlos III de Madrid. Committee: President: Gonzalo Camarillo González; Secretary: María Carmen Guerrero López; Member: Andrés García Saavedr
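The customer-grade vs. carrier-grade NAT distinction that Revelio makes can be sketched with a simple address-space check: a gateway whose WAN-side address is private (RFC 1918) or in the shared CGN space (100.64.0.0/10, RFC 6598) and differs from the publicly observed address suggests a NAT upstream in the ISP. This is a simplified stand-in; the real Revelio methodology uses several active tests, not just address comparison:

```python
import ipaddress

SHARED_CGN_SPACE = ipaddress.ip_network("100.64.0.0/10")  # RFC 6598 shared space

def infer_nat_type(gateway_wan_ip: str, public_ip: str) -> str:
    """Rough NAT-placement inference in the spirit of Revelio (illustrative)."""
    wan = ipaddress.ip_address(gateway_wan_ip)
    if wan == ipaddress.ip_address(public_ip):
        # Gateway holds the public address itself: no NAT beyond the home.
        return "customer-grade NAT only"
    if wan.is_private or wan in SHARED_CGN_SPACE:
        # Non-public WAN address implies another translator in the ISP.
        return "carrier-grade NAT suspected"
    return "inconclusive"

print(infer_nat_type("100.64.12.7", "203.0.113.50"))   # carrier-grade NAT suspected
print(infer_nat_type("203.0.113.50", "203.0.113.50"))  # customer-grade NAT only
```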

    Internet Measurement

    Nowadays, TCP channel estimation is a matter of great importance: communication network metrology is the core of the network performance analysis field, since it allows the network's behaviour to be interpreted and understood through the gathered metrics. In the context of this dissertation, an open-source software project, available on GitHub, was developed. It uses a client-server architecture to estimate the Bulk Transfer Capacity (BTC) and is portable thanks to its Java and Android clients, running on computers, tablets and mobile phones. Two algorithms for measuring the BTC were deployed; their measuring capacity was analysed and optimized, supported by studies of the influence of the TCP windows. The packet-train dispersion algorithm was also implemented and analysed, but it did not yield significant BTC results. The performance of the tool was tested on wired and cellular wireless networks, considering all the major Portuguese network operators. The results were compared to those measured by the iPerf3 reference tool, using a stopping criterion based on Jain's Fairness Index [1] in order to inject as little traffic as possible into the network. The measurement results are in line with the methodology proposed by ETSI and Ofcom for monitoring bandwidth with fixed-time transmissions, and can contribute to reducing the transmission durations required to analyse each network.
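A stopping criterion of the kind described above can be sketched directly from the definition of Jain's Fairness Index, J = (Σx)² / (n · Σx²), which reaches 1.0 when all samples are equal. The window size and threshold below are illustrative assumptions, not the dissertation's actual parameters:

```python
def jains_index(samples):
    # J = (sum x)^2 / (n * sum x^2); 1.0 means perfectly even samples.
    n = len(samples)
    s = sum(samples)
    return (s * s) / (n * sum(x * x for x in samples))

def should_stop(throughputs, window=5, threshold=0.99):
    """Stop the BTC transfer once the last `window` per-interval
    throughputs are stable enough (illustrative criterion)."""
    if len(throughputs) < window:
        return False
    return jains_index(throughputs[-window:]) >= threshold

# Per-interval throughput samples in Mbit/s, already steady:
rates = [40.1, 39.8, 40.3, 40.0, 39.9]
print(jains_index(rates))   # very close to 1.0
print(should_stop(rates))   # True
```

Stopping as soon as the estimate stabilises is what keeps the injected traffic, and hence the transmission duration, low.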

    Passive available bandwidth: Applying self-induced congestion analysis of application-generated traffic

    Monitoring end-to-end available bandwidth is critical to helping applications and users use network resources efficiently. Because the performance of distributed systems is intrinsically linked to the performance of the network, applications that know the available bandwidth can adapt to changing network conditions and optimize their performance. A well-designed available bandwidth tool should be easily deployable and non-intrusive. While several tools have been created to actively measure the end-to-end available bandwidth of a network path, they require instrumentation at both ends of the path, and the traffic they inject may affect the performance of other applications on the path.
    We propose a new passive monitoring system that accurately measures available bandwidth by applying self-induced congestion analysis to traces of application-generated traffic. The Watching Resources from the Edge of the Network (Wren) system transparently provides available bandwidth information to applications, without requiring applications to be modified to take the measurements and with negligible impact on application performance. Wren produces a series of real-time available bandwidth measurements that applications can use to adapt their runtime behavior, or that can be sent to a central monitoring system for use by other or future applications.
    Most active bandwidth tools rely on adjusting the sending rate of packets to infer the available bandwidth. The major obstacle to using passive kernel-level traces of TCP traffic is that we have no control over the traffic pattern.
We demonstrate that there is enough natural variability in the sending rates of TCP traffic that the techniques used by active tools can be applied to traces of application-generated traffic to yield accurate available bandwidth measurements. Wren uses kernel-level instrumentation to collect traces of application traffic and analyzes the traces at user level to achieve the necessary accuracy while avoiding intrusiveness. We introduce new passive bandwidth algorithms based on the principles of the active tools, investigate their effectiveness, implement a real-time system capable of efficiently monitoring available bandwidth, and demonstrate that applications can use Wren measurements to adapt their runtime decisions.
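The self-induced congestion principle behind this approach can be sketched on passively observed (send rate, delay) samples: the available bandwidth is taken as the highest rate that does not yet show a delay increase over the baseline. The function, threshold and trace values below are illustrative, not Wren's actual algorithm:

```python
def estimate_avail_bw(samples):
    """Self-induced-congestion heuristic on passively observed
    (send_rate_mbps, rtt_ms) pairs: return the highest rate that
    shows no meaningful queueing delay above the baseline RTT."""
    samples = sorted(samples)                 # order by send rate
    base_rtt = min(rtt for _, rtt in samples)
    estimate = None
    for rate, rtt in samples:
        if rtt <= base_rtt * 1.1:             # within 10% of baseline: no queueing yet
            estimate = rate
        else:
            break                             # delays trending up: path is saturating
    return estimate

# Samples harvested from a trace of application traffic (invented numbers):
trace = [(10, 20.1), (25, 20.0), (40, 20.4), (55, 26.0), (70, 33.5)]
print(estimate_avail_bw(trace))               # 40
```

The key point from the abstract is that TCP's natural rate variability supplies these differently-rated samples for free, without injecting probe traffic.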

    Efficient techniques for end-to-end bandwidth estimation: performance evaluations and scalable deployment

    Several applications, services, and protocols are conjectured to benefit from knowledge of the end-to-end available bandwidth on a given Internet path. Unfortunately, despite the availability of several bandwidth estimation techniques, these have seen only limited adoption in contemporary applications. We identify two issues that contribute to this state of affairs. First, there is a lack of comprehensive evaluations that could help application developers calibrate the relative performance of these tools; this is especially limiting since tool performance depends on algorithmic, implementation, and temporal aspects of probing for available bandwidth. Second, most existing bandwidth estimation tools impose a large probing overhead on the paths over which bandwidth is measured. This can be a significant deterrent to deploying these tools in distributed infrastructures that need to measure bandwidth on several paths periodically. In this dissertation, we address these two issues through the following contributions. We conduct the first comprehensive black-box evaluation of a large suite of prominent available bandwidth estimation tools on a high-speed network; in this evaluation, we also illustrate the impact that technological and implementation limitations can have on the performance of bandwidth-estimation tools. We conduct the first comprehensive evaluation of available bandwidth estimation algorithms, independent of systemic and implementation biases; here, we illustrate the impact that temporal factors such as measurement timescales have on the observed relative performance of bandwidth-estimation tools. We demonstrate that temporal properties can significantly impact the AB estimation process, and we redesign the interfaces of existing bandwidth-estimation tools so that temporal parameters can be explicitly specified and controlled.
We design AB inference schemes that can scalably and collaboratively infer the available bandwidth for a large set of end-to-end paths. These schemes allow an operator to select the desired operating point in the trade-off between the accuracy and the overhead of AB estimation. We further demonstrate that, in order to monitor the bandwidth on all paths of a network, we do not need access to per-hop bandwidth estimates and can rely on end-to-end bandwidth estimates alone.
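The accuracy/overhead operating point mentioned above can be illustrated with a toy rate search: each iteration sends one probe train and asks whether it induced congestion, so a coarser target resolution directly means fewer probe trains. This is a generic binary-search sketch, not the dissertation's actual inference schemes:

```python
def infer_ab(path_saturates_at, lo=0.0, hi=1000.0, resolution=10.0):
    """Binary search for available bandwidth on [lo, hi] Mbit/s.
    `resolution` is the accuracy/overhead knob: halving it costs
    one extra probe train. (Illustrative model: a probe at rate r
    'sees congestion' iff r exceeds the true available bandwidth.)"""
    probes = 0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        probes += 1
        if mid > path_saturates_at:   # congestion observed at this rate
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2, probes

# Coarser resolution trades accuracy for fewer probe trains.
est_fine, n_fine = infer_ab(437.0, resolution=5.0)
est_coarse, n_coarse = infer_ab(437.0, resolution=50.0)
print(n_fine, n_coarse)        # fine resolution needs more probes
print(n_coarse < n_fine)       # True
```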

    Interoperability of wireless communication technologies in hybrid networks : evaluation of end-to-end interoperability issues and quality of service requirements

    Hybrid networks employing wireless communication technologies have nowadays brought closer the vision of communication "anywhere, any time, with anyone". These technologies comprise various standards, protocols, architectures, characteristics, models, devices, and modulation and coding techniques. The different technologies naturally share some common characteristics, but there are also many important differences. Advances are emerging very rapidly, with new models, characteristics, protocols and architectures, and this rapid evolution imposes many challenges to be addressed. Of particular importance are the interoperability issues of the following wireless technologies: Wireless Fidelity (Wi-Fi, IEEE 802.11), Worldwide Interoperability for Microwave Access (WiMAX, IEEE 802.16), Single Channel Per Carrier (SCPC), Digital Video Broadcasting over Satellite (DVB-S/DVB-S2), and Digital Video Broadcasting Return Channel via Satellite (DVB-RCS). Because of their differences, these technologies do not generally interoperate easily with each other, owing to various interoperability and Quality of Service (QoS) issues. The aim of this study is to assess and investigate end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, throughput, TCP performance, UDP performance, unicast and multicast services, and availability, on hybrid wireless communication networks employing both satellite broadband and terrestrial wireless technologies. The thesis provides an introduction to wireless communication technologies, followed by a review of previous research on hybrid networks (both satellite and terrestrial wireless technologies, particularly Wi-Fi, WiMAX, DVB-RCS, and SCPC).
Previous studies have discussed Wi-Fi, WiMAX, DVB-RCS, SCPC and 3G technologies and their standards, as well as their properties and characteristics, such as operating frequency, bandwidth, data rate, basic configuration, coverage, power, interference, social issues, security problems, and physical- and MAC-layer design and development issues. Although some of these studies provide valuable contributions to this area of research, they are limited to link-layer characteristics, TCP performance, delay, bandwidth, capacity, data rate, and throughput. None of them covers all aspects of end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, link performance, TCP and UDP performance, and unicast and multicast performance at the end-to-end level on hybrid wireless networks. Interoperability issues are discussed in detail, and the different technologies and protocols are compared using appropriate testing tools, assessing various performance measures including bandwidth, delay, jitter, latency, packet loss, throughput and availability. The standards, protocol suites/models and architectures for Wi-Fi, WiMAX, DVB-RCS and SCPC, along with different platforms and applications, are discussed and compared. Using a robust approach, which includes a new testing methodology and a generic test plan, testing was conducted with various realistic scenarios on real networks comprising variable numbers and types of nodes. Data, traces, packets, and files were captured from various live scenarios and sites, and the test results were analysed to measure and compare the characteristics of wireless technologies, devices, protocols and applications. The motivation of this research is to study the end-to-end interoperability issues and Quality of Service requirements of rapidly growing hybrid networks in a comprehensive and systematic way.
The significance of this research is that it is based on a comprehensive and systematic investigation of issues and facts, rather than hypothetical scenarios or simulations; this investigation informed the design of a test methodology for empirical data gathering through real network testing, suitable for measuring single-link or end-to-end issues in hybrid networks using proven test tools. The investigation encompasses an extensive series of tests measuring delay, jitter, packet loss, bandwidth, throughput, availability, the performance of audio and video sessions, multicast and unicast performance, and stress testing. This testing covers the most common scenarios in hybrid networks and yields recommendations for achieving good end-to-end interoperability and QoS. The contributions of the study include the identification of gaps in the research, a description of interoperability issues, a comparison of the most common test tools, a generic test plan, a new testing process and methodology, and analysis and network-design recommendations for end-to-end interoperability issues and QoS requirements. This covers the complete cycle of the research. It is found that UDP is more suitable than TCP for hybrid wireless networks, particularly for the demanding applications considered, since TCP presents significant problems for multimedia and live traffic with strict QoS requirements on delay, jitter, packet loss and bandwidth. The main bottleneck for satellite communication is a delay of approximately 600 to 680 ms, caused by the long distances (and the finite speed of light) involved in communicating over geostationary satellites.
The delay and packet loss can be controlled using various methods, such as traffic classification, traffic prioritization, congestion control, buffer management, delay compensators, protocol compensators, automatic repeat request techniques, flow scheduling, and bandwidth allocation.
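The 600 to 680 ms figure can be sanity-checked with a back-of-envelope propagation calculation. The slant range used here is a typical value for ground stations away from the sub-satellite point, not a number from the thesis:

```python
# Propagation delay over a geostationary (GEO) satellite link.
C_KM_PER_S = 299_792.458        # speed of light
GEO_ALTITUDE_KM = 35_786        # GEO altitude above the equator
SLANT_RANGE_KM = 40_000         # typical slant range for an off-equator station

# One direction traverses the link twice: ground -> satellite -> ground.
one_way_ms = 2 * SLANT_RANGE_KM / C_KM_PER_S * 1000
round_trip_ms = 2 * one_way_ms

print(round(one_way_ms))        # ~267 ms one-way
print(round(round_trip_ms))     # ~534 ms round trip
# Queueing, processing and terrestrial segments push the measured
# end-to-end delay toward the 600-680 ms range reported in the thesis.
```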

    Testing the performance of a commercial active network measurement platform

    In this thesis, a commercial active network measurement platform is tested for performance and accuracy. The platform is also tested for its ability to detect certain events in networks. Two types of measurement probes are tested: the low-performance Brix 100 Verifier and the high-performance Brix 1000 Verifier. It is found that both probe types are accurate when measuring round-trip delay, but do not perform nearly as well when measuring one-way delay. External synchronization, such as GPS, helps the Brix 1000 Verifier reach sub-millisecond measurement accuracy. As Brix 100 Verifiers do not support external synchronization, their accuracy is suitable only for measuring one-way delays larger than a few milliseconds. The platform is able to detect sudden high load levels and router failures in a network, but fails to detect short (sub-second) link breaks.
In the theory part of this thesis, some well-known active measurement methods and mechanisms are presented, challenges related to active measurement are discussed, and some of the recent major academic active measurement projects are introduced.
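The reason one-way delay measurement demands external synchronization, while round-trip delay does not, can be shown with a two-line model. The function is purely illustrative:

```python
def measured_owd_ms(true_owd_ms, clock_offset_ms):
    """One-way delay computed from two unsynchronized clocks:
    receive_timestamp - send_timestamp = true delay + clock offset."""
    return true_owd_ms + clock_offset_ms

# At a ~1 ms delay level, even a 0.5 ms clock offset is a 50% error,
# which is why GPS-grade synchronization is needed for low delays.
print(measured_owd_ms(1.0, 0.5))   # 1.5

# Round-trip time sums the two directions, so the offset cancels:
rtt = measured_owd_ms(1.0, 0.5) + measured_owd_ms(1.0, -0.5)
print(rtt)                         # 2.0
```

This matches the test results: both probe types handle round-trip delay well, but only the externally synchronized Brix 1000 is usable at ~1 ms one-way delay levels.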