56 research outputs found

    Quality of service (QoS) support for multimedia applications in large-scale networks

    This dissertation studied issues pertaining to QoS provision for multimedia applications at the application layer. We first studied Internet routing pathology and Internet routing stability by repeating the experimental and analytical methods conducted by Paxson in 1996; no similar study had been done in recent years. Our findings show that the routing behavior of the Internet in 2006 differs from that reported in 1996 in some important aspects. Second, we investigated different stochastic models (e.g. self-similar processes, Auto-Regressive Integrated Moving-Average (ARIMA)) in order to find a suitable model that describes the available bandwidth of an end-to-end path between two Internet hosts over time. Identifying a suitable model is beneficial for predicting future values of available bandwidth along an end-to-end path; to the best of our knowledge, no similar study had been conducted. Third, we designed and evaluated a new path-monitoring algorithm that infers the available bandwidth of end-to-end paths without monitoring every path, in order to minimize monitoring overhead. Our algorithm does not rely on underlying network-layer topology information, as required by topology-aware path-monitoring techniques. Finally, to complement the above studies, we introduced our multicast protocol, named core-set routing, for transmitting multimedia data from a set of senders to a set of receivers while taking QoS into account. The protocol is suitable for interactive multi-sender multimedia applications such as video conferencing and network gaming
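
    The dissertation's modeling of available bandwidth with ARIMA suggests a simple forecasting workflow. Below is a minimal, hypothetical sketch using statsmodels; the bandwidth samples, sampling interval, and ARIMA order are illustrative assumptions, not values or methods taken from the dissertation.

```python
# Minimal sketch: fitting an ARIMA model to an available-bandwidth time series
# and forecasting the next few samples. The series, sampling interval, and the
# ARIMA order (p, d, q) below are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical available-bandwidth samples (Mbit/s) for one end-to-end path,
# measured at a fixed interval (e.g. one sample per minute).
availbw = np.array([92.1, 90.4, 88.7, 91.3, 89.9, 87.2, 86.5, 88.0,
                    90.2, 91.7, 89.4, 88.1, 87.6, 86.9, 88.8, 90.0])

# Fit a low-order ARIMA model; order selection (e.g. via AIC) is omitted here.
model = ARIMA(availbw, order=(1, 1, 1))
fitted = model.fit()

# Predict the next 5 samples of available bandwidth along the path.
forecast = fitted.forecast(steps=5)
print("Forecast (Mbit/s):", np.round(forecast, 1))
```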

    Internet Measurement

    TCP channel estimation is a matter of great importance nowadays: communication network metrology is at the core of the network performance analysis field, since it allows the network's behaviour to be interpreted and understood through the gathered metrics. In the context of this dissertation, an open-source software project, available on GitHub, was developed. It uses a client-server architecture to estimate the Bulk Transfer Capacity (BTC) and provides portability through Java and Android clients, so it can run on computers, tablets and mobile phones. Two algorithms to measure the BTC were deployed. Their measuring capacity was analysed and optimized, supported by studies of the influence of the TCP windows. The packet train dispersion algorithm was also implemented and analysed, but it did not produce meaningful BTC results. The performance of the tool was tested on wired and cellular wireless networks, considering all the major Portuguese network operators. The results were compared to those measured by the iPerf3 reference tool, using a stopping criterion based on Jain's Fairness Index [1] in order to inject as little traffic as possible into the network. The measurement results are in line with the methodology proposed by ETSI and Ofcom to monitor bandwidth, which considers fixed-time transmissions, and can contribute to reducing the transmission durations required to analyse each network
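
    The stopping criterion based on Jain's Fairness Index can be sketched as follows. This is an illustrative reading of the idea, not the dissertation's exact implementation: the window size and threshold below are assumed values.

```python
# Sketch of a stopping criterion based on Jain's Fairness Index.
# J(x_1..x_n) = (sum x_i)^2 / (n * sum x_i^2), which equals 1 when all samples
# are identical. The window size and threshold are illustrative assumptions.
def jain_fairness(samples):
    n = len(samples)
    total = sum(samples)
    return (total * total) / (n * sum(s * s for s in samples)) if n else 0.0

def should_stop(throughput_samples, window=5, threshold=0.98):
    """Stop the BTC measurement once the last `window` per-interval
    throughput samples are sufficiently similar (fairness close to 1)."""
    if len(throughput_samples) < window:
        return False
    return jain_fairness(throughput_samples[-window:]) >= threshold

# Example: per-second throughput samples in Mbit/s during a bulk transfer.
samples = [40.2, 55.1, 61.8, 63.0, 62.5, 62.9, 63.1, 62.8]
print(should_stop(samples))  # True once the tail of the series has stabilized
```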

    Compact routing for the future internet

    The Internet relies on its inter-domain routing system to allow data transfer between any two endpoints regardless of where they are located. This routing system currently uses a shortest path routing algorithm (modified by local policy constraints) called the Border Gateway Protocol. The massive growth of the Internet has led to large routing tables that will continue to grow. This will present a serious engineering challenge for router designers in the long term, rendering state (routing table) growth at this pace unsustainable. There are various short-term engineering solutions that may slow the growth of the inter-domain routing tables, at the expense of increasing the complexity of the network. In addition, some of these require manual configuration, or introduce additional points of failure within the network. These solutions may give an incremental, constant-factor improvement. However, we know from previous work that all shortest path routing algorithms require forwarding state that grows linearly with the size of the network in the worst case. Rather than attempt to sustain inter-domain routing through a shortest path routing algorithm, compact routing algorithms exist that guarantee worst-case sub-linear state requirements at all nodes by allowing an upper bound on path length relative to the theoretical shortest path, known as path stretch. Previous work has shown the promise of these algorithms when applied to synthetic graphs with properties similar to the known Internet graph, but they have not been studied in depth on Internet topologies derived from real data. In this dissertation, I demonstrate the consistently strong performance of these compact routing algorithms for inter-domain routing by performing a longitudinal study of two compact routing algorithms on the Internet Autonomous System (AS) graph over time. I then show, using the k-cores graph decomposition algorithm, that the structurally important nodes in the AS graph are highly stable over time. This property makes these nodes suitable for use as the "landmark" nodes used by the most stable of the compact routing algorithms evaluated, and the use of these nodes shows similarly strong routing performance. Finally, I present a decentralised compact routing algorithm for dynamic graphs, and present state requirements and message overheads on AS graphs using realistic simulation inputs. To allow the continued long-term growth of Internet routing state, an alternative routing architecture may be required. The compact routing algorithms presented in this dissertation offer promise for a scalable future Internet routing system
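
    The k-cores decomposition used to identify structurally important (and hence landmark-candidate) ASes can be illustrated with a small sketch; the toy AS graph below is made up, and the real study runs this on Internet AS graphs over many snapshots.

```python
# Sketch: k-cores decomposition of an AS-level graph to pick structurally
# important nodes as candidate compact-routing landmarks. The toy edge list
# below is illustrative; the dissertation uses real Internet AS graphs.
import networkx as nx

# Undirected AS adjacency (AS numbers are made up for the example).
as_graph = nx.Graph([
    (1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4),   # densely meshed "core"
    (4, 5), (5, 6), (3, 7), (7, 8), (2, 9),           # sparser periphery
])

# core_number[v] is the largest k such that v belongs to the k-core.
core_number = nx.core_number(as_graph)
max_k = max(core_number.values())

# ASes in the innermost (maximum) k-core: candidates for landmark nodes.
landmarks = [asn for asn, k in core_number.items() if k == max_k]
print("max k:", max_k, "landmark candidates:", sorted(landmarks))
```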

    Supporting distributed computation over wide area gigabit networks

    The advent of high-bandwidth fibre optic links that may be used over very large distances has led to much research and development in the field of wide area gigabit networking. One problem that needs to be addressed is how loosely coupled distributed systems may be built over these links, allowing many computers worldwide to take part in complex calculations in order to solve "Grand Challenge" problems. The research conducted as part of this PhD has looked at the practicality of implementing a communication mechanism proposed by Craig Partridge called Late-binding Remote Procedure Calls (LbRPC). LbRPC is intended to export both code and data over the network to remote machines for evaluation, as opposed to traditional RPC mechanisms that only send parameters to pre-existing remote procedures. The ability to send code as well as data means that LbRPC requests can overcome one of the biggest problems in Wide Area Distributed Computer Systems (WADCS): the fixed latency due to the speed of light. As machines get faster, the fixed multi-millisecond round-trip delay equates to ever increasing numbers of CPU cycles. For a WADCS to be efficient, programs should minimise the number of network transits they incur. By allowing the application programmer to export arbitrary code to the remote machine, this may be achieved. This research has looked at the feasibility of supporting secure exportation of arbitrary code and data in heterogeneous, loosely coupled, distributed computing environments. It has investigated techniques for making placement decisions for the code in cases where there are a large number of widely dispersed remote servers that could be used. The latter has resulted in the development of a novel prototype LbRPC using multicast IP for implicit placement and a sequenced, multi-packet saturation multicast transport protocol. These prototypes show that it is possible to export code and data to multiple remote hosts, thereby removing the need to perform complex and error-prone explicit process placement decisions
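
    To make the late-binding idea concrete, here is a purely conceptual sketch, not the LbRPC protocol itself: the request carries both the procedure's source code and its arguments, so a single network transit suffices even when the procedure does not pre-exist on the remote host. All names and the JSON encoding are hypothetical, and real use would require the sandboxing and authentication implied by the thesis's security discussion.

```python
# Conceptual sketch only: NOT the LbRPC wire protocol from the thesis.
# It illustrates late binding: the request carries both the procedure's code
# and its data, so the remote side need not have the procedure installed in
# advance. All names here are hypothetical.
import json

def make_request(source, entry_point, args):
    """Bundle code and data into one request (a traditional RPC would carry
    only `args` and assume `entry_point` already exists remotely)."""
    return json.dumps({"code": source, "entry": entry_point, "args": args})

def evaluate_request(raw):
    """Remote side: compile the shipped code, then call the named entry point
    on the shipped data. A real system would add sandboxing and authentication."""
    req = json.loads(raw)
    namespace = {}
    exec(req["code"], namespace)   # late binding: the procedure arrives with the call
    return namespace[req["entry"]](*req["args"])

request = make_request(
    source="def reduce_sum(values):\n    return sum(values)\n",
    entry_point="reduce_sum",
    args=[[1, 2, 3, 4]],
)
print(evaluate_request(request))  # 10
```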

    vrfinder: Finding outbound addresses in traceroute

    Current methods to analyze the Internet's router-level topology with paths collected using traceroute assume that the source address for each router in the path is either an inbound or off-path address on that router. In this work, we show that outbound addresses are common in our Internet-wide traceroute dataset collected by CAIDA's Ark vantage points in January 2020, accounting for 1.7% - 5.8% of the addresses seen at some point before the end of a traceroute. This phenomenon can lead to mistakes in Internet topology analysis, such as inferring router ownership and identifying interdomain links. We hypothesize that the primary contributor to outbound addresses is Layer 3 Virtual Private Networks (L3VPNs), and propose vrfinder, a technique for identifying L3VPN outbound addresses in traceroute collections. We validate vrfinder against ground truth from two large research and education networks, demonstrating high precision (100.0%) and recall (82.1% - 95.3%). We also show the benefit of accounting for L3VPNs in traceroute analysis through extensions to bdrmapIT, increasing the accuracy of its router ownership inferences for L3VPN outbound addresses from 61.5% - 79.4% to 88.9% - 95.5%

    Improving the accuracy of spoofed traffic inference in inter-domain traffic

    Ascertaining that a network will forward spoofed traffic usually requires an active probing vantage point in that network, effectively preventing a comprehensive view of this global Internet vulnerability. We argue that broader visibility into the spoofing problem may lie in the capability to infer lack of Source Address Validation (SAV) compliance from large, heavily aggregated Internet traffic data, such as traffic observable at Internet Exchange Points (IXPs). The key idea is to use IXPs as observatories to detect spoofed packets, by leveraging Autonomous System (AS) topology knowledge extracted from Border Gateway Protocol (BGP) data to infer which source addresses should legitimately appear across parts of the IXP switch fabric. In this thesis, we demonstrate that the existing literature does not capture several fundamental challenges to this approach, including noise in BGP data sources, heuristic AS relationship inference, and idiosyncrasies in IXP interconnectivity fabrics. We propose Spoofer-IX, a novel methodology to navigate these challenges, leveraging Customer Cone semantics of AS relationships to guide precise classification of inter-domain traffic as In-cone, Out-of-cone (spoofed), Unverifiable, Bogon, and Unassigned. We apply our methodology in an extensive analysis of real traffic data from two distinct IXPs in Brazil, a mid-size and a large infrastructure. In the mid-size IXP, with more than 200 members, we find the upper-bound volume of Out-of-cone traffic to be more than an order of magnitude smaller than the previous method inferred on the same data, revealing the practical importance of Customer Cone semantics in such analysis. We also found no significant improvement in deployment of SAV in networks using the mid-size IXP between 2017 and 2019. In the hope that our methods and tools generalize to use by other IXPs that want to avoid use of their infrastructure for launching spoofed-source DoS attacks, we explore the feasibility of scaling the system to larger and more diverse IXP infrastructures. To promote this goal, and broad replicability of our results, we make the source code of Spoofer-IX publicly available. This thesis illustrates the subtleties of scientific assessments of operational Internet infrastructure, and the need for a community focus on reproducing and repeating previous methods
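
    The core Customer Cone classification can be illustrated with a minimal sketch, assuming per-member customer-cone prefix sets have already been computed (e.g. from AS relationship data); it is not the full Spoofer-IX methodology, which also handles Unverifiable and Unassigned traffic, BGP noise, and many operational corner cases.

```python
# Minimal sketch of the Customer Cone idea behind Spoofer-IX: a packet sent
# through the IXP by a member is "In-cone" if its source address falls inside
# a prefix in the member's customer cone, otherwise it is suspect
# ("Out-of-cone", i.e. potentially spoofed). Bogon space is checked first.
# The cone prefixes and bogon list below are illustrative placeholders.
import ipaddress

BOGONS = [ipaddress.ip_network(p) for p in ("10.0.0.0/8", "192.168.0.0/16", "127.0.0.0/8")]

# Hypothetical customer-cone prefixes per IXP member AS.
CONE_PREFIXES = {
    64500: [ipaddress.ip_network("198.51.100.0/24"), ipaddress.ip_network("203.0.113.0/24")],
    64501: [ipaddress.ip_network("192.0.2.0/24")],
}

def classify(member_asn, src_ip):
    addr = ipaddress.ip_address(src_ip)
    if any(addr in net for net in BOGONS):
        return "Bogon"
    cone = CONE_PREFIXES.get(member_asn, [])
    if any(addr in net for net in cone):
        return "In-cone"
    return "Out-of-cone"

print(classify(64500, "198.51.100.7"))   # In-cone
print(classify(64500, "192.0.2.9"))      # Out-of-cone (not in AS64500's cone)
print(classify(64501, "10.1.2.3"))       # Bogon
```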

    Methods for revealing and reshaping the African Internet Ecosystem as a case study for developing regions: from isolated networks to a connected continent

    While connecting end-users worldwide, the Internet increasingly promotes local development by making challenges much simpler to overcome, regardless of the field in which it is used: governance, economy, education, health, etc. However, the region served by the African Network Information Centre (AfriNIC), the Regional Internet Registry (RIR) of Africa, is characterized by the lowest Internet penetration: 28.6% as of March 2017, compared to an average of 49.7% worldwide, according to International Telecommunication Union (ITU) estimates [139]. Moreover, end-users experience a poor Quality of Service (QoS) provided at high costs. It is thus of interest to enlarge the Internet footprint in such under-connected regions and determine where the situation can be improved. Along these lines, this doctoral thesis thoroughly inspects, using both active and passive data analysis, the critical aspects of the African Internet ecosystem and outlines the milestones of a methodology that could be adopted for achieving similar purposes in other developing regions. The thesis first presents our efforts to help build measurement infrastructures for alleviating the shortage of a diversified range of Vantage Points (VPs) in the region, as we cannot improve what we cannot measure. It then unveils our timely and longitudinal inspection of African interdomain routing using the enhanced RIPE Atlas measurement infrastructure, filling the gap in knowledge of both the IPv4 and IPv6 topologies interconnecting local Internet Service Providers (ISPs). It notably proposes reproducible data analysis techniques suitable for the treatment of any set of similar measurements to infer the behavior of ISPs in the region. The results show a large variety of transit habits, which depend on socio-economic factors such as the language, the currency area, or the geographic location of the country in which the ISP operates. They indicate the prevailing dominance of ISPs based outside Africa for the provision of intracontinental paths, but also shed light on the efforts of stakeholders for traffic localization. Next, the thesis investigates the causes and impacts of congestion in the African IXP substrate, as the prevalence of this endemic phenomenon in local Internet markets may hinder their growth. Towards this end, Ark monitors were deployed at six strategically selected local Internet eXchange Points (IXPs) and used for collecting Time-Sequence Latency Probes (TSLP) measurements during a whole year. The analysis of these datasets reveals no evidence of widespread congestion: only 2.2% of the monitored links experienced noticeable indications of congestion, thus promoting peering. The causes of these events were identified during IXP operator interviews, showing how essential collaboration with stakeholders is to understanding the causes of performance degradations. As part of the Internet Society (ISOC) strategy to allow the Internet community to profile the IXPs of a particular region and monitor their evolution, a route-collector data analyzer was then developed, deployed, and tested in the AfriNIC region. This open-source web platform, titled the “African” Route-collectors Data Analyzer (ARDA), provides metrics which picture in real time the status of interconnection at different levels, using public routing information available at local route collectors with a peering viewpoint of the Internet.
    The results highlight that a small proportion of the Autonomous System Numbers (ASNs) assigned by AfriNIC (17%) are peering in the region, a fraction that remained static from April to September 2017 despite the significant growth of IXPs in some countries. They show how ARDA can help detect the impact of a policy on the IXP substrate and help ISPs worldwide identify new interconnection opportunities in Africa, the targeted region. Since broadening the underlying network is not useful without appropriately provisioned services to exploit it, the thesis then delves into the availability and utilization of the web infrastructure serving the continent. Towards this end, a comprehensive measurement methodology is applied to collect data from various sources. A focus on Google reveals that its content infrastructure in Africa is, indeed, expanding; nevertheless, much of its web content is still served from the United States (US) and Europe, although it is the most popular content source in many African countries. Further, the same analysis is repeated across top global and regional websites, showing that even top African websites prefer to host their content abroad. Following that, the primary bottlenecks faced by Content Providers (CPs) in the region, such as the lack of peering between the networks hosting our probes and poorly configured DNS resolvers, are explored to outline proposals for further ISP and CP deployments. Considering the above, an option to enrich connectivity and incentivize CPs to establish a presence in the region is to interconnect ISPs present at isolated IXPs by creating a distributed IXP layout spanning the continent. In this respect, the thesis finally provides a four-step interconnection scheme, which parameterizes socio-economic, geographical, and political factors using public datasets. It demonstrates that this constrained solution doubles the percentage of continental intra-African paths, reduces their length, and drastically decreases the median of their Round Trip Times (RTTs) as well as RTTs to ASes hosting the top 10 global and top 10 regional Alexa websites. We hope that quantitatively demonstrating the benefits of this framework will incentivize ISPs to intensify peering and CPs to increase their presence, enabling fast, affordable, and available access at the Internet frontier
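
    One ARDA-style metric, the share of AfriNIC-assigned ASNs visible at regional route collectors, reduces to a simple set computation; the ASN sets below are illustrative placeholders, whereas the real platform extracts them from AS paths in public route-collector dumps.

```python
# Sketch of one ARDA-style metric: the share of AfriNIC-assigned ASNs that
# are visible (peering) at the region's route collectors. Both sets below are
# illustrative stand-ins; the real platform derives them from AS paths in
# public route-collector dumps at local IXPs.
afrinic_asns = {64512, 64513, 64514, 64515, 64516, 64517, 64518, 64519, 64520, 64521}
asns_seen_at_collectors = {64513, 64515, 64518, 64520}

peering = afrinic_asns & asns_seen_at_collectors
fraction = 100.0 * len(peering) / len(afrinic_asns)
print(f"{len(peering)}/{len(afrinic_asns)} AfriNIC ASNs seen peering ({fraction:.0f}%)")
```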

    Design of Overlay Networks for Internet Multicast - Doctoral Dissertation, August 2002

    Multicast is an efficient transmission scheme for supporting group communication in networks. Contrasted with unicast, where multiple point-to-point connections must be used to support communications among a group of users, multicast is more efficient because each data packet is replicated in the network at the branching points leading to distinct destinations, thus reducing the transmission load on the data sources and the traffic load on the network links. To implement multicast, networks need to incorporate new routing and forwarding mechanisms in addition to the existing ones, and such mechanisms are not adequately supported in current networks. The IP multicast solution has serious scaling and deployment limitations, and cannot be easily extended to provide more enhanced data services. Furthermore, and perhaps most importantly, IP multicast has ignored the economic nature of the problem, lacking incentives for service providers to deploy the service in wide area networks. Overlay multicast holds promise for the realization of large-scale Internet multicast services. An overlay network is a virtual topology constructed on top of the Internet infrastructure. The concept of overlay networks enables multicast to be deployed as a service network rather than a network primitive mechanism, allowing deployment over heterogeneous networks without the need for universal network support. This dissertation addresses the network design aspects of overlay networks to provide scalable multicast services in the Internet. The resources and the network cost in the context of overlay networks are different from those in conventional networks, presenting new challenges and new problems to solve. Our design goals are the maximization of network utility and improved service quality. As the overall network design problem is extremely complex, we divide it into three components: the efficient management of session traffic (multicast routing), the provisioning of overlay network resources (bandwidth dimensioning), and overlay topology optimization (service placement). The combined solution provides a comprehensive procedure for planning and managing an overlay multicast network. We also consider a complementary form of overlay multicast called application-level multicast (ALMI). ALMI allows end systems to directly create an overlay multicast session among themselves. This gives applications the flexibility to communicate without relying on service providers. The tradeoff is that users do not have direct control over the topology and data paths taken by the session flows and will typically get lower quality of service due to the best-effort nature of the Internet environment. ALMI is therefore suitable for sessions of small size or sessions where all members are well connected to the network. Furthermore, the ALMI framework allows us to experiment with application-specific components, such as data reliability, in order to identify a useful set of communication semantics for enhanced data services
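
    The ALMI-style construction of an overlay multicast session can be sketched as a spanning tree computed over pairwise delay measurements between end systems; the member set and delays below are made up, and ALMI's actual join and maintenance protocol is not reproduced.

```python
# Sketch of the ALMI-style idea: end systems form an overlay multicast session
# by computing a spanning tree over pairwise delay measurements between
# members, then forwarding data only along tree edges. Delays below are
# illustrative; ALMI's actual join/refresh protocol is not reproduced here.
import networkx as nx

# Measured round-trip delays (ms) between member pairs (hypothetical).
delays = {("A", "B"): 12, ("A", "C"): 40, ("A", "D"): 55, ("A", "E"): 70,
          ("B", "C"): 18, ("B", "D"): 60, ("B", "E"): 65,
          ("C", "D"): 22, ("C", "E"): 35, ("D", "E"): 15}

overlay = nx.Graph()
for (u, v), ms in delays.items():
    overlay.add_edge(u, v, weight=ms)

# Minimum-delay spanning tree: the session's data distribution topology.
tree = nx.minimum_spanning_tree(overlay)
print(sorted(tree.edges(data="weight")))
```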

    Improving the Accuracy of the Internet Cartography

    As the global Internet expands to satisfy the demands of the ever-increasing connected population, profound changes are occurring in its interconnection structure. The pervasive growth of IXPs and CDNs, two initially independent but synergistic infrastructure sectors, has contributed to the gradual flattening of the Internet’s inter-domain hierarchy, with primary routing paths shifting from backbone networks to peripheral peering links. At the same time, IPv6 deployment has taken off due to the depletion of unallocated IPv4 addresses. These fundamental changes in Internet dynamics have obvious implications for network engineering and operations, which can benefit from accurate topology maps to understand the properties of this critical infrastructure. This thesis presents a set of new measurement techniques and inference algorithms to construct a new type of semantically rich Internet map, and improve the state of the art in Internet cartography. The author first develops a methodology to extract large-scale validation data from the Communities BGP attribute, which encodes rich routing meta-data on BGP messages. Based on this better-informed dataset, the author proceeds to analyse popular assumptions about inter-domain routing policies and devise a more accurate model to describe inter-AS business relationships. Accordingly, the thesis proposes a new relationship inference algorithm to accurately capture both simple and complex AS relationships across two dimensions: prefix type and geographic location. Validation against three sources of ground-truth data reveals that the proposed algorithm achieves near-perfect accuracy. However, any inference approach is constrained by the inability of the existing topology data sources to provide a complete view of the inter-domain topology. To limit the topology incompleteness problem, the author augments traditional BGP data with routing policy data obtained directly from IXPs to discover massive peering meshes which have thus far been largely invisible
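
    One of the routing-policy assumptions the thesis re-examines is the classic valley-free property of AS paths under the conventional Gao-Rexford relationship model. The sketch below checks that property for a path given already-inferred relationships; it only illustrates the assumption and is not the thesis's inference algorithm, which additionally models prefix type, geography, and complex relationships.

```python
# Simplified illustration of the "valley-free" routing-policy assumption
# (Gao-Rexford model): along an AS path, customer-to-provider edges may only
# appear before any peer-to-peer edge, and provider-to-customer edges only
# after. The relationships below are illustrative placeholders.
REL = {  # (from_as, to_as) -> "c2p" (customer to provider), "p2p", or "p2c"
    (64496, 64497): "c2p", (64497, 64498): "p2p", (64498, 64499): "p2c",
    (64499, 64500): "c2p",
}

def valley_free(path):
    phase = "up"                      # up (c2p), then at most one p2p, then down (p2c)
    for a, b in zip(path, path[1:]):
        rel = REL.get((a, b))
        if rel == "c2p" and phase != "up":
            return False              # climbing again after peering/descending: a valley
        elif rel == "p2p":
            if phase == "down":
                return False          # at most one peer link per path
            phase = "down"
        elif rel == "p2c":
            phase = "down"
    return True

print(valley_free([64496, 64497, 64498, 64499]))          # True: up, across, down
print(valley_free([64496, 64497, 64498, 64499, 64500]))   # False: climbs after descending
```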

    Testing the performance of a commercial active network measurement platform

    In this thesis, a commercial active network measurement platform is tested for performance and accuracy. The platform is also tested for its ability to detect certain events in networks. Two types of measurement probes are tested: the low-performance Brix 100 Verifier and the high-performance Brix 1000 Verifier. It is found that both probe types are accurate when measuring round-trip delay, but they do not perform nearly as well when measuring one-way delay. External synchronization, such as GPS, helps the Brix 1000 Verifier reach sub-millisecond measurement accuracy; as Brix 100 Verifiers do not support external synchronization, their accuracy is suitable only for measuring one-way delays larger than a few milliseconds. The platform is able to detect sudden high load levels and router failures in a network, but fails to detect short (sub-second) link breaks. In the theory part of this thesis, some well-known active measurement methods and mechanisms are presented, challenges related to active measurement are discussed, and some of the recent major academic active measurement projects are introduced
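
    The gap between round-trip and one-way delay accuracy comes down to clock synchronization: any offset between the probes' clocks adds directly to a one-way estimate but cancels in the round trip. The toy numbers below are made up purely to illustrate why external synchronization such as GPS is needed for sub-millisecond one-way measurements.

```python
# Toy illustration of why one-way delay measurements need synchronized clocks
# (e.g. GPS) while round-trip measurements do not: a clock offset between the
# probes adds directly to the one-way estimate but cancels out in the RTT.
# All numbers are made up for the example.
true_owd_fwd = 0.8      # ms, true forward one-way delay
true_owd_rev = 0.7      # ms, true reverse one-way delay
clock_offset = 2.5      # ms, receiver clock ahead of sender clock

# One-way delay estimated from (receive timestamp - send timestamp):
measured_owd = true_owd_fwd + clock_offset    # 3.3 ms reported instead of 0.8 ms

# Round-trip delay uses only the sender's clock, so the offset cancels:
measured_rtt = true_owd_fwd + true_owd_rev    # 1.5 ms, unaffected by the offset

print(f"one-way estimate: {measured_owd:.1f} ms (true {true_owd_fwd} ms)")
print(f"round-trip estimate: {measured_rtt:.1f} ms")
```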