28 research outputs found

    BGP-Multipath Routing in the Internet

    BGP-Multipath, or BGP-M, is a routing technique for balancing traffic load in the Internet. It enables a Border Gateway Protocol (BGP) border router to install multiple ‘equally-good’ paths to a destination prefix. While other multipath routing techniques are deployed at internal routers, BGP-M is deployed at border routers, where traffic is shared across multiple border links between Autonomous Systems (ASes). Although there is a considerable body of research on multipath routing, there has so far been no dedicated measurement or study of BGP-M in the literature. This thesis presents the first systematic study of BGP-M. I propose a novel approach to inferring the deployment of BGP-M by querying Looking Glass (LG) servers, conduct a detailed investigation of BGP-M deployment in the Internet, and analyse BGP-M’s routing properties based on traceroute measurements using RIPE Atlas probes. My research reveals that BGP-M is already in use in the Internet. In particular, Hurricane Electric (AS6939), a Tier-1 network operator, has deployed BGP-M at border routers across its global network to hundreds of its neighbour ASes on both the IPv4 and IPv6 Internet. This work provides state-of-the-art knowledge and insights into the deployment, configuration and operation of BGP-M. The data, methods and analysis introduced in this thesis can be immensely valuable to researchers, network operators and regulators interested in improving the performance and security of Internet routing. The work raises awareness of BGP-M and may promote wider deployment in the future, because BGP-M not only provides all the benefits of multipath routing but also has distinct advantages in flexibility, compatibility and transparency.
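The LG-based inference can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual tooling: it assumes a Cisco-style `show ip bgp <prefix>` reply in which every installed path's attribute line carries `best` and/or `multipath`, and the sample reply is fabricated.

```python
# Hypothetical LG reply for one prefix; AS numbers and addresses are examples.
SAMPLE_LG_OUTPUT = """\
BGP routing table entry for 203.0.113.0/24
Paths: (3 available, best #2)
  6939 64500
    Origin IGP, valid, external, multipath
  6939 64500
    Origin IGP, valid, external, multipath, best
  64501 64500
    Origin IGP, valid, external
"""

def count_installed_paths(lg_output: str) -> int:
    """Count path attribute lines flagged as installed (best or multipath)."""
    return sum(
        1
        for line in lg_output.splitlines()
        if line.strip().startswith("Origin")
        and ("multipath" in line or "best" in line)
    )

def uses_bgp_multipath(lg_output: str) -> bool:
    """Infer BGP-M deployment when more than one path is installed."""
    return count_installed_paths(lg_output) > 1

print(uses_bgp_multipath(SAMPLE_LG_OUTPUT))  # True: two installed paths
```

In this sketch the third path is available but not installed, so only the two paths flagged by the router count towards the multipath inference.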

    Optimizing Mobile Application Performance through Network Infrastructure Aware Adaptation.

    Encouraged by the fast adoption of mobile devices and the widespread deployment of mobile networks, mobile applications are becoming the preferred “gateways” connecting users to networking services. Although the CPU capability of mobile devices is approaching that of off-the-shelf PCs, the performance of mobile networking applications is still far behind. One of the fundamental reasons is that most mobile applications are unaware of mobile-network-specific characteristics, leading to inefficient network and device resource utilization. Thus, in order to improve the user experience for most mobile applications, it is essential to examine the critical components along network connections, including mobile networks, smartphone platforms, mobile applications, and content partners. We aim to optimize the performance of mobile network applications through network-aware resource adaptation approaches. Our techniques consist of the following four aspects: (i) revealing the fundamental infrastructure characteristics of cellular networks that are distinctive from wireline networks; (ii) isolating the impact of important factors on user-perceived performance in mobile network applications; (iii) determining the particular usage patterns of mobile applications; and (iv) improving the performance of mobile applications through network-aware adaptations.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99829/1/qiangxu_1.pd

    IMPROVING NETWORK POLICY ENFORCEMENT USING NATURAL LANGUAGE PROCESSING AND PROGRAMMABLE NETWORKS

    Computer networks are becoming more complex and challenging to operate, manage, and protect. As a result, the network policies that define how network operators should manage the network are becoming more complex and nuanced. Unfortunately, network policies are often an undervalued part of network design, leaving network operators to guess at the intent of policies that are written and to fill in the gaps where policies don’t exist. Organizations typically designate Policy Committees to write down network policies in policy documents using high-level natural language. The policy documents describe both acceptable and unacceptable uses of the network. Network operators then take responsibility for enforcing the policies and verifying whether the enforcement achieves the expected requirements. Network operators often encounter gaps and ambiguous statements when translating network policies into specific network configurations. An ill-structured network policy document may prevent network operators from implementing the true intent of the policies and thus lead to incorrect enforcement. It is therefore important to know the quality of written network policies and to remove any ambiguity that may confuse the people responsible for reading and implementing them. Moreover, there is a need not only to prevent policy violations from occurring but also to check for any violations that may have occurred (i.e., where the prevention mechanisms failed in some way and unwanted packets or network traffic were somehow allowed to enter the network). In addition, the emergence of programmable networks provides flexible network control, but enforcing network routing policies in an environment that contains both traditional and programmable networks is also a challenge. This dissertation presents a set of methods designed to improve network policy enforcement.
We begin by describing the design and implementation of a new Network Policy Analyzer (NPA), which analyzes the written quality of network policies and outputs a quality report that can be given to Policy Committees to improve their policies; suggestions on how to write good network policies are also provided. We then present the Network Policy Conversation Engine (NPCE), a chatbot that lets network operators ask questions in natural language to check whether there are any policy violations in the network. NPCE takes advantage of recent advances in Natural Language Processing (NLP) and modern database solutions to convert natural language questions into the corresponding database queries. Next, we discuss our work towards understanding how Internet ASes connect with each other at third-party locations such as IXPs, and what their business relationships are; the resulting connectivity graph is needed to write routing policies and to calculate available routes in the future. Lastly, we present how we successfully manage network policies in a hybrid network composed of both SDN and legacy devices, making network services available over the entire network.
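The question-to-query conversion at the heart of a system like NPCE can be sketched in miniature. The dissertation's engine uses NLP models; the keyword rule, schema, and flow records below are stand-in assumptions for illustration only.

```python
import sqlite3

# Toy flow store standing in for the operator's measurement database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE flows (
    src_ip TEXT, dst_ip TEXT, dst_port INTEGER, bytes INTEGER)""")
conn.executemany("INSERT INTO flows VALUES (?, ?, ?, ?)", [
    ("10.0.0.5", "198.51.100.7", 23, 1200),   # telnet: forbidden by policy
    ("10.0.0.6", "198.51.100.8", 443, 9000),  # https: allowed
])

def question_to_sql(question: str) -> str:
    """Map a natural-language question to SQL.

    A single keyword rule stands in for the NLP pipeline here.
    """
    q = question.lower()
    if "telnet" in q:  # assumed policy: telnet (port 23) is banned
        return "SELECT src_ip, dst_ip FROM flows WHERE dst_port = 23"
    raise ValueError("question not understood")

rows = conn.execute(question_to_sql("Did any host use telnet?")).fetchall()
print(rows)  # -> [('10.0.0.5', '198.51.100.7')]: a no-telnet policy violation
```

The design point this illustrates is the separation of concerns: the language layer only has to produce a query, while violation checking is delegated to the database over already-collected evidence.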

    Methods for revealing and reshaping the African Internet Ecosystem as a case study for developing regions: from isolated networks to a connected continent

    Mención Internacional en el título de doctor.
    While connecting end-users worldwide, the Internet increasingly promotes local development by making challenges much simpler to overcome, regardless of the field in which it is used: governance, economy, education, health, etc. However, the service region of the African Network Information Centre (AfriNIC), the Regional Internet Registry (RIR) of Africa, is characterized by the lowest Internet penetration: 28.6% as of March 2017, compared to a worldwide average of 49.7%, according to International Telecommunication Union (ITU) estimates [139]. Moreover, end-users experience poor Quality of Service (QoS) provided at high cost. It is thus of interest to enlarge the Internet footprint in such under-connected regions and determine where the situation can be improved. Along these lines, this doctoral thesis thoroughly inspects, using both active and passive data analysis, the critical aspects of the African Internet ecosystem and outlines the milestones of a methodology that could be adopted for similar purposes in other developing regions. The thesis first presents our efforts to help build measurement infrastructure to alleviate the shortage of a diversified range of Vantage Points (VPs) in the region, as we cannot improve what we cannot measure. It then presents our timely and longitudinal inspection of African interdomain routing, using the enhanced RIPE Atlas measurement infrastructure to fill the gap in knowledge of both the IPv4 and IPv6 topologies interconnecting local Internet Service Providers (ISPs). It notably proposes reproducible data analysis techniques suitable for any set of similar measurements, to infer the behavior of ISPs in the region. The results show a large variety of transit habits, which depend on socio-economic factors such as the language, currency area, or geographic location of the country in which the ISP operates.
They indicate the prevailing dominance of ISPs based outside Africa for the provision of intracontinental paths, but also shed light on stakeholders' efforts towards traffic localization. Next, the thesis investigates the causes and impacts of congestion in the African IXP substrate, as the prevalence of this endemic phenomenon in local Internet markets may hinder their growth. Towards this end, Ark monitors were deployed at six strategically selected local Internet eXchange Points (IXPs) and used to collect Time-Sequence Latency Probes (TSLP) measurements over a whole year. The analysis of these datasets reveals no evidence of widespread congestion: only 2.2% of the monitored links experienced noticeable indications of congestion, a finding that supports peering. The causes of these events were identified through interviews with IXP operators, showing how essential collaboration with stakeholders is to understanding the causes of performance degradation. As part of the Internet Society (ISOC) strategy to allow the Internet community to profile the IXPs of a particular region and monitor their evolution, a route-collector data analyzer was then developed, deployed, and tested in the AfriNIC region. This open-source web platform, titled the “African” Route-collectors Data Analyzer (ARDA), provides metrics that picture in real time the status of interconnection at different levels, using public routing information available at local route-collectors with a peering viewpoint of the Internet. The results highlight that only a small proportion of the Autonomous System Numbers (ASNs) assigned by AfriNIC (17%) peer in the region, a fraction that remained static from April to September 2017 despite significant IXP growth in some countries. They show how ARDA can help detect the impact of a policy on the IXP substrate and help ISPs worldwide identify new interconnection opportunities in Africa, the targeted region.
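The congestion signal that TSLP-style measurements look for is a recurring level shift in a link's minimum RTT, indicating a standing queue during busy periods. A minimal sketch of that check follows; the synthetic RTT series, window size, and threshold are illustrative assumptions, not the study's data or exact method.

```python
def diurnal_level_shift(rtts_ms, window=6):
    """Take per-window minimum RTTs and return the gap (in ms) between the
    highest and lowest window minima; a large gap suggests queueing delay."""
    mins = [min(rtts_ms[i:i + window]) for i in range(0, len(rtts_ms), window)]
    return max(mins) - min(mins)

quiet = [20, 21, 20, 22, 20, 21] * 2   # off-peak samples: base RTT ~20 ms
busy = [35, 36, 34, 35, 37, 36] * 2    # peak samples: queueing adds ~15 ms
congested = quiet + busy + quiet

shift = diurnal_level_shift(congested)
print(shift)  # 14 -> exceeds an assumed 5 ms threshold, so flag congestion
```

Using per-window minima rather than means filters out transient jitter, so only a sustained elevation of the RTT floor, the signature of a standing queue, registers as a shift.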
Since broadening the underlying network is not useful without appropriately provisioned services to exploit it, the thesis then delves into the availability and utilization of the web infrastructure serving the continent. Towards this end, a comprehensive measurement methodology is applied to collect data from various sources. A focus on Google reveals that its content infrastructure in Africa is indeed expanding; nevertheless, much of its web content is still served from the United States (US) and Europe, even though Google is the most popular content source in many African countries. The same analysis, repeated across top global and regional websites, shows that even top African websites prefer to host their content abroad. Following that, the primary bottlenecks faced by Content Providers (CPs) in the region, such as the lack of peering between the networks hosting our probes and poorly configured DNS resolvers, are explored to outline proposals for further ISP and CP deployments. Considering the above, one option to enrich connectivity and incentivize CPs to establish a presence in the region is to interconnect ISPs present at isolated IXPs by creating a distributed IXP layout spanning the continent. In this respect, the thesis finally provides a four-step interconnection scheme, which parameterizes socio-economic, geographical, and political factors using public datasets. It demonstrates that this constrained solution doubles the percentage of continental intra-African paths, reduces their length, and drastically decreases the median of their Round Trip Times (RTTs), as well as the RTTs to ASes hosting the top 10 global and top 10 regional Alexa websites.
We hope that quantitatively demonstrating the benefits of this framework will incentivize ISPs to intensify peering and CPs to increase their presence, enabling fast, affordable, and available access at the Internet frontier.
    Programa Oficial de Doctorado en Ingeniería Telemática. Presidente: David Fernández Cambronero. Secretario: Alberto García Martínez. Vocal: Cristel Pelsser.

    Supporting distributed computation over wide area gigabit networks

    The advent of high-bandwidth fibre optic links that may be used over very large distances has led to much research and development in the field of wide area gigabit networking. One problem that needs to be addressed is how loosely coupled distributed systems may be built over these links, allowing many computers worldwide to take part in complex calculations in order to solve "Grand Challenge" problems. The research conducted as part of this PhD has looked at the practicality of implementing a communication mechanism proposed by Craig Partridge called Late-binding Remote Procedure Calls (LbRPC). LbRPC is intended to export both code and data over the network to remote machines for evaluation, as opposed to traditional RPC mechanisms that only send parameters to pre-existing remote procedures. The ability to send code as well as data means that LbRPC requests can overcome one of the biggest problems in Wide Area Distributed Computer Systems (WADCS): the fixed latency due to the speed of light. As machines get faster, the fixed multi-millisecond round-trip delay equates to ever-increasing numbers of CPU cycles. For a WADCS to be efficient, programs should minimise the number of network transits they incur; by allowing the application programmer to export arbitrary code to the remote machine, this may be achieved. This research has looked at the feasibility of supporting the secure exportation of arbitrary code and data in heterogeneous, loosely coupled, distributed computing environments. It has investigated techniques for making placement decisions for the code in cases where there are a large number of widely dispersed remote servers that could be used. The latter has resulted in the development of a novel prototype LbRPC using multicast IP for implicit placement and a sequenced, multi-packet saturation multicast transport protocol.
These prototypes show that it is possible to export code and data to multiple remote hosts, thereby removing the need to perform complex and error-prone explicit process placement decisions.
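The core LbRPC idea, shipping both the procedure and its arguments so that one round trip replaces many, can be sketched as follows. The "remote" evaluator here is simulated in-process; a real system would add transport, security, and the placement machinery the thesis develops, and all names are illustrative.

```python
import math

def remote_eval(source: str, func_name: str, args: tuple):
    """Simulated remote host: compile the shipped code on arrival
    (late binding) and run the named procedure once with the shipped data."""
    namespace = {"math": math}
    exec(source, namespace)            # the procedure is bound only here
    return namespace[func_name](*args)

# Client side: export an arbitrary procedure plus its arguments in a single
# request, instead of invoking a pre-existing remote procedure per step.
code = """
def hypotenuse(a, b):
    return math.sqrt(a * a + b * b)
"""
print(remote_eval(code, "hypotenuse", (3, 4)))  # 5.0
```

Contrast with classic RPC: if `sqrt` and the multiplications each required their own remote call, the speed-of-light round-trip cost would be paid several times; exporting the whole computation pays it once.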

    From the edge to the core: towards informed vantage point selection for Internet measurement studies

    Since the early days of the Internet, measurement scientists have been trying to keep up with its fast-paced development. As the Internet grew organically over time and without built-in measurability, this process requires many workarounds and due diligence. As a result, every measurement study is only as good as the data it relies on. Moreover, data quality is relative to the research question: a data set suitable for analyzing one problem may be insufficient for another. This is entirely expected, as the Internet is decentralized, i.e., there is no single observation point from which we can assess its complete state. Because of that, every measurement study needs specifically selected vantage points that fit the research question. In this thesis, we present three different vantage points across the Internet topology, from the edge to the Internet core. We discuss their specific features, their suitability for different kinds of research questions, and how to work with the corresponding data. The data sets obtained at the presented vantage points allow us to conduct three different measurement studies and shed light on the following aspects: (a) the prevalence of IP source address spoofing at a large European Internet Exchange Point (IXP), (b) the propagation distance of BGP communities, an optional transitive BGP attribute used for traffic engineering, and (c) the impact of the global COVID-19 pandemic on Internet usage behavior at a large Internet Service Provider (ISP) and three IXPs.

    Detecting IP prefix hijack events using BGP activity and AS connectivity analysis

    The Border Gateway Protocol (BGP), the main component of core Internet connectivity, suffers from vulnerabilities related to impersonating the ownership of IP prefixes belonging to Autonomous Systems (ASes). In this context, a number of studies have focused on securing BGP through several techniques, such as monitoring-based, history-based and statistics-based behavioural models. In spite of the significant research undertaken, the proposed solutions cannot detect IP prefix hijacks accurately or even differentiate them from other types of attacks that could threaten the performance of BGP. This research proposes three novel detection methods aimed at tracking the behaviour of BGP edge routers and detecting IP prefix hijacks, based on statistical analysis of variance, an attack-signature approach, and a classification-based technique. The first detection method uses statistical analysis of variance to identify hijacking behaviour by contrasting the normal operation of routing information exchanged among routers with their behaviour during an IP prefix hijack. However, this method failed to find any indication of IP prefix hijacking because of the difficulty of obtaining hijacking-free raw BGP data. The research therefore proposes a second detection method, which parses BGP advertisements (announcements) and checks whether IP prefixes are announced by more than one AS. If so, the events are selected for further validation using Regional Internet Registry (RIR) databases to determine whether the ASes announcing the prefixes are owned by the same organisation or by different organisations. Advertisements for the same IP prefix made by ASes owned by different organisations are identified as hijacking events. The proposed detection algorithm was validated using the 2008 YouTube Pakistan hijack event; the analysis demonstrates that the algorithm qualitatively increases the accuracy of detecting IP prefix hijacks.
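The parsing step of the second method amounts to detecting Multiple-Origin-AS (MOAS) conflicts: prefixes whose announcements carry more than one origin AS, which are then validated against RIR ownership data. A minimal sketch under stated assumptions: the update records below are illustrative, loosely echoing the 2008 YouTube event, and real inputs would come from route-collector dumps.

```python
from collections import defaultdict

def moas_conflicts(announcements):
    """Map each prefix to its set of origin ASes and return only those
    announced by more than one origin (candidate hijack events)."""
    origins = defaultdict(set)
    for prefix, as_path in announcements:
        origins[prefix].add(as_path[-1])  # origin AS is the AS path's last hop
    return {p: ases for p, ases in origins.items() if len(ases) > 1}

updates = [
    ("208.65.153.0/24", [3491, 36561]),  # YouTube prefix, legitimate origin
    ("208.65.153.0/24", [3491, 17557]),  # same prefix, different origin AS
    ("192.0.2.0/24", [64500, 64501]),    # single-origin prefix: not flagged
]
print(moas_conflicts(updates))  # flags 208.65.153.0/24 with two origin ASes
```

A flagged prefix is only a candidate: as the thesis describes, the two origins may belong to the same organisation, which is why the RIR-database validation step follows.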
The algorithm is very accurate as long as the RIR databases are updated concurrently with hijacking detection, and the detection method can be integrated with BGP routers or operate separately. A third detection method is proposed to detect IP prefix hijacking using a combination of signature-based (parsing-based) and classification-based techniques. The parsing technique is used as a pre-processing phase before the classification-based method: features are extracted based on the connectivity behaviour of the suspicious ASes identified by the parsing technique. In other words, this detection method tracks the behaviour of the suspicious ASes and analyses their interaction with directly and indirectly connected neighbours, based on a set of features extracted from the AS-path information about the suspicious ASes. Before sending the extracted feature values to the five best-performing classifiers for the constructed classification dataset, the detection method computes the similarity between benign and malicious behaviours to determine to what extent the classifiers can distinguish suspicious behaviour from benign behaviour and thereby detect hijacking. Evaluation tests demonstrated that this detection method was able to detect hijacks with 96% accuracy and can likewise be integrated with BGP routers or operate separately.
    Saudi Cultural Bureau