    On the importance of Internet eXchange Points for today's Internet ecosystem

    Internet eXchange Points (IXPs) are generally considered to be the successors of the four Network Access Points that were mandated as part of the decommissioning of the NSFNET in 1994/95 to facilitate the transition from the NSFNET to the "public Internet" as we know it today. While this popular view does not tell the whole story behind the early beginnings of IXPs, it is true that since around 1994 the number of operational IXPs worldwide has grown to more than 300 (as of May 2013), with the largest IXPs handling daily traffic volumes comparable to those carried by the largest Tier-1 ISPs. Yet IXPs have never really attracted much attention from the networking research community. At first glance, this lack of interest seems understandable, as IXPs apparently have little to do with current "hot" topic areas such as data centers and cloud services, software-defined networking (SDN), or mobile communication. However, we argue in this article that IXPs are in fact all about data centers and cloud services, and even SDN and mobile communication, and should be of great interest to networking researchers seeking to understand the current and future Internet ecosystem. To this end, we survey the existing but largely unknown sources of publicly available information about IXPs to describe their basic technical and operational aspects and to highlight the critical differences among IXPs in the different regions of the world, especially in Europe and North America. More importantly, we illustrate the important role that IXPs play in today's Internet ecosystem and discuss how IXP-driven innovation in Europe is shaping and redefining the Internet marketplace, not only in Europe but increasingly around the world.
    Comment: 10 pages; keywords: Internet Exchange Point, Internet Architecture, Peering, Content Delivery

    Evaluating competition in the Internet’s infrastructure: a view of GAFAM from the Internet exchanges

    The Internet has given rise to online platforms offering unrivalled access to diverse markets and services. At the application layer, consolidation and concentration are framed as a threat to competition and diversity, with dominant players facing antitrust challenges in the US and the EU. Within the infrastructure, though, concentration creates economies of scale that make many of the resource-intensive building blocks of the Internet economy – such as global content delivery and distributed hosting – available to even the smallest innovator. This work complements existing analyses by exploring the links between these layers, differentiating between the implications of application-layer consolidation and the efficiencies of concentration at lower layers of the Internet’s infrastructure. In particular, these differences are presented from the vantage point of Internet exchanges, evaluating consolidation in terms of the distribution of these essential building blocks and of how IXes’ governance norms lower barriers to accessing those resources. While this picture is promising, the spectre of predatory practices at the application layer remains. The article concludes by arguing that the indicators presented here highlight that regulatory interventions must effectively account for the complex interdependencies among these platforms.

    On the Analysis of the Internet from a Geographic and Economic Perspective via BGP Raw Data

    The Internet is nowadays an integral part of everyone's life, and will become even more important for future generations. Proof of that is the exponential growth in the number of people who are introduced to the network through mobile phones and smartphones and are connected 24/7. Most of them rely on the Internet even for common services, such as managing personal bank accounts online or holding a videoconference with a colleague living across the ocean. However, only a few people are aware of what happens to their data once sent from their own devices towards the Internet, and an even smaller number -- an elite of researchers -- have an overview of the infrastructure of the real Internet. Researchers have attempted in recent years to discover details about the characteristics of the Internet in order to create a model on which it would be possible to identify and address possible weaknesses of the real network. Despite several efforts in this direction, no known model currently represents the Internet effectively, mainly due to the lack of data and the excessively coarse granularity of the studies done to date. This thesis addresses both issues by considering the Internet as a graph whose nodes are Autonomous Systems (ASes) and whose edges are logical connections between ASes. First, this thesis aims to provide new algorithms and heuristics for studying the Internet at a level of granularity considerably closer to reality, by introducing the economic and geographic factors that actually limit the possible paths between ASes that data can take. Based on these heuristics, this thesis also provides an innovative methodology for quantifying the completeness of the available data and for identifying which ASes should be involved in the BGP data-collection process as feeders in order to obtain a complete and real view of the core of the Internet. Although the results of this methodology highlight that current BGP route collectors are not able to obtain data about the vast majority of the ASes in the core of the Internet, the situation can still be improved by creating new services and incentives to attract the ASes identified by this methodology and introduce them as feeders of a BGP route collector.
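
    To make the economic constraint concrete, the sketch below enumerates "valley-free" paths (the classic Gao-Rexford export rule: uphill customer-to-provider hops, at most one peer hop, then downhill provider-to-customer hops) on a toy AS graph. This is a minimal Python illustration with made-up AS numbers and relationships, not the thesis' algorithms.

        # Toy AS graph; REL[(a, b)] is the business relationship of the a->b edge:
        # "c2p" (a is a customer of b), "p2c" (a is a provider of b), or "peer".
        REL = {
            (1, 2): "c2p", (2, 1): "p2c",
            (2, 3): "peer", (3, 2): "peer",
            (3, 4): "p2c", (4, 3): "c2p",
        }

        NEIGHBOURS = {}
        for a, b in REL:
            NEIGHBOURS.setdefault(a, []).append(b)

        def valley_free_paths(src, dst, path=None, phase="up"):
            # phase "up": still climbing c2p edges; "down": only p2c edges remain.
            path = path or [src]
            if src == dst:
                yield list(path)
                return
            for nxt in NEIGHBOURS.get(src, []):
                if nxt in path:
                    continue
                rel = REL[(src, nxt)]
                if rel == "c2p" and phase == "up":
                    nxt_phase = "up"        # keep climbing
                elif rel == "peer" and phase == "up":
                    nxt_phase = "down"      # at most one peer hop, then descend
                elif rel == "p2c":
                    nxt_phase = "down"      # descending is always allowed
                else:
                    continue                # would create a "valley": prune
                yield from valley_free_paths(nxt, dst, path + [nxt], nxt_phase)

        for p in valley_free_paths(1, 4):
            print(p)  # [1, 2, 3, 4]: up to AS2, one peer hop to AS3, down to AS4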

    Network overload avoidance by traffic engineering and content caching

    The Internet traffic volume continues to grow at a great rate, now driven by video and TV distribution. For network operators it is important to avoid congestion in the network and to meet service-level agreements with their customers. This thesis presents work on two methods operators can use to reduce link loads in their networks: traffic engineering and content caching. The thesis studies access patterns for TV and video and the potential for caching. The investigation is done both by simulation and by analysing logs from a large TV-on-Demand system over four months. The results show that a small set of programs accounts for a large fraction of the requests and that a comparatively small local cache can significantly reduce peak link loads during prime time. The investigation also demonstrates how the popularity of programs changes over time and shows that the access pattern in a TV-on-Demand system depends very much on the content type. For traffic engineering, the objective is to avoid congestion in the network and to make better use of available resources by adapting the routing to the current traffic situation. The main challenge for traffic engineering in IP networks is coping with the dynamics of Internet traffic demands. This thesis proposes L-balanced routings, which route the traffic on the shortest paths possible while ensuring that no link is utilised above a given level L. L-balanced routing gives efficient routing of traffic and controlled spare capacity to handle unpredictable changes in traffic. We present an L-balanced routing algorithm and a heuristic search method for finding L-balanced weight settings for the legacy routing protocols OSPF and IS-IS, and show that the search and the resulting weight settings work well in real network scenarios.
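
    A minimal sketch of the L-balanced check, assuming toy link weights, capacities, and demands (none of which come from the thesis): route each demand on its shortest path under a given OSPF-style weight setting, accumulate link loads, and verify that no link is utilised above L. A weight-search heuristic would wrap such an evaluation in a loop that perturbs the weights.

        import heapq

        WEIGHTS = {("a", "b"): 1, ("b", "a"): 1, ("b", "c"): 1,
                   ("c", "b"): 1, ("a", "c"): 3, ("c", "a"): 3}
        CAPACITY = {e: 100.0 for e in WEIGHTS}          # Mbit/s, hypothetical
        DEMANDS = [("a", "c", 40.0), ("b", "c", 30.0)]  # (src, dst, Mbit/s)

        def shortest_path(weights, src, dst):
            # Plain Dijkstra over the weighted directed link set.
            adj = {}
            for (u, v), w in weights.items():
                adj.setdefault(u, []).append((v, w))
            dist, prev, heap = {src: 0}, {}, [(0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    break
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj.get(u, []):
                    if d + w < dist.get(v, float("inf")):
                        dist[v], prev[v] = d + w, u
                        heapq.heappush(heap, (d + w, v))
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]

        def max_utilisation(weights, capacity, demands):
            load = dict.fromkeys(capacity, 0.0)
            for src, dst, volume in demands:
                p = shortest_path(weights, src, dst)
                for edge in zip(p, p[1:]):
                    load[edge] += volume
            return max(load[e] / capacity[e] for e in capacity)

        L = 0.7
        u = max_utilisation(WEIGHTS, CAPACITY, DEMANDS)
        print(f"max utilisation {u:.2f}:", "within L" if u <= L else "exceeds L")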

    Rethinking Routing and Peering in the era of Vertical Integration of Network Functions

    Content providers typically control digital content consumption services and derive most of their revenue from an all-you-can-eat model based on subscriptions or hyper-targeted advertisements. The recent trend, revamping the existing Internet architecture and design, is vertical integration, in which a content provider and an access ISP act as a single body in a sugarcane form. As this vertical-integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. Current routing is expected to need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery. This involves developing new features for handling traffic with reduced latency, tackling routing scalability issues in a more secure way, and offering new services at lower cost. Given that the prices of DRAM and TCAM in legacy routers are not necessarily decreasing at the desired pace, cloud computing can be a great solution for managing the increasing computation and memory complexity of routing functions in a centralized manner at optimized expense. Focusing on the attributes of existing routing cost models, and exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether integrating cloud services with legacy routing would be economically beneficial and improve cost-efficiency. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of peering between the new, emerging, content-dominated sugarcane ISPs and its implications for the health of Internet economics. To this end, we introduce meta-peering, a term that encompasses the automation efforts related to peering – from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuously monitoring for and flagging any violation – one of the many outgrowths of vertical integration, which could be offered to ISPs as a standalone service.
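
    As a toy illustration of the cloud-versus-legacy cost comparison the abstract mentions, the sketch below computes a multi-year break-even between an amortised TCAM/line-card upgrade and renting a memory-optimised cloud instance to host routing state. Every price in it is a hypothetical placeholder, not data from the paper.

        YEARS = 5
        tcam_upgrade_usd = 12_000.0   # hypothetical one-off line-card/TCAM upgrade
        cloud_month_usd = 150.0       # hypothetical memory-optimised VM, per month

        legacy_total = tcam_upgrade_usd             # one-off, amortised over the period
        cloud_total = cloud_month_usd * 12 * YEARS  # pay-as-you-go

        print(f"legacy over {YEARS}y: ${legacy_total:,.0f}")
        print(f"cloud  over {YEARS}y: ${cloud_total:,.0f}")
        print("cloud cheaper" if cloud_total < legacy_total else "legacy cheaper")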

    Internet traffic volumes characterization and forecasting

    Internet usage increases every year, and the need to estimate the growth of the generated traffic has become a major topic. Forecasting actual figures in advance is essential for bandwidth allocation, network design, and investment planning. In this thesis, novel mathematical equations are presented to model and predict long-term Internet traffic in terms of total aggregate volume, both globally and more locally. Historical traffic data from consecutive years reveal hidden numerical patterns as the values progress year over year, and this trend can be well represented with appropriate mathematical relations. The proposed formulae have excellent fitting properties over long-history measurements and can indicate forthcoming traffic for the next years with an exceptionally low prediction error. In cases where pending traffic data have already become available, the suggested equations provide more successful results than the corresponding projections from leading research worldwide. The studies also imply that future traffic depends strongly on past activity and on the growth of Internet users, provided that a large and representative sample of pertinent data from wide geographical areas exists. To the best of my knowledge, this work is the first to introduce effective prediction methods that rely exclusively on the static attributes and the progression properties of historical values.
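
    The flavour of "fit a relation to historical volumes, then extrapolate" can be sketched as below. The yearly volumes are synthetic placeholders, not the thesis data, and the exponential form is just one plausible choice of relation, not the thesis' equations.

        import numpy as np

        years = np.arange(2010, 2020)
        volume_eb = np.array([20.2, 27.5, 37.1, 50.3, 68.4,       # exabytes/month,
                              92.6, 125.0, 170.1, 230.4, 312.0])  # synthetic data

        # Fit log(volume) = a*year + b, i.e. volume = exp(b) * exp(a)**year.
        a, b = np.polyfit(years, np.log(volume_eb), 1)
        print(f"implied yearly growth: {np.exp(a) - 1:.1%}")

        for y in (2020, 2021, 2022):
            print(y, f"{np.exp(a * y + b):.0f} EB/month (extrapolated)")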

    Inferring multilateral peering

    The AS topology incompleteness problem stems from difficulties in discovering p2p links, and is amplified by the increasing popularity of Internet eXchange Points (IXPs) for supporting peering interconnection. We describe, implement, and validate a method for discovering currently invisible IXP peering links by mining the BGP communities used by IXP route servers to implement multilateral peering (MLP), including communities that signal the intent to restrict announcements to a subset of participants at a given IXP. Using route server data juxtaposed with a mapping of BGP community values, we infer 206K p2p links from 13 large European IXPs – four times more p2p links than are directly observable in public BGP data. The advantages of the proposed technique are threefold. First, it utilizes existing BGP data sources and requires neither the deployment of additional vantage points nor the acquisition of private data. Second, it requires only a few active queries, facilitating repeatability of the measurements. Finally, it offers a new source of data on the dense establishment of MLP at IXPs.
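
    A minimal sketch of the community-based inference, using synthetic routes and a commonly used encoding in which community (0, ASN) blocks announcement to ASN and (RS_ASN, ASN) restricts announcement to ASN only. The exact encoding varies per route server and is assumed here, not taken from the paper.

        RS_ASN = 64500                               # hypothetical route-server ASN
        PARTICIPANTS = {64496, 64497, 64498, 64499}  # IXP members (synthetic)

        # (origin ASN, prefix, communities seen on the route-server RIB entry)
        ROUTES = [
            (64496, "192.0.2.0/24",    set()),              # open: announce to all
            (64497, "198.51.100.0/24", {(0, 64498)}),       # block AS64498
            (64499, "203.0.113.0/24",  {(RS_ASN, 64496)}),  # only to AS64496
        ]

        def receivers(origin, communities):
            # Participants the route server will re-announce this route to.
            allowed = {asn for tag, asn in communities if tag == RS_ASN}
            blocked = {asn for tag, asn in communities if tag == 0}
            base = allowed if allowed else PARTICIPANTS - blocked
            return base - {origin}

        p2p_links = set()
        for origin, _prefix, comms in ROUTES:
            for peer in receivers(origin, comms):
                p2p_links.add(tuple(sorted((origin, peer))))

        print(sorted(p2p_links))  # each pair is an inferred multilateral p2p link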

    Improving the Accuracy of the Internet Cartography

    As the global Internet expands to satisfy the demands of an ever-increasing connected population, profound changes are occurring in its interconnection structure. The pervasive growth of IXPs and CDNs, two initially independent but synergistic infrastructure sectors, has contributed to the gradual flattening of the Internet’s inter-domain hierarchy, with primary routing paths shifting from backbone networks to peripheral peering links. At the same time, IPv6 deployment has taken off due to the depletion of unallocated IPv4 addresses. These fundamental changes in Internet dynamics have obvious implications for network engineering and operations, which can benefit from accurate topology maps for understanding the properties of this critical infrastructure. This thesis presents a set of new measurement techniques and inference algorithms to construct a new type of semantically rich Internet map and improve the state of the art in Internet cartography. The author first develops a methodology to extract large-scale validation data from the Communities BGP attribute, which encodes rich routing meta-data in BGP messages. Based on this better-informed dataset, the author proceeds to analyse popular assumptions about inter-domain routing policies and to devise a more accurate model of inter-AS business relationships. Accordingly, the thesis proposes a new relationship inference algorithm that accurately captures both simple and complex AS relationships across two dimensions: prefix type and geographic location. Validation against three sources of ground-truth data reveals that the proposed algorithm achieves near-perfect accuracy. However, any inference approach is constrained by the inability of existing topology data sources to provide a complete view of the inter-domain topology. To limit the topology incompleteness problem, the author augments traditional BGP data with routing policy data obtained directly from IXPs, discovering massive peering meshes which have thus far been largely invisible.
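
    For flavour, the sketch below runs a drastically simplified, degree-based relationship inference in the spirit of Gao's classic heuristic: on each AS path, the highest-degree AS is treated as the top of the hill, edges before it as customer-to-provider, and edges after it as provider-to-customer. The AS paths are synthetic, and this is a named stand-in, not the more accurate multi-dimensional algorithm the thesis proposes.

        AS_PATHS = [                  # synthetic AS_PATH attributes
            [64496, 64500, 64501],
            [64497, 64500, 64502],
            [64498, 64500, 64501],
        ]

        # Degree as a crude proxy for "size": count distinct neighbours per AS.
        neigh = {}
        for path in AS_PATHS:
            for a, b in zip(path, path[1:]):
                neigh.setdefault(a, set()).add(b)
                neigh.setdefault(b, set()).add(a)
        degree = {asn: len(s) for asn, s in neigh.items()}

        rels = {}
        for path in AS_PATHS:
            top = max(range(len(path)), key=lambda i: degree[path[i]])
            for i, (a, b) in enumerate(zip(path, path[1:])):
                rels[(a, b)] = "c2p" if i < top else "p2c"  # uphill before the top

        for edge, rel in sorted(rels.items()):
            print(edge, rel)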