
    Revisiting core traffic growth in the presence of expanding CDNs

    Traffic growth forecasts announce a dramatic future for core networks, which struggle to keep pace with the increase in traffic. Internet traffic growth primarily stems from the proliferation of cloud services and the massive amounts of data distributed by the content delivery networks (CDNs) hosting these services. In this paper, we investigate the evolution of core traffic in the presence of growing CDNs. Expanding the capacities of existing data centers (DCs) directly translates the forecasted compound annual growth rate (CAGR) of user traffic into the CAGR of carried core link traffic. On the other hand, expanding CDNs by building new geographically dispersed DCs can significantly reduce the predicted core traffic growth rates by placing content closer to the users. However, reducing DC-to-user traffic by building new DCs comes at the cost of increased inter-DC content synchronization traffic. Thus, the resulting overall core traffic growth depends on the types of services supported and their associated synchronization requirements. In this paper, we present a long-term evolution study that assesses the implications of different CDN expansion strategies on core network traffic growth, considering a mix of services in proportions and growth rates corresponding to well-known traffic forecasts. Our simulations indicate that CDNs may have a significant incentive to build more DCs, depending on the service types they offer, and that current alarming traffic predictions may be somewhat overestimated for core networks in the presence of expanding CDNs. (C) 2019 The Authors. Published by Elsevier B.V. The research leading to these results has received funding from the European Commission for the H2020-ICT-2016-2 METRO-HAUL project (G.A. 761727) and has been partially funded by the Spanish national project ONOFRE-2 (TEC2017-84423-C3-1-P, MINECO/AEI/FEDER, UE)
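    To make the trade-off concrete, the toy model below (not the paper's simulation framework; all parameters are hypothetical) treats total core traffic as DC-to-user traffic, which shrinks as DCs multiply and content moves closer to users, plus inter-DC synchronization traffic, which grows with the number of DCs:

```python
# Toy model of the core-traffic trade-off described above.
# All parameters are invented, not taken from the paper.

def core_traffic(num_dcs, user_demand, sync_fraction):
    """Estimate total core traffic (arbitrary units).

    user_demand:   aggregate DC-to-user demand
    sync_fraction: share of content updates each DC must replicate
    """
    # DC-to-user traffic: more DCs -> shorter paths -> fewer core
    # links traversed per bit (modeled crudely as 1/num_dcs scaling).
    user_traffic = user_demand / num_dcs
    # Synchronization traffic: every additional DC must be kept
    # consistent, so it grows linearly with the number of DCs.
    sync_traffic = sync_fraction * user_demand * (num_dcs - 1)
    return user_traffic + sync_traffic

for n in (1, 2, 4, 8, 16):
    # Low-sync services (e.g., static video) favor many DCs;
    # high-sync services quickly lose the benefit.
    print(n,
          core_traffic(n, user_demand=100.0, sync_fraction=0.01),
          core_traffic(n, user_demand=100.0, sync_fraction=0.2))
```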

    The growing complexity of content delivery networks: Challenges and implications for the Internet ecosystem

    Since the commercialization of the Internet, content and related applications, including video streaming, news, advertisements, and social interaction, have moved online. It is broadly recognized that the rise of all of these different types of content (static and dynamic, and increasingly multimedia) has been one of the main forces behind the phenomenal growth of the Internet and its emergence as essential infrastructure for how individuals across the globe gain access to the content sources they want. To accelerate the delivery of diverse content in the Internet and to provide commercial-grade performance for video delivery and the Web, Content Delivery Networks (CDNs) were introduced. This paper describes the current CDN ecosystem and the forces that have driven its evolution. We outline the different CDN architectures and consider their relative strengths and weaknesses. Our analysis highlights the role of location, the growing complexity of the CDN ecosystem, and their relationship to and implications for interconnection markets. EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNet

    Content placement in 5G‐enabled edge/core data center networks resilient to link cut attacks

    High throughput, resilience, and low latency requirements drive the development of 5G-enabled content delivery networks (CDNs), which combine core data centers (cDCs) with edge data centers (eDCs) that cache the most popular content closer to the end users to reduce traffic load and latency. Deployed over the existing optical network infrastructure, CDNs are vulnerable to link cut attacks aimed at disrupting the overlay services. Planning a CDN that balances the stringent service requirements and increases resilience to attacks in a cost-efficient way entails solving the content placement problem (CPP) across the cDCs and eDCs. This article proposes a framework for finding Pareto-optimal solutions with minimal user-to-content distance and maximal robustness to targeted link cuts, under a defined budget. We formulate two optimization problems as integer linear programming (ILP) models. The first, denoted K-best CPP with minimal distance (K-CPP-minD), identifies the eDC/cDC placement solutions with minimal user-to-content distance. The second performs critical link set detection to evaluate the resilience of the K-CPP-minD solutions to targeted fiber cuts. Extensive simulations verify that the eDC/cDC selection obtained by our models improves network resilience to link cut attacks without adversely affecting the user-to-content distances or the core network traffic mitigation benefits.
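    Content placement of this kind lends itself to a compact ILP. The sketch below is a generic facility-location variant in that spirit, minimizing total user-to-content distance under a budget on opened DCs; it is not the paper's K-CPP-minD model, and the sites, users, distances, and budget are invented for illustration (it uses the PuLP library):

```python
# Generic facility-location ILP in the spirit of K-CPP-minD:
# minimize total user-to-content distance, subject to a budget on
# the number of data centers opened. All inputs are hypothetical.
import pulp

users = ["u1", "u2", "u3"]
sites = ["dc1", "dc2"]                      # candidate eDC/cDC locations
dist = {("u1", "dc1"): 1, ("u1", "dc2"): 3,
        ("u2", "dc1"): 2, ("u2", "dc2"): 1,
        ("u3", "dc1"): 4, ("u3", "dc2"): 1}
budget = 1                                  # max number of DCs to open

prob = pulp.LpProblem("cpp_min_distance", pulp.LpMinimize)
open_dc = pulp.LpVariable.dicts("open", sites, cat="Binary")
assign = pulp.LpVariable.dicts("assign", list(dist), cat="Binary")

# Objective: total user-to-content distance over all assignments.
prob += pulp.lpSum(dist[k] * assign[k] for k in dist)
for u in users:
    # Each user fetches its content from exactly one DC.
    prob += pulp.lpSum(assign[(u, s)] for s in sites) == 1
for (u, s) in dist:
    # Users may only be assigned to DCs that are actually opened.
    prob += assign[(u, s)] <= open_dc[s]
# Budget constraint on the number of opened DCs.
prob += pulp.lpSum(open_dc[s] for s in sites) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("opened:", [s for s in sites if open_dc[s].value() == 1])
```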

    Rethinking Routing and Peering in the era of Vertical Integration of Network Functions

    Content providers typically control the digital content consumption services and get the most revenue by implementing an all-you-can-eat model via subscriptions or hyper-targeted advertisements. Revamping the existing Internet architecture and design, vertical integration, where a content provider and an access ISP act as a unibody in a "sugarcane" form, seems to be the recent trend. As this vertical integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. The current routing will likely need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery. This involves developing new features to handle traffic with reduced latency, to tackle routing scalability issues more securely, and to offer new services at lower cost. Considering that prices of DRAM and TCAM in legacy routers are not necessarily decreasing at the desired pace, cloud computing can be a great solution for managing the increasing computation and memory complexity of routing functions in a centralized manner with optimized expenses. Focusing on the attributes associated with existing routing cost models and exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether it would be economically beneficial to integrate cloud services with legacy routing for improved cost-efficiency. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of peering between the new, emerging, content-dominated sugarcane ISPs and the health of Internet economics. To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering, from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuously monitoring and notifying of any violation. Meta-peering is one of the many outcroppings of the vertical integration process and could be offered to ISPs as a standalone service.
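    As a purely illustrative companion to the cloud-versus-legacy cost question, the sketch below compares an amortized router TCAM memory cost against a cloud-hosted routing-table cost. Every figure in it (TCAM price, table size, VM price, lifetime) is an invented assumption, not data from the paper:

```python
# Back-of-the-envelope comparison of amortized routing-table memory
# cost in a legacy router vs. hosting the RIB in a cloud VM.
# Every number below is hypothetical, chosen only for illustration.

FIB_ENTRIES = 1_000_000            # roughly a full BGP table
BITS_PER_ENTRY = 64                # assumed encoding per entry
TCAM_COST_PER_MBIT = 50.0          # hypothetical $/Mbit of TCAM
ROUTER_LIFETIME_MONTHS = 48        # amortization period

CLOUD_VM_PER_MONTH = 80.0          # hypothetical VM holding the RIB
CLOUD_STORAGE_PER_MONTH = 2.0      # hypothetical snapshot storage

tcam_mbit = FIB_ENTRIES * BITS_PER_ENTRY / 1e6
legacy_monthly = tcam_mbit * TCAM_COST_PER_MBIT / ROUTER_LIFETIME_MONTHS
cloud_monthly = CLOUD_VM_PER_MONTH + CLOUD_STORAGE_PER_MONTH

print(f"legacy TCAM, amortized: ${legacy_monthly:.2f}/month")
print(f"cloud-hosted RIB:       ${cloud_monthly:.2f}/month")
```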

    Characterizing the IoT ecosystem at scale

    Internet of Things (IoT) devices are extremely popular with home, business, and industrial users. To provide their services, they typically rely on a backend server infrastructure on the Internet, which collectively forms the IoT ecosystem. This ecosystem is rapidly growing and offers users an increasing number of services. It has also been a source and target of significant security and privacy risks. One notable example is the recent large-scale coordinated global attacks, like Mirai, which disrupted large service providers. Thus, characterizing this ecosystem yields insights that help end-users, network operators, policymakers, and researchers better understand it, obtain a detailed view, and keep track of its evolution. In addition, they can use these insights to inform their decision-making process for mitigating this ecosystem's security and privacy risks. In this dissertation, we characterize the IoT ecosystem at scale by (i) detecting IoT devices in the wild, (ii) conducting a case study to measure how deployed IoT devices can affect users' privacy, and (iii) detecting and measuring the IoT backend infrastructure. To conduct our studies, we collaborated with a large European Internet Service Provider (ISP) and a major European Internet eXchange Point (IXP). They routinely collect large volumes of passive, sampled data, e.g., NetFlow and IPFIX, for their operational purposes. These data sources help providers obtain insights about their networks, and we used them to characterize the IoT ecosystem at scale. We start with IoT devices and study how to track and trace their activity in the wild. We developed and evaluated a scalable methodology to accurately detect and monitor IoT devices with limited, sparsely sampled data in the ISP and IXP. Next, we conduct a case study to measure how a myriad of deployed devices can affect the privacy of ISP subscribers. Unfortunately, we found that the privacy of a substantial fraction of IPv6 end-users is at risk. We noticed that a single device at home that encodes its MAC address into the IPv6 address can be utilized as a tracking identifier for the entire end-user prefix, even if other devices use IPv6 privacy extensions. Our results showed that IoT devices contribute the most to this privacy leakage. Finally, we focus on the backend server infrastructure and propose a methodology to identify and locate IoT backend servers operated by cloud services and IoT vendors. We analyzed their IoT traffic patterns as observed in the ISP. Our analysis sheds light on their diverse operational and deployment strategies. The need to issue a priori unknown network-wide queries against the large volumes of network flow capture data used in our studies motivated us to develop Flowyager, a system built on top of existing traffic capture utilities that relies on flow summarization techniques to reduce (i) the storage and transfer cost of flow captures and (ii) query response time. We deployed a prototype of Flowyager at both the IXP and ISP.
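    The IPv6 privacy leak described above hinges on EUI-64 interface identifiers, which embed the device's MAC address with an ff:fe marker in the middle and the universal/local bit flipped. The snippet below is a minimal check for this pattern, not the dissertation's measurement pipeline, and the example addresses are made up:

```python
# Minimal check for the EUI-64 privacy leak: if an IPv6 interface
# identifier carries ff:fe in its middle bytes, the surrounding six
# bytes are the device's MAC address (with the universal/local bit
# of the first byte flipped). Addresses below are made up.
import ipaddress

def mac_from_eui64(addr: str):
    """Return the MAC embedded in an EUI-64 IPv6 address, else None."""
    iid = ipaddress.IPv6Address(addr).packed[8:]  # low 64 bits (IID)
    if iid[3] != 0xFF or iid[4] != 0xFE:
        return None                               # not EUI-64 derived
    first = iid[0] ^ 0x02                         # undo the U/L bit flip
    mac = bytes([first, iid[1], iid[2], iid[5], iid[6], iid[7]])
    return ":".join(f"{b:02x}" for b in mac)

print(mac_from_eui64("2001:db8::211:22ff:fe33:4455"))  # 00:11:22:33:44:55
print(mac_from_eui64("2001:db8::1"))                   # None (no leak)
```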

    Comparative Modalities of Network Neutrality

    This project examines the ongoing debate over internet content discrimination, more commonly referred to as network neutrality. It offers a new approach to this issue by combining a critical political economy perspective with Lawrence Lessig's four modalities of regulation: policy, architecture, markets, and norms. It presents a critical, comparative case study analysis of how architecture, markets, and norms have shaped United States policy, along with comparative examples from select international case studies facing similar regulatory issues. Its findings suggest that while each of the four modalities plays a significant role in the regulation and persistence of network neutrality, there is a need for clearer, more robust policy measures to address content discrimination online. Based on these analyses, the author offers policy recommendations for future network neutrality regulation.

    Planning and Optimization of Optical Networks in the Future Internet

    The strict requirements of the future 5G network, together with high growth forecasts for IP traffic, mainly driven by the proliferation of cloud-based services, create a future full of uncertainties from the perspective of large optical telecommunications networks. Planning and optimization tasks will be essential to ensure that these requirements can be met in an economically viable manner. This thesis first analyzes the validity of traffic growth predictions, in light of current applications and in a scenario where latency requirements drive the evolution of CDNs through the deployment of micro-datacenters; in particular, it analyzes the effect of this evolution on the traffic carried by optical backbone networks. Second, the thesis studies options for meeting the expected network requirements from three approaches: a) optimization in the design and management of CDNs; b) programmable network control based on software-defined networking (SDN) and network function virtualization (NFV); and c) the possible introduction of Space Division Multiplexing (SDM) technology to expand the capacity of transport networks as effective support for the growth of IP traffic. This work concludes, first, that the evolution of backbone traffic in the future Internet will depend on the nature of the services that applications offer, with the balance between user traffic and synchronization traffic being especially decisive. Moreover, the results suggest that bringing content closer to the users can have the net effect of reducing the amount of traffic carried by core networks relative to current forecasts. The use cases analyzed in the SDN-NFV environment establish the need for optimization to provide flexibility and programmability in the migration to virtualized network systems, a set of capabilities essential to satisfying the requirements of future services in the 5G paradigm. In addition, the proofs of concept presented support the joint optimization of network and IT resources for provisioning service chains in SDN-NFV-based networks. Finally, for SDM-based optical networks, the proposed space channel restriction (SCC) scheme emerges as a valid option to implement in ROADMs for flex-grid contexts. The validity of this proposal lies in its reduced implementation cost, owing to lower complexity and greater equipment availability, at the expense of minimal performance loss compared to fully non-restrictive options.

    O Peer, Where Art Thou? Uncovering Remote Peering Interconnections at IXPs

    Internet eXchange Points (IXPs) are Internet hubs that provide the switching infrastructure to interconnect networks and exchange traffic. While the initial goal of IXPs was to bring together networks residing in the same city or country, and thus keep local traffic local, this model is gradually shifting: many networks connect to IXPs without having a physical presence at the IXP's switch(es). This practice, called remote peering, is changing the Internet topology and economy and has become the subject of a contentious debate within the network operators community. However, despite the increasing attention it is drawing, the understanding of the characteristics and impact of remote peering is limited. In this work, we lift the veil on remote peering at IXPs by introducing and thoroughly validating a methodology for discovering remote peers. We (i) infer remote peers globally with high accuracy (>95%), (ii) study the evolution of remote peering over time, and (iii) evaluate its impact on Internet performance and resilience. We observe that remote peering is a significantly common practice in all the IXPs studied; for the largest IXPs, remote peers account for 40% of the member base. We also show that IXP growth today is mainly driven by remote peering, which contributes twice as much as local peering.
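    One intuition behind inferring remote peers is that a member colocated at the IXP should exhibit a metro-scale round-trip time from the IXP's vantage point. The toy classifier below uses RTT alone, with an assumed 2 ms threshold and made-up measurements; the paper's validated methodology combines several features, so treat this only as a sketch of the idea:

```python
# Toy remote-peering classifier: a member whose minimum RTT from the
# IXP's vantage point far exceeds metro-area propagation delay is
# unlikely to be physically present at the IXP. Threshold and
# measurements are assumptions, not values from the paper.

METRO_RTT_MS = 2.0   # assumed upper bound for a colocated member

def classify_members(min_rtts_ms: dict) -> dict:
    """Map member ASN -> 'local' or 'remote' from min-RTT samples."""
    return {asn: ("remote" if rtt > METRO_RTT_MS else "local")
            for asn, rtt in min_rtts_ms.items()}

# Hypothetical min-RTT measurements toward four IXP members.
samples = {64496: 0.4, 64497: 1.8, 64498: 23.5, 64499: 87.0}
print(classify_members(samples))
# {64496: 'local', 64497: 'local', 64498: 'remote', 64499: 'remote'}
```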

    Traffic Re-engineering: Extending Resource Pooling Through the Application of Re-feedback

    Parallelism pervades the Internet, yet efficiently pooling this increasing path diversity has remained elusive. With no holistic solution for resource pooling, each layer of the Internet architecture attempts to balance traffic according to its own needs, potentially at the expense of others. From the edges, traffic is implicitly pooled over multiple paths by retrieving content from different sources. Within the network, traffic is explicitly balanced across multiple links through the use of traffic engineering. This work explores how the current architecture can be realigned to facilitate resource pooling at both the network and transport layers, where tension between stakeholders is strongest. The central theme of this thesis is that traffic engineering can be performed more efficiently, flexibly, and robustly through the use of re-feedback. A cross-layer architecture is proposed for sharing the responsibility for resource pooling across both hosts and network. Building on this framework, two novel forms of traffic management are evaluated. Efficient pooling of traffic across paths is achieved through the development of an in-network congestion balancer, which can function in the absence of multipath transport. Network and transport mechanisms are then designed and implemented to facilitate path fail-over, greatly improving resilience without requiring receiver-side cooperation. These contributions are framed by a longitudinal measurement study which provides evidence for many of the design choices taken. A methodology for scalably recovering flow metrics from passive traces is developed and then systematically applied to over five years of interdomain traffic data. The resulting findings challenge traditional assumptions about the preponderance of congestion control in resource sharing, with over half of all traffic being constrained by limits other than network capacity. All of the above represent concerted attempts to rethink and reassert traffic engineering in an Internet where competing solutions for resource pooling proliferate. By delegating responsibilities currently overloading the routing architecture towards hosts and re-engineering traffic management around the core strengths of the network, the proposed architectural changes allow the tussle surrounding resource pooling to be drawn out without compromising the scalability and evolvability of the Internet.
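    As a small illustration of the kind of flow-metric recovery mentioned above, the sketch below aggregates hypothetical packet records by 5-tuple and derives per-flow byte counts and mean rates; the thesis's methodology is designed to do this scalably over years of traces, which this toy does not attempt:

```python
# Minimal sketch of recovering per-flow metrics from a passive
# packet trace: group packets by 5-tuple, then derive byte counts
# and mean rates. Records are hypothetical (timestamp, 5-tuple,
# payload bytes); this is not the thesis's scalable methodology.
from collections import defaultdict

def flow_metrics(packets):
    """packets: iterable of (ts_sec, five_tuple, nbytes) records."""
    flows = defaultdict(lambda: {"bytes": 0, "first": None, "last": None})
    for ts, key, nbytes in packets:
        f = flows[key]
        f["bytes"] += nbytes
        f["first"] = ts if f["first"] is None else min(f["first"], ts)
        f["last"] = ts if f["last"] is None else max(f["last"], ts)
    for key, f in flows.items():
        dur = max(f["last"] - f["first"], 1e-6)      # avoid div-by-zero
        yield key, f["bytes"], 8 * f["bytes"] / dur  # mean bits/sec

trace = [(0.00, ("10.0.0.1", "10.0.0.2", 1234, 80, "tcp"), 1500),
         (0.50, ("10.0.0.1", "10.0.0.2", 1234, 80, "tcp"), 1500),
         (1.00, ("10.0.0.1", "10.0.0.2", 1234, 80, "tcp"), 1500)]
for key, nbytes, bps in flow_metrics(trace):
    print(key[0], "->", key[1], nbytes, "bytes,", round(bps), "bit/s")
```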