
    The State of Network Neutrality Regulation

    The Network Neutrality (NN) debate refers to the battle over the design of a regulatory framework for preserving the Internet as a public network and open innovation platform. Fueled by concerns that broadband access service providers might abuse network management to discriminate against third party providers (e.g., content or application providers), policymakers have struggled with designing rules that would protect the Internet from unreasonable network management practices. In this article, we provide an overview of the history of the debate in the U.S. and the EU and highlight the challenges that will confront network engineers designing and operating networks as the debate continues to evolve.

    A Model-Based Simulator for Content Delivery Network using SimEvents MATLAB-Simulink

    Abstract—Simulation has become a popular methodology among network researchers thanks to the availability of sophisticated simulation packages and their flexibility in model construction and result validation. The objective of our research is to introduce a discrete-event simulation tool, the SimEvents toolbox, which runs on MATLAB-Simulink and offers a graphical user interface (GUI) together with a fully customizable simulator design. In this paper, we design a model-based network simulator named “CDNlink”. CDNlink is built on a queuing model and is used to study content delivery networks (CDNs). We design the simulator components for the Client, Router, DNS, and Server, and we simulate the CDN system to measure packet round-trip time (RTT) and the server’s queue length.
    Keywords—simulink, content delivery network, model-based simulation, simevents, cdnlink
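    The abstract above describes a simulator built on a queuing model. The following fragment is a minimal discrete-event sketch of that idea in Python (one CDN server modeled as a FIFO queue, reporting request round-trip time and queue length); it is not the authors' CDNlink/SimEvents implementation, and the arrival rate, service rate, and propagation delay are assumed values.

    import heapq
    import random

    # Model one CDN server as a FIFO queue (Poisson arrivals, exponential service);
    # the rates and the propagation delay below are illustrative assumptions.
    random.seed(1)
    ARRIVAL_RATE = 8.0    # requests per second
    SERVICE_RATE = 10.0   # requests per second
    PROP_DELAY = 0.020    # one-way client/server propagation delay, seconds
    SIM_TIME = 100.0      # simulated seconds

    events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
    pending = []          # arrival times of requests waiting for or in service (FIFO)
    busy_until = 0.0
    rtts, queue_lengths = [], []

    while events:
        t, kind = heapq.heappop(events)
        if t > SIM_TIME:
            break
        if kind == "arrival":
            pending.append(t)
            queue_lengths.append(len(pending))          # queue length sampled at arrivals
            heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
            start = max(t, busy_until)                  # FIFO, work-conserving server
            busy_until = start + random.expovariate(SERVICE_RATE)
            heapq.heappush(events, (busy_until, "departure"))
        else:                                           # departure of the oldest request
            arrived = pending.pop(0)
            rtts.append((t - arrived) + 2 * PROP_DELAY) # waiting + service + propagation

    print(f"mean RTT: {1000 * sum(rtts) / len(rtts):.1f} ms")
    print(f"mean queue length at arrivals: {sum(queue_lengths) / len(queue_lengths):.2f}")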

    Content delivery over a hybrid satellite/terrestrial network

    The growth and intensification of Internet usage make it necessary for existing networks to evolve. However, there are strong inequalities between urban areas, which are well served and concentrate most of the investment, and rural areas, which are poorly served and neglected. Faced with this situation, users in these areas turn to other means of access, notably satellite Internet access. Satellite access, however, suffers from one limitation: the large delay induced by signal propagation between the ground and geostationary orbit. In this thesis, we study the simultaneous use of a terrestrial access network, characterized by low throughput and low delay, and a satellite access network, characterized by high latency and higher throughput. In parallel, content delivery networks (CDNs), made up of large numbers of cache servers, provide an answer to growing traffic and to latency and throughput requirements. However, being located in core networks, the caches remain far from users and do not reach the access networks. Internet service providers (ISPs) have therefore become interested in deploying such servers within their own networks, which are then called TelCo CDNs. Content delivery ideally requires interconnecting CDN operators with TelCo CDNs, so that delivery can be delegated to the latter, which can then optimize content delivery over networks they know better. We therefore focus on optimizing content delivery over a hybrid satellite/terrestrial network integrated into a CDN delivery chain. We first describe an architecture that, through CDN interconnection, supports content delivery over the hybrid network. We then study the value of the information provided by the CDN context for routing over such an architecture, and in this setting we propose a routing mechanism based on content size. Finally, we show that our approach outperforms the multipath transport protocol MPTCP.
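    As a hedged illustration of the size-based routing mechanism mentioned above (a sketch, not the thesis' actual design), the following Python snippet estimates the delivery time of a content object over a low-delay, low-rate terrestrial path and a high-delay, high-rate geostationary satellite path and picks the faster one; the link rates and delays are placeholder assumptions.

    # Choose the terrestrial or the satellite path based on content size.
    # Link characteristics are assumed placeholder values, not measurements.
    TERRESTRIAL = {"rate_bps": 2e6,  "one_way_delay_s": 0.025}   # low rate, low delay
    SATELLITE   = {"rate_bps": 20e6, "one_way_delay_s": 0.270}   # high rate, GEO delay

    def delivery_time(size_bytes: float, link: dict) -> float:
        """Rough completion time: round-trip setup delay plus transmission time."""
        return 2 * link["one_way_delay_s"] + (size_bytes * 8) / link["rate_bps"]

    def pick_path(size_bytes: float) -> str:
        """Small objects go over the terrestrial link, large ones via satellite,
        simply by comparing the estimated delivery times."""
        t_terr = delivery_time(size_bytes, TERRESTRIAL)
        t_sat = delivery_time(size_bytes, SATELLITE)
        return "terrestrial" if t_terr <= t_sat else "satellite"

    for size in (50e3, 500e3, 5e6, 50e6):   # 50 kB ... 50 MB
        print(f"{int(size):>10} bytes -> {pick_path(size)}")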

    Reducing service creation time leveraging on network function virtualization

    Fifth-generation (5G) networks are envisioned to simultaneously support several services with different connectivity requirements. In this respect, service creation time is a key performance indicator (KPI) for service providers when planning the migration to 5G. For example, the European 5G infrastructure public private partnership (5G-PPP) suggests reducing this time from 90 hours to 90 minutes across the different phases of the service creation time KPI identified by this organization. This reduction can be achieved by leveraging state-of-the-art 5G technologies: network function virtualization, network slicing, software-defined networking, and cloud computing, among others. Although some authors and projects have already studied the service creation time KPI in 5G, no literature comprehensively analyzes and presents results for each phase of this KPI. In this article, we explore the potential of network function virtualization technologies to reduce service creation time. To this end, we investigate the various phases of the service creation time KPI by designing and implementing a realistic and complex network service that leverages network function virtualization and related technologies. For our use case, we chose a content delivery network service specifically designed to distribute video. This decision was based on an analysis that considered several parameters, such as the complexity of the design, fulfillment, and service assurance phases. We dissected all phases of the service creation time KPI required to turn our service blueprint into a deployment using network function virtualization tools. We then defined and conducted several experiments oriented toward analyzing the different phases of the service creation time KPI. After analyzing the results, we conclude that using these new tools permits a substantial reduction in the time taken by each phase of the service creation time KPI.
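    To make the service creation time KPI concrete, here is a minimal sketch that sums hypothetical per-phase durations of one deployment and compares the total against the 5G-PPP targets of 90 hours and 90 minutes; the phase names and values are illustrative assumptions, not the measurements reported in the article.

    # Hypothetical per-phase durations (minutes) for one service deployment.
    phases_min = {
        "design / onboarding": 20.0,
        "instantiation of virtual resources": 12.0,
        "VNF configuration": 8.0,
        "service activation and assurance setup": 5.0,
    }

    total_min = sum(phases_min.values())
    legacy_reference_min = 90 * 60   # 90 hours
    target_5g_min = 90               # 90 minutes (5G-PPP target)

    print(f"total service creation time: {total_min:.0f} min")
    print(f"meets the 90-minute target: {total_min <= target_5g_min}")
    print(f"reduction vs. the 90-hour reference: {100 * (1 - total_min / legacy_reference_min):.1f}%")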

    Content delivery network for secure of software defined networking by using IPv4, OpenFlow, and ALTO

    Software-defined networking (SDN) makes networks programmable, which eases maintenance and configuration: network administrators can change traffic rules while the network is running. SDN is an emerging, centralized, programmable architecture, and this introduces potential security concerns. TLS can secure the control plane, but it is computationally demanding, complex to configure, and not compatible with the OpenFlow protocol. A content delivery network, by contrast, can be used to scale network services dynamically and automatically. To mitigate the threat, we propose a CDN-based architecture for SDN. In this architecture, we use the Application-Layer Traffic Optimization (ALTO) protocol so that servers can map the network and produce a summarized view of it. We also hide the identity of the forwarding devices by exploiting the IPv4 and OpenFlow transaction-identification fields in the control packets, implementing two authentication structures based on the efficient Salsa20 stream cipher. Finally, the results show that the proposed architecture can efficiently eliminate several types of attack and makes attackers easier to detect.
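    As an illustration of hiding a device identifier inside the OpenFlow transaction-identification (xid) field with a Salsa20 keystream (a sketch of the general idea only, not the paper's two authentication structures), the following Python snippet uses the Salsa20 cipher from PyCryptodome; the key, nonce, and device identifier are placeholder values.

    # Mask a 32-bit switch identifier with Salsa20 before placing it in the
    # OpenFlow xid field. Requires `pip install pycryptodome`; key, nonce,
    # and device identifier are placeholders.
    import struct
    from Crypto.Cipher import Salsa20

    KEY = bytes(range(32))     # 256-bit shared key (placeholder)
    NONCE = b"\x00" * 8        # 64-bit nonce; in practice, unique per message

    def mask_xid(device_id: int, key: bytes = KEY, nonce: bytes = NONCE) -> int:
        """Encrypt a 32-bit device identifier so the xid reveals nothing on the wire."""
        cipher = Salsa20.new(key=key, nonce=nonce)
        return struct.unpack("!I", cipher.encrypt(struct.pack("!I", device_id)))[0]

    def unmask_xid(xid: int, key: bytes = KEY, nonce: bytes = NONCE) -> int:
        """Recover the identifier; the same keystream decrypts what it encrypted."""
        cipher = Salsa20.new(key=key, nonce=nonce)
        return struct.unpack("!I", cipher.decrypt(struct.pack("!I", xid)))[0]

    original = 0x0000002A
    on_wire = mask_xid(original)
    print(f"device id {original:#010x} travels as {on_wire:#010x}")
    assert unmask_xid(on_wire) == original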

    Global evaluation of CDNs performance using PlanetLab

    Since they were introduced in the market, Content Distribution Networks (CDNs) have been growing in importance due to the “instantaneity” that today's web users expect. Thanks to the increase in access speed, especially in the last mile with technologies such as xDSL, HFC, and FTTH, page loading times have been reduced. However, the “instantaneity” those users want could not be obtained without techniques such as caching and content distribution through CDNs. These techniques aim to avoid fetching web objects from the origin web server, especially “heavy” objects such as multimedia files. A CDN not only provides a clever way of distributing content globally, but also prevents problems such as “flash crowd events”. Such situations can cause huge monetary losses because they attack the bottleneck introduced by clustering servers to reach scalability. The leading CDN provider is Akamai, and one of the most important decisions a CDN must make is which of the available servers is the best one for a user to fetch a specific web object from. This best-server selection employs a DNS-based technique whose objective is to map the request to the IP address of the best available server in terms of latency. This project presents a global performance evaluation of Akamai's server-selection technique using tools such as PlanetLab and Httperf. Different tests were run with the objective of comparing results across globally distributed users to identify the areas where Akamai performs well. To determine this, the results obtained with Akamai were also compared against a web page not distributed through a CDN. Finally, a linear correlation between the measured latencies and the number of hops was identified.
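    To illustrate the final step of the analysis, the correlation between latency and hop count, here is a short Python sketch that computes a Pearson correlation coefficient over a made-up sample; the values below are not PlanetLab measurements.

    import statistics

    # Made-up samples standing in for per-vantage-point measurements.
    latencies_ms = [12.0, 18.5, 25.1, 40.3, 55.0, 61.2, 80.4]   # measured RTTs
    hop_counts   = [5,    7,    8,    11,   13,   14,   17]     # traceroute hop counts

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length samples."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    print(f"latency/hops correlation: {pearson(latencies_ms, hop_counts):.3f}")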

    Development of a system compliant with the Application-Layer Traffic Optimization Protocol

    Integrated Master's dissertation in Informatics Engineering. With the ever-increasing Internet usage that is following the start of the new decade, the need to optimize this world-scale network of computers becomes a big priority in the technological sphere, as the number of users rises and as the Quality of Service (QoS) demands grow from applications in domains such as media streaming and virtual reality. In the face of rising traffic and stricter application demands, a better understanding of how Internet Service Providers (ISPs) should manage their assets is needed. An important concern regards how applications use the underlying network infrastructure over which they reside. Most of these applications act with little regard for ISP preferences, as exemplified by their lack of care in achieving traffic locality during their operation, which would be a preferable feature for network administrators and could also improve application performance. However, even a best-effort attempt by applications to cooperate will hardly succeed if ISP policies aren't clearly communicated to them. Therefore, a system to bridge layer interests has much potential in helping achieve a mutually beneficial scenario. The main focus of this thesis is the Application-Layer Traffic Optimization (ALTO) working group, which was formed by the Internet Engineering Task Force (IETF) to explore standardizations for network information retrieval. This group specified a request-response protocol in which authoritative entities provide resources containing network status information and administrative preferences. Sharing of infrastructural insight is done with the intent of enabling a cooperative environment, between the network overlay and underlay, during application operations, to obtain better infrastructural resourcefulness and the consequent minimization of the associated operational costs. This work gives an overview of the historical network tussle between applications and service providers, presents the ALTO working group's project as a solution, implements an extended system built upon its ideas, and finally verifies the developed system's efficiency, in a simulation, when compared to classical alternatives.
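    For concreteness, the following Python sketch shows how an application might consume an ALTO-style cost map (in the spirit of RFC 7285) when choosing among candidate endpoints; the PID names, cost values, and endpoint-to-PID mapping are invented for illustration and are not part of the developed system.

    # Routing costs from the client's PID to candidate PIDs, as an ALTO server
    # might report them; all names and numbers are invented.
    cost_map = {
        "pid-isp-local":  1,
        "pid-isp-remote": 5,
        "pid-transit":    12,
    }

    # Candidate peer/server endpoints and the PIDs they belong to.
    candidate_peers = {
        "203.0.113.10": "pid-transit",
        "198.51.100.7": "pid-isp-remote",
        "192.0.2.44":   "pid-isp-local",
    }

    def pick_preferred(peers: dict, costs: dict) -> str:
        """Prefer the peer whose PID the ALTO server reports as cheapest to reach."""
        return min(peers, key=lambda ip: costs.get(peers[ip], float("inf")))

    print("preferred peer:", pick_preferred(candidate_peers, cost_map))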

    Content-aware Traffic Engineering

    Also appears as TU-Berlin technical report 2012-3, ISSN 1436-9915. Today, a large fraction of Internet traffic is originated by Content Providers (CPs) such as content distribution networks and hyper-giants. To cope with the increasing demand for content, CPs deploy massively distributed infrastructures. This poses new challenges for CPs, as they have to dynamically map end-users to appropriate servers without being fully aware of network conditions within an ISP or of the end-users' network locations. Furthermore, ISPs struggle to cope with rapid traffic shifts caused by the dynamic server selection process of CPs. In this paper, we argue that the challenges that CPs and ISPs face separately today can be turned into an opportunity. We show how they can jointly take advantage of the deployed distributed infrastructures to improve their operation and end-user performance. We propose Content-aware Traffic Engineering (CaTE), which dynamically adapts the traffic demand for content hosted on CPs by utilizing ISP network information and end-user location during the server selection process. As a result, CPs enhance their end-user-to-server mapping and improve end-user experience, thanks to the ability of network-informed server selection to circumvent network bottlenecks. In addition, ISPs gain the ability to partially influence the traffic demands in their networks. Our results with operational data show improvements in path length and delay between end-users and the assigned CP servers, network-wide traffic reduction of up to 15%, and a decrease in ISP link utilization of up to 40% when applying CaTE to traffic delivered by a small number of major CPs.
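    As an illustration of network-informed server selection of the kind CaTE performs (a sketch, not the paper's algorithm), the following Python snippet ranks a content provider's candidate servers for an end-user using ISP-supplied path metrics; the host names, delays, and utilization figures are placeholder values.

    # Candidate caches with the path delay and bottleneck-link utilization the
    # ISP could expose for each; all values are placeholders.
    candidates = [
        ("cache-a.example.net", 12.0, 0.85),
        ("cache-b.example.net", 18.0, 0.40),
        ("cache-c.example.net", 25.0, 0.30),
    ]

    def score(delay_ms: float, utilization: float, alpha: float = 0.5) -> float:
        """Lower is better: blend path delay with utilization so that selection
        can steer around congested links, not just pick the nearest server."""
        return alpha * delay_ms + (1 - alpha) * 100 * utilization

    best = min(candidates, key=lambda c: score(c[1], c[2]))
    print("selected server:", best[0])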