    Characterizing a Meta-CDN

    CDNs have reshaped the Internet architecture at large. They operate (globally) distributed networks of servers to reduce latency, increase content availability, and handle large traffic bursts. Traditionally, content providers were mostly limited to a single CDN operator. In recent years, however, more and more content providers employ multiple CDNs to serve the same content and provide the same services. Switching between CDNs thus becomes an important task: it can reduce costs, select the best-performing CDN for each geographic region, or overcome CDN-specific outages. Services that tackle this task have emerged, known as CDN brokers, Multi-CDN selectors, or Meta-CDNs. Despite their existence, little is known about Meta-CDN operation in the wild. In this paper, we shed light on this topic by dissecting a major Meta-CDN. Our analysis provides insights into its infrastructure, its operation in practice, and its usage by Internet sites. We leverage PlanetLab and RIPE Atlas as distributed infrastructures to study how a Meta-CDN impacts web latency.
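
    A minimal, hedged sketch of the kind of probe such a study builds on: from a single vantage point, resolve a hostname, inspect the returned canonical name to guess which CDN the Meta-CDN currently maps it to, and time TCP connects to the resolved address. The hostname, port, and sample count below are illustrative assumptions; the paper itself relies on PlanetLab and RIPE Atlas as distributed vantage points rather than one host.

        import socket
        import time

        # Hypothetical hostname fronted by a Meta-CDN (placeholder, not taken from the paper).
        HOSTNAME = "www.example.com"

        def probe(hostname, port=443, samples=5):
            # gethostbyname_ex exposes the canonical name, whose suffix often reveals
            # the CDN the site is currently mapped to (e.g. *.akamaiedge.net,
            # *.fastly.net), together with the resolved addresses.
            canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
            latencies = []
            for _ in range(samples):
                start = time.perf_counter()
                with socket.create_connection((addresses[0], port), timeout=5):
                    latencies.append((time.perf_counter() - start) * 1000.0)
            return canonical, aliases, addresses, latencies

        canonical, aliases, addresses, latencies = probe(HOSTNAME)
        print("CNAME chain:", aliases, "->", canonical)
        print("A records:  ", addresses)
        print("TCP connect latency (ms): min=%.1f median=%.1f"
              % (min(latencies), sorted(latencies)[len(latencies) // 2]))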

    The State of Network Neutrality Regulation

    The Network Neutrality (NN) debate refers to the battle over the design of a regulatory framework for preserving the Internet as a public network and open innovation platform. Fueled by concerns that broadband access service providers might abuse network management to discriminate against third-party providers (e.g., content or application providers), policymakers have struggled to design rules that would protect the Internet from unreasonable network management practices. In this article, we provide an overview of the history of the debate in the U.S. and the EU and highlight the challenges that will confront network engineers designing and operating networks as the debate continues to evolve.
    Funding: BMBF, 16DII111, joint project: Weizenbaum-Institut für die vernetzte Gesellschaft - Das Deutsche Internet-Institut; subproject: Wissenschaftszentrum Berlin für Sozialforschung (WZB). EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNe

    Steering hyper-giants' traffic at scale

    Large content providers, known as hyper-giants, are responsible for sending the majority of content traffic to consumers. These hyper-giants operate highly distributed infrastructures to cope with the ever-increasing demand for online content. To achieve commercial-grade performance of Web applications, enhanced end-user experience, improved reliability, and scaled network capacity, hyper-giants are increasingly interconnecting with eyeball networks at multiple locations. This poses new challenges for both (1) the eyeball networks, which have to perform complex inbound traffic engineering, and (2) the hyper-giants, which have to map end-user requests to appropriate servers. We report on our multi-year experience in designing, building, rolling out, and operating the first-ever large-scale system, the Flow Director, which enables automated cooperation between one of the largest eyeball networks and a leading hyper-giant. We use empirical data collected at the eyeball network to evaluate its impact over two years of operation. We find very high compliance of the hyper-giant with the Flow Director's recommendations, resulting in (1) close to optimal user-server mapping and (2) a 15% reduction of the hyper-giant's traffic overhead on the ISP's long-haul links, i.e., benefits for both parties and end-users alike.
    Funding: EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNe
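
    The following toy sketch is not the Flow Director itself, only an illustration of the mapping problem it automates: assign user regions to the hyper-giant's ingress points so that traffic enters the eyeball network close to the users without exceeding per-ingress capacity. All ingress names, capacities, costs, and demands below are invented for illustration, and the greedy heuristic merely stands in for whatever optimization the real system performs.

        # Minimal sketch (not the actual Flow Director): greedily map user regions to
        # the hyper-giant's ingress points so traffic enters the eyeball network close
        # to the users, subject to per-ingress capacity. All names/numbers are assumed.

        ingress_capacity_gbps = {"FRA": 400, "MUC": 200, "HAM": 150}

        # Assumed cost (e.g. long-haul km or IGP metric) from each ingress to each
        # user region, plus each region's expected demand in Gbps.
        cost = {
            ("FRA", "south"): 10, ("MUC", "south"): 2,  ("HAM", "south"): 25,
            ("FRA", "north"): 15, ("MUC", "north"): 25, ("HAM", "north"): 3,
            ("FRA", "west"):  3,  ("MUC", "west"):  12, ("HAM", "west"):  14,
        }
        demand_gbps = {"south": 180, "north": 120, "west": 250}

        def recommend_mapping(capacity, cost, demand):
            remaining = dict(capacity)
            mapping = {}
            # Serve the largest regions first; pick the cheapest ingress with room left.
            for region in sorted(demand, key=demand.get, reverse=True):
                candidates = [i for i in remaining if remaining[i] >= demand[region]]
                best = min(candidates, key=lambda i: cost[(i, region)])
                mapping[region] = best
                remaining[best] -= demand[region]
            return mapping

        print(recommend_mapping(ingress_capacity_gbps, cost, demand_gbps))
        # -> {'west': 'FRA', 'south': 'MUC', 'north': 'HAM'} with the numbers above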

    Stochastic Dynamic Cache Partitioning for Encrypted Content Delivery

    In-network caching is an appealing solution to cope with the increasing bandwidth demand of video, audio, and data transfer over the Internet. Nonetheless, an increasing share of content delivery services adopt encryption through HTTPS, which is not compatible with traditional ISP-managed approaches like transparent and proxy caching. This raises the need for solutions involving both Internet Service Providers (ISPs) and Content Providers (CPs): by design, the solution should preserve business-critical CP information (e.g., content popularity, user preferences) on the one hand, while allowing for a deeper integration of caches in the ISP architecture (e.g., in 5G femto-cells) on the other hand. In this paper we address this issue by considering a content-oblivious ISP-operated cache. The ISP allocates the cache storage to the various content providers so as to maximize the bandwidth savings provided by the cache: the main novelty lies in the fact that, to protect business-critical information, the ISP only needs to measure the aggregated miss rates of the individual CPs and does not need to be aware of the objects being requested, as in classic caching. We propose a cache allocation algorithm based on a perturbed stochastic subgradient method, and prove that the algorithm converges close to the allocation that maximizes the overall cache hit rate. We use extensive simulations to validate the algorithm and to assess its convergence rate under stationary and non-stationary content popularity. Our results (i) attest to the feasibility of content-oblivious caches and (ii) show that the proposed algorithm gets within 10% of the global optimum in our evaluation.
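
    A compact sketch of the allocation idea under stated assumptions: the ISP repeatedly perturbs each content provider's cache slice, observes only the resulting aggregated hit rates, forms a finite-difference gradient estimate, and projects the updated allocation back onto the capacity constraint. The synthetic hit-rate curves, noise level, step size, and perturbation size below are assumptions, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative perturbed stochastic (sub)gradient allocator: the ISP sees only
        # each CP's aggregated hit rate, never the requested objects.
        C = 100.0                               # total cache capacity (GB, assumed)
        load  = np.array([5.0, 2.0, 3.0])       # per-CP request volume (assumed)
        scale = np.array([60.0, 30.0, 10.0])    # cache needed to saturate each CP (assumed)

        def observed_hit_rate(theta):
            # Noisy, concave per-CP hit-rate measurement (stand-in for real counters).
            return load * (1.0 - np.exp(-theta / scale)) + rng.normal(0.0, 0.05, theta.shape)

        def project_simplex(v, total):
            # Euclidean projection onto {x >= 0, sum(x) = total}.
            u = np.sort(v)[::-1]
            css = np.cumsum(u) - total
            rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
            return np.maximum(v - css[rho] / (rho + 1), 0.0)

        theta, delta, step = np.full(3, C / 3), 3.0, 20.0
        for _ in range(2000):
            grad = np.zeros_like(theta)
            for p in range(len(theta)):          # perturb one CP slice at a time
                e = np.zeros_like(theta); e[p] = delta
                grad[p] = (observed_hit_rate(theta + e)[p]
                           - observed_hit_rate(theta - e)[p]) / (2 * delta)
            theta = project_simplex(theta + step * grad, C)

        print("cache split per CP:", np.round(theta, 1), "| total:", round(theta.sum(), 1))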

    Privacy issues of ISPs in the modern web

    In recent years, privacy issues in the networking field have become more important. In particular, there is a lively debate about how Internet Service Providers (ISPs) should collect and treat data coming from passive network measurements. This kind of information, such as flow records or HTTP logs, carries considerable knowledge from several points of view: traffic engineering, academic research, and web marketing can all benefit from passive network measurements on ISP customers. Nevertheless, in many cases the collected measurements contain personal and confidential information about the customers exposed to monitoring, which raises several ethical issues. The modern web is very different from the one we experienced a few years ago: web services have converged to a few protocols (i.e., HTTP and HTTPS) and a large share of traffic is encrypted. The aim of this work is to provide insight into which information is still visible to ISPs, with particular attention to novel and emerging protocols, and to what extent it carries personal information. We show that sensitive information, such as website history, is still exposed to passive monitoring. We discuss the privacy and ethical issues arising from the current situation and provide general guidelines and best practices to cope with the collection of network traffic measurements.
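
    One concrete reason website history remains visible despite HTTPS is the Server Name Indication (SNI) field, which, absent Encrypted Client Hello, crosses the wire in cleartext inside the TLS ClientHello. The sketch below builds a minimal synthetic ClientHello and parses the server name back out, roughly the way a passive monitor could; the hostname and cipher suite are placeholders, and real handshakes carry many more extensions.

        import struct

        def build_client_hello(hostname):
            # Minimal synthetic ClientHello carrying only an SNI extension.
            name = hostname.encode()
            sni_entry = b"\x00" + struct.pack("!H", len(name)) + name        # type 0 = host_name
            sni_list = struct.pack("!H", len(sni_entry)) + sni_entry
            sni_ext = struct.pack("!HH", 0x0000, len(sni_list)) + sni_list   # extension 0 = SNI
            exts = struct.pack("!H", len(sni_ext)) + sni_ext
            body = (b"\x03\x03" + bytes(32)                 # client_version + random
                    + b"\x00"                               # empty session_id
                    + struct.pack("!H", 2) + b"\x13\x01"    # one cipher suite
                    + b"\x01\x00"                           # one (null) compression method
                    + exts)
            handshake = b"\x01" + len(body).to_bytes(3, "big") + body        # ClientHello
            return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake

        def extract_sni(record):
            # Skip record header (5), handshake header (4), version + random (34).
            i = 5 + 4 + 34
            i += 1 + record[i]                               # session_id
            i += 2 + struct.unpack("!H", record[i:i+2])[0]   # cipher_suites
            i += 1 + record[i]                               # compression_methods
            i += 2                                           # extensions total length
            while i < len(record):
                ext_type, ext_len = struct.unpack("!HH", record[i:i+4]); i += 4
                if ext_type == 0x0000:                       # server_name extension
                    name_len = struct.unpack("!H", record[i+3:i+5])[0]
                    return record[i+5:i+5+name_len].decode()
                i += ext_len
            return None

        print(extract_sni(build_client_hello("www.example.org")))   # -> www.example.org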

    The growing complexity of content delivery networks: Challenges and implications for the Internet ecosystem

    Since the commercialization of the Internet, content and related applications, including video streaming, news, advertisements, and social interaction, have moved online. It is broadly recognized that the rise of all of these different types of content (static and dynamic, and increasingly multimedia) has been one of the main forces behind the phenomenal growth of the Internet and its emergence as essential infrastructure for how individuals across the globe gain access to the content sources they want. To accelerate the delivery of diverse content on the Internet and to provide commercial-grade performance for video delivery and the Web, Content Delivery Networks (CDNs) were introduced. This paper describes the current CDN ecosystem and the forces that have driven its evolution. We outline the different CDN architectures and consider their relative strengths and weaknesses. Our analysis highlights the role of location, the growing complexity of the CDN ecosystem, and their relationship to and implications for interconnection markets.
    Funding: EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNe

    Service-centric networking for distributed heterogeneous clouds

    Optimal placement and selection of service instances in a distributed heterogeneous cloud is a complex trade-off between application requirements and resource capabilities that requires detailed information on the service, the infrastructure constraints, and the underlying IP network. In this article we first posit, based on an analysis of a snapshot of today's centralized and regional data center infrastructure, that there is a sufficient number of candidate sites for deploying many services while meeting latency and bandwidth constraints. We then provide quantitative arguments for why both network and hardware performance need to be taken into account when selecting candidate sites to deploy a given service. Finally, we propose a novel architectural solution for service-centric networking. The resulting system exploits the availability of fine-grained execution nodes across the Internet and uses knowledge of available computational and network resources for deploying, replicating, and selecting instances to optimize quality of experience for a wide range of services.
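
    A small illustrative sketch of the selection trade-off described above, under assumed numbers: feasible sites must satisfy both the latency bound and the hardware requirements, and the final choice weighs network proximity against remaining compute headroom. Site names, resource figures, and the weighting are hypothetical and not part of the article.

        # Minimal sketch: a service instance may only be placed on sites that satisfy
        # the network constraint (latency) and the hardware constraint (CPU/RAM left);
        # among those, the "best" site depends on both dimensions. All figures assumed.

        sites = {
            "edge-mad":     {"latency_ms": 8,  "free_vcpus": 4,   "free_ram_gb": 8},
            "regional-par": {"latency_ms": 22, "free_vcpus": 32,  "free_ram_gb": 128},
            "central-fra":  {"latency_ms": 41, "free_vcpus": 256, "free_ram_gb": 1024},
        }

        service = {"max_latency_ms": 30, "vcpus": 8, "ram_gb": 16}

        def candidate_sites(sites, svc):
            return {
                name: s for name, s in sites.items()
                if s["latency_ms"] <= svc["max_latency_ms"]
                and s["free_vcpus"] >= svc["vcpus"]
                and s["free_ram_gb"] >= svc["ram_gb"]
            }

        def select_site(sites, svc, w_latency=1.0, w_load=0.5):
            # Rank feasible sites by latency plus a penalty for eating scarce capacity.
            ranked = sorted(
                candidate_sites(sites, svc).items(),
                key=lambda kv: w_latency * kv[1]["latency_ms"]
                               + w_load * svc["vcpus"] / kv[1]["free_vcpus"] * 100,
            )
            return ranked[0][0] if ranked else None

        print(select_site(sites, service))   # -> 'regional-par' under these assumptions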

    Predictive CDN Selection for Video Delivery Based on LSTM Network Performance Forecasts and Cost-Effective Trade-Offs

    Owing to the increasing consumption of video streams and the demand for higher-quality content and more advanced displays, future telecommunication networks are expected to outperform current networks in terms of key performance indicators (KPIs). Currently, content delivery networks (CDNs) are used to enhance media availability and delivery performance across the Internet in a cost-effective manner. The proliferation of CDN vendors and business models allows the content provider (CP) to use multiple CDN providers simultaneously. However, extreme concurrency dynamics can affect CDN capacity, causing performance degradation and outages, while overestimated demand inflates costs. 5G standardization communities envision advanced network functions executing video analytics to enhance or boost media services. Network accelerators are required to enforce CDN resilience and efficient utilization of CDN assets. In this regard, this study investigates a cost-effective service that dynamically selects the CDN for each session and video segment at the Media Server, without requiring any modification to the video streaming pipeline. This service performs time series forecasts by employing a Long Short-Term Memory (LSTM) network to process real-time measurements coming from connected video players. It also ensures reliable and cost-effective content delivery through proactive selection of the CDN that fits the performance and business constraints. To this end, the proposed service predicts the number of players that can be served by each CDN at each time; then, it switches the required players between CDNs to maintain Quality of Service (QoS) levels or to reduce the CP's operational expenditure (OPEX). The proposed solution is evaluated with a real server, CDNs, and players delivering Dynamic Adaptive Streaming over HTTP (MPEG-DASH), where clients are notified to switch to another CDN through a standard MPEG-DASH media presentation description (MPD) update mechanism.
    Funding: This work was supported in part by the EC projects Fed4Fire+, under Grant 732638 (H2020-ICT-13-2016, Research and Innovation Action), and in part by the Open-VERSO project (Red Cervera Program, Spanish Government's Centre for the Development of Industrial Technology).
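
    A hedged sketch, not the paper's exact model: a small Keras LSTM trained on a synthetic per-CDN QoS time series (standing in for the real-time player measurements) to forecast the next interval, which a selector could then compare against session bitrates and CDN cost before triggering an MPD update. The window length, network size, and training settings are illustrative assumptions.

        import numpy as np
        import tensorflow as tf

        # Synthetic per-CDN QoS series (e.g. measured throughput reported by players).
        rng = np.random.default_rng(1)
        series = 20 + 5 * np.sin(np.linspace(0, 20 * np.pi, 2000)) + rng.normal(0, 0.5, 2000)

        window = 30                                   # look-back window, assumed
        X = np.stack([series[i:i + window] for i in range(len(series) - window)])
        y = series[window:]
        X = X[..., None]                              # shape: (samples, timesteps, 1)

        model = tf.keras.Sequential([
            tf.keras.Input(shape=(window, 1)),
            tf.keras.layers.LSTM(32),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X, y, epochs=3, batch_size=64, verbose=0)

        forecast = float(model.predict(X[-1:], verbose=0)[0, 0])
        print(f"forecast for next interval: {forecast:.1f} Mbps")

        # A selector could keep each session on the cheapest CDN whose forecast still
        # satisfies the session's bitrate, and switch it via an MPD update otherwise.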