67 research outputs found

    Scaling Social Media Applications into Geo-Distributed Clouds

    Move Big Data to the Cloud: an Online Cost-Minimizing Approach

    Improving Anycast with Measurements

    Since the first Distributed Denial-of-Service (DDoS) attacks were launched, the strength of such attacks has been steadily increasing, from a few megabits per second to well into the terabit-per-second range. The damage that these attacks cause, mostly in terms of financial cost, has prompted researchers and operators alike to investigate and implement mitigation strategies. Examples of such strategies include local filtering appliances, Border Gateway Protocol (BGP)-based blackholing, and outsourced mitigation in the form of cloud-based DDoS protection providers. Some of these strategies are better suited to high-bandwidth DDoS attacks than others. For example, with a local filtering appliance, all attack traffic still passes through the owner's network, which inherently limits the capacity of such a device to the bandwidth available. BGP blackholing has no such limitation, but can, as a side effect, cause service disruptions for end-users. A different strategy, one that has attracted little attention in academia, is based on anycast. Anycast is a technique that allows operators to replicate a service across different physical locations while keeping that service addressable with a single IP address. It relies on BGP to balance users across sites, and in practice it is combined with other mitigation strategies to let them scale up: operators can use anycast to scale their mitigation capacity horizontally. Because anycast relies on BGP, and therefore in essence on the Internet itself, it can be difficult for network engineers to fine-tune this balancing behavior. In this thesis, we show that this is indeed the case through two case studies. In the first, we focus on an anycast service during normal operations, namely Google Public DNS, and show that routing within this service is far from optimal, for example in terms of distance between client and server. In the second case study, we observe the root DNS while it is under attack and show that, even though in aggregate the bandwidth available to this service exceeded the attack we observed, clients still experienced service degradation. This degradation occurred because some sites of the anycast service received a much larger share of the traffic than others. For operators to improve their anycast networks and optimize their resilience against DDoS attacks, a method to assess the actual state of such a network is required. Existing methodologies typically rely on external vantage points, such as those provided by RIPE Atlas, and are therefore limited in scale and inherently biased in their distribution. We propose a new measurement methodology, named Verfploeter, to assess the characteristics of anycast networks in terms of the client-to-Point-of-Presence (PoP) mapping, i.e. the anycast catchment. This method does not rely on external vantage points, is free of bias, and offers a much higher resolution than any previous method. We validated the methodology by deploying it on a locally developed testbed as well as on the B root DNS, and showed that its increased resolution improved our ability to assess the impact of changes in network configuration compared to previous methodologies. As a final validation, we implemented Verfploeter on Cloudflare's global-scale anycast Content Delivery Network (CDN), which has almost 200 Points-of-Presence worldwide and an aggregate bandwidth of 30 Tbit/s.
    Through three real-world use cases, we demonstrate the benefits of our methodology. First, we show that the changes that occur when withdrawing routes from certain PoPs can be accurately mapped, and that in certain cases the effect of taking down a combination of PoPs can be calculated from individual measurements. Second, we show that Verfploeter largely reinstates ping to its former glory, demonstrating how it can be used to troubleshoot network connectivity issues in an anycast context. Third, we demonstrate how accurate anycast catchment maps give operators a new and highly accurate tool to identify and filter spoofed traffic. Where possible, we make the datasets collected over the course of this research available as open-access data; the two best open dataset awards these datasets received confirm that they are a valued contribution. In summary, we have investigated two large anycast services and shown that their deployments are not optimal. We developed a novel measurement methodology that is free of bias and obtains highly accurate anycast catchment mappings. By implementing this methodology and deploying it on a global-scale anycast network, we show that our method adds significant value to the fast-growing anycast CDN industry and enables new ways of detecting, filtering, and mitigating DDoS attacks.
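
    As a rough illustration of the measurement trick at the heart of Verfploeter, the sketch below sends probes from the anycast address itself, so each ICMP echo reply is routed by BGP back to whichever PoP owns the sender's catchment; the PoP that captures a reply thereby reveals one client-to-PoP mapping. This is a minimal sketch of the idea as described above, not the authors' actual tool: the addresses, hitlist, and function names are illustrative assumptions, and scapy with raw-socket privileges is assumed.

        from scapy.all import IP, ICMP, send, sniff  # requires root privileges

        ANYCAST_IP = "192.0.2.1"                   # the service's anycast address (assumed)
        HITLIST = ["198.51.100.7", "203.0.113.9"]  # e.g. one responsive address per /24

        def send_probes():
            """Run at any single site: source-spoof the anycast address."""
            for target in HITLIST:
                send(IP(src=ANYCAST_IP, dst=target) / ICMP(type="echo-request"),
                     verbose=False)

        def collect_catchment(pop_name, timeout=30):
            """Run at every PoP: a reply arriving here means the sender
            falls inside this PoP's catchment."""
            replies = sniff(filter=f"icmp and dst host {ANYCAST_IP}", timeout=timeout)
            return {pkt[IP].src: pop_name
                    for pkt in replies if ICMP in pkt and pkt[ICMP].type == 0}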

    Rethinking Routing and Peering in the era of Vertical Integration of Network Functions

    Content providers typically control digital content consumption services and capture most of the revenue by implementing an all-you-can-eat model via subscriptions or hyper-targeted advertisements. The recent trend, revamping the existing Internet architecture and design, is vertical integration, in which a content provider and an access ISP act as a single body, a "sugarcane" form. As this vertical integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. Current routing is expected to need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery. This involves developing new features for handling traffic with reduced latency, tackling routing scalability issues more securely, and offering new services at lower cost. Considering that prices of DRAM and TCAM in legacy routers are not necessarily decreasing at the desired pace, cloud computing can be a compelling way to manage the increasing computation and memory complexity of routing functions in a centralized manner at optimized expense. Focusing on the attributes of existing routing cost models and exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether integrating cloud services with legacy routing would be economically beneficial. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of peering between the newly emerging content-dominated sugarcane ISPs and the health of Internet economics. To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering, from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuously monitoring for and reporting violations; one of the many outgrowths of vertical integration, it could be offered to ISPs as a standalone service, as sketched below.
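
    As a toy illustration of the first meta-peering step, identifying ISPs likely to peer, the sketch below shortlists candidates by shared IXP presence and a balanced traffic ratio, two common peering-policy inputs. The thresholds, data structures, and example networks are assumptions for illustration, not the paper's method.

        from dataclasses import dataclass

        @dataclass
        class ISP:
            name: str
            ixps: set            # IXPs where this ISP has a presence
            out_in_ratio: float  # outbound-to-inbound traffic ratio

        def peering_candidates(me, others, min_shared_ixps=2, max_ratio_gap=2.0):
            """Shortlist networks we co-locate with and whose traffic is balanced enough."""
            shortlist = []
            for isp in others:
                shared = me.ixps & isp.ixps
                ratios = sorted([me.out_in_ratio, isp.out_in_ratio])
                if len(shared) >= min_shared_ixps and ratios[1] / ratios[0] <= max_ratio_gap:
                    shortlist.append((isp.name, sorted(shared)))
            return shortlist

        me = ISP("sugarcane-isp", {"AMS-IX", "DE-CIX", "LINX"}, 1.1)
        others = [ISP("eyeball-net", {"AMS-IX", "DE-CIX"}, 0.7),
                  ISP("content-net", {"LINX"}, 9.0)]
        print(peering_candidates(me, others))  # -> [('eyeball-net', ['AMS-IX', 'DE-CIX'])]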

    Framework and Algorithms for Operator-Managed Content Caching

    We propose a complete framework for operator-driven content caching that can be applied equally to ISP-operated Content Delivery Networks (CDNs) and to future Information-Centric Networks (ICNs). In contrast to previous proposals in this area, our solution leverages operators' control over cache placement and content routing, considerably reducing network operating costs by minimizing transit traffic and balancing load among the available network resources. In addition, our solution provides two key advantages over previous proposals. First, it allows for a simple computation of the optimal cache placement. Second, it provides knobs for operators to fine-tune performance. We validate our design through both analytical modeling and trace-driven simulations and show that our proposed solution achieves, on average, twice as many cache hits as previously proposed techniques, without increasing delivery latency. In addition, we show that the proposed framework achieves 19-33% better load balancing across links and caching nodes while remaining robust to traffic spikes.
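
    One common way to realize operator-controlled placement and content routing of this kind is hash routing, in which each object is deterministically assigned a single on-net home cache, maximizing the effective cache size and cutting transit traffic. The sketch below uses weighted rendezvous hashing, where the weights act as a load-balancing tuning knob; it is an illustrative stand-in under stated assumptions, not necessarily the algorithm used in the paper.

        import hashlib
        import math

        def assign_cache(content_id, caches, weights):
            """Weighted rendezvous hashing: each object gets one deterministic
            on-net home, and capacity weights steer each cache's share."""
            def score(cache):
                digest = hashlib.sha256(f"{cache}:{content_id}".encode()).digest()
                u = (int.from_bytes(digest[:8], "big") + 0.5) / 2**64  # uniform in (0, 1)
                return -weights.get(cache, 1.0) / math.log(u)
            return max(caches, key=score)

        caches = ["pop-a", "pop-b", "pop-c"]
        weights = {"pop-b": 2.0}  # knob: pop-b has twice the capacity, so twice the share
        print(assign_cache("/videos/clip.mp4", caches, weights))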

    Dynamic service placement in geographically distributed clouds

    Large-scale online service providers increasingly rely on geographically distributed cloud infrastructures for service hosting and delivery. In this context, a key challenge faced by service providers is determining the locations where service applications should be placed such that the hosting cost is minimized while key performance requirements (e.g. response time) are assured. Furthermore, the dynamic nature of both demand patterns and infrastructure costs favors a dynamic solution to this problem. Most existing solutions for service placement either ignore these dynamics or fail to achieve both objectives at the same time. In this paper, we present a framework for dynamic service placement based on control- and game-theoretic models. In particular, we present a solution that dynamically optimizes the desired objective over time according to both demand and resource-price fluctuations. We further consider the case where multiple service providers compete for resources in a dynamic manner, and show that there is a Nash equilibrium solution that is socially optimal. Using simulations based on realistic topologies, demand, and resource prices, we demonstrate the effectiveness of our solution in realistic settings.
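
    The dynamic placement loop can be caricatured in a few lines: at each control interval, re-evaluate candidate sites against current resource prices and a latency constraint, and migrate only when the saving beats the reconfiguration cost. The greedy single-service sketch below, with assumed site names and prices, is for illustration only; the paper's control- and game-theoretic formulation is far more general.

        def replace_service(current_site, prices, latency_ok, move_cost):
            """One control interval: pick the cheapest feasible site, but only
            migrate when the price saving outweighs the reconfiguration penalty."""
            feasible = {site: p for site, p in prices.items() if latency_ok(site)}
            best = min(feasible, key=feasible.get)
            if current_site not in feasible:
                return best  # current site violates the response-time requirement
            if feasible[current_site] - feasible[best] > move_cost:
                return best
            return current_site

        prices = {"us-east": 1.2, "eu-west": 0.9, "ap-south": 1.5}  # $/hour, assumed
        site = replace_service("us-east", prices,
                               latency_ok=lambda s: s != "ap-south", move_cost=0.2)
        print(site)  # -> "eu-west": saving 0.3/h per interval beats the 0.2 penalty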

    Machine Learning for Next-generation Content Delivery Networks: Deployment, Content Placement, and Performance Management

    With the explosive demand for data and the growth in mobile users, content delivery networks (CDNs) face ever-increasing challenges to meet end-users' quality-of-experience requirements, ensure scalability, and remain cost-effective. These challenges encourage CDN providers to seek solutions among the new technologies available in today's computer networking domain. Network Function Virtualization (NFV) is a relatively new network service deployment technology that can reduce capital and operational costs while giving network operators flexibility and scalability. Thanks to NFV, network functions that previously could be offered only by specific hardware appliances can now run as Virtualized Network Functions (VNFs) on commodity servers or switches. Moreover, a network service can be flexibly deployed as a chain of VNFs, a structure known as the VNF Forwarding Graph (VNF-FG). Given these advantages, next-generation CDNs are expected to be deployed on NFV infrastructure. However, using NFV for service deployment is challenging because resource allocation in a shared infrastructure is not easy. Moreover, the integration of other paradigms (e.g., edge computing and vehicular networks) into CDNs compounds the complexity of content placement and performance management for next-generation CDNs. Because of their impact on the final service and end-user perceived quality, the challenges in service deployment, content placement, and performance management must be addressed carefully. In this thesis, advanced machine learning methods are used to provide algorithmic solutions to these challenges. Regarding deployment, we propose two deep reinforcement learning-based methods addressing the joint problems of VNF-FG composition and embedding, as well as function scaling and topology adaptation. For content placement, we propose a deep reinforcement learning-based approach for content migration in an edge-based CDN with vehicular nodes; the approach takes advantage of caching resources in the proximity of full local caches and efficiently migrates content at the edge of the network. Finally, for managing the performance quality of an operating CDN, we provide an unsupervised machine-learning anomaly detection method that uses clustering to enable easier performance analysis for next-generation CDNs. Each proposed method in this thesis is evaluated against state-of-the-art approaches and, where applicable, its optimality gap is investigated as well.
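
    A minimal sketch of clustering-based anomaly detection in this spirit: cluster per-interval KPI vectors and flag samples unusually far from their nearest centroid. The synthetic KPI stand-ins, the choice of k=3, and the 95th-percentile threshold below are illustrative assumptions, not the thesis's exact method.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        kpis = rng.normal(size=(500, 3))  # stand-in for (hit ratio, latency, throughput)
        kpis[:5] += 6.0                   # inject a few obvious outliers

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(kpis)
        dist = np.linalg.norm(kpis - km.cluster_centers_[km.labels_], axis=1)
        threshold = np.quantile(dist, 0.95)  # per-deployment tuning knob
        anomalies = np.where(dist > threshold)[0]
        print(f"flagged {len(anomalies)} intervals, e.g. {anomalies[:5]}")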