
    Framework and Algorithms for Operator-Managed Content Caching

We propose a complete framework for operator-driven content caching that applies equally to ISP-operated Content Delivery Networks (CDNs) and future Information-Centric Networks (ICNs). In contrast to previous proposals in this area, our solution leverages operators' control over cache placement and content routing, considerably reducing network operating costs by minimizing the amount of transit traffic and balancing load among available network resources. In addition, our solution provides two key advantages over previous proposals. First, it allows for a simple computation of the optimal cache placement. Second, it provides knobs for operators to fine-tune performance. We validate our design through both analytical modeling and trace-driven simulations and show that our proposed solution achieves on average twice as many cache hits as previously proposed techniques, without increasing delivery latency. In addition, we show that the proposed framework achieves 19-33% better load balancing across links and caching nodes, and is also robust to traffic spikes.
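To make the idea of operator-controlled content-to-cache assignment concrete, here is a minimal sketch of deterministic hash-based allocation, one plausible reading of coordinated cache placement and content routing. The cache names and the assignment rule are illustrative assumptions, not the paper's actual algorithms.

```python
import hashlib

CACHES = ["cache-a", "cache-b", "cache-c"]  # operator-deployed caching nodes (assumed)

def assign_cache(content_id: str, caches=CACHES) -> str:
    """Deterministically map a content item to one cache.

    Because every ingress point hashes the same way, requests for the same
    item always converge on the same cache, so each item is stored once and
    transit traffic towards origin servers is reduced.
    """
    digest = hashlib.sha256(content_id.encode()).hexdigest()
    return caches[int(digest, 16) % len(caches)]

if __name__ == "__main__":
    for cid in ("video/42", "video/43", "img/7"):
        print(cid, "->", assign_cache(cid))
```

A real operator-managed scheme would add the paper's tuning knobs on top of this, e.g. biasing the assignment to balance load across heterogeneous caches.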

    A control and management architecture supporting autonomic NFV services

The proposed control, orchestration and management (COM) architecture is presented from a high-level point of view; it enables the dynamic provisioning of services such as network data connectivity or generic network slicing instances based on virtual network functions (VNFs). The COM is based on Software Defined Networking (SDN) principles and is hierarchical, with a dedicated controller per technology domain. Alongside the SDN control plane for the provisioning of connectivity, an ETSI NFV management and orchestration system is responsible for the instantiation of Network Services, understood in this context as interconnected VNFs. A key, novel component of the COM architecture is the monitoring and data analytics (MDA) system, which collects monitoring data from the network, datacenters and applications, and whose outputs can be used to proactively reconfigure resources, thus adapting to future conditions such as load changes or degradations. To illustrate the COM architecture, a use case of a Content Delivery Network service taking advantage of the MDA's ability to collect and deliver monitoring data is experimentally demonstrated.
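A minimal sketch of the proactive step an MDA-like system could take: fit a trend to recent load samples and trigger a scale-out before a threshold is violated. The threshold value, sample window and scaling action are assumptions for illustration, not the paper's interfaces.

```python
import numpy as np

SCALE_OUT_THRESHOLD = 0.8  # fraction of capacity (assumed)

def forecast_load(samples, horizon=5):
    """Fit a linear trend to monitored load and extrapolate `horizon` steps ahead."""
    t = np.arange(len(samples))
    slope, intercept = np.polyfit(t, samples, 1)
    return slope * (len(samples) + horizon) + intercept

def maybe_reconfigure(samples):
    predicted = forecast_load(samples)
    if predicted > SCALE_OUT_THRESHOLD:
        return f"scale out: predicted load {predicted:.2f}"
    return "no action"

print(maybe_reconfigure([0.50, 0.55, 0.61, 0.66, 0.72]))  # rising trend -> scale out
```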

    Big Data-backed video distribution in the telecom cloud

Telecom operators are starting the deployment of Content Delivery Networks (CDNs) to better control and manage video content injected into the network. Cache nodes placed close to end users can manage content and adapt it to users' devices, while reducing video traffic in the core. By adopting the standardized MPEG-DASH technique, video content can be delivered over HTTP. Thus, HTTP servers can be used to serve content, while packagers running as software can prepare live content. This paves the way for virtualizing the CDN function. In this paper, a CDN manager is proposed to adapt the virtualized CDN function to current and future demand. A Big Data architecture, fulfilling the ETSI NFV guidelines, allows controlling virtualized components while collecting and pre-processing data. Optimization problems minimize CDN costs while ensuring the highest quality. Re-optimization is triggered by threshold violations; data stream mining sketches transform collected data into modeled data, and statistical linear regression and machine learning techniques are proposed to produce estimates of future scenarios. Exhaustive simulation over a realistic scenario reveals remarkable cost reductions from dynamically reconfiguring the CDN.
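As an illustration of threshold-triggered re-optimization, the toy loop below re-runs a sizing decision only when utilisation leaves an acceptable band. The capacity figure, thresholds and sizing rule are invented for the example; the paper's cost model is considerably richer.

```python
import math

CAPACITY_PER_INSTANCE = 10_000   # requests/s one virtual cache can serve (assumed)
HIGH, LOW = 0.9, 0.4             # utilisation thresholds triggering re-optimization

def required_instances(demand: float) -> int:
    """Cheapest instance count that still serves the demand."""
    return max(1, math.ceil(demand / CAPACITY_PER_INSTANCE))

def reoptimise(current: int, demand: float) -> int:
    utilisation = demand / (current * CAPACITY_PER_INSTANCE)
    if utilisation > HIGH or utilisation < LOW:   # threshold violation
        return required_instances(demand)          # re-run the sizing problem
    return current                                 # keep current configuration

print(reoptimise(current=3, demand=38_000))  # utilisation 1.27 -> scale out to 4
print(reoptimise(current=3, demand=25_000))  # utilisation 0.83 -> keep 3
```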

    Energy Efficiency of P2P and Distributed Clouds Networks

Since its inception, the Internet has witnessed two major approaches to communicating digital content to end users: peer-to-peer (P2P) and client/server (C/S) networks. Both approaches require high-bandwidth, low-latency physical underlying networks to meet users' escalating demands. Network operators typically have to overprovision their systems to guarantee acceptable quality of service (QoS) and availability while delivering content. However, more physical devices have led to more ICT power consumption over the years. An effective approach to confronting these challenges is to jointly optimise the energy consumption of content providers and transport networks. This thesis proposes a number of energy-efficient mechanisms to optimise BitTorrent-based P2P networks and cloud-based C/S content distribution over IP/WDM core optical networks. For P2P systems, a mixed integer linear programming (MILP) optimisation, two heuristics and an experimental testbed are developed to minimise the power consumption of IP/WDM networks that deliver traffic generated by an overlay layer of homogeneous BitTorrent users. The approach optimises peer selection, where the goal is to minimise IP/WDM network power consumption while maximising peers' download rates. The results are compared to typical C/S systems. We also considered heterogeneous BitTorrent peers and developed models that optimise P2P systems to compensate for different peer behaviour after finishing downloading. We investigated the impact of core network physical topology on the energy efficiency of BitTorrent systems. We also investigated the power consumption of Video on Demand (VoD) services using CDN, P2P and hybrid CDN-P2P architectures over IP/WDM networks, and addressed content providers' efforts to balance the load among their data centres. For cloud systems, a MILP and a heuristic were developed to minimise the power consumption induced by content delivery in both the clouds and the IP/WDM network. This was done by optimally determining the number, location and internal capability (in terms of servers, LAN and storage) of each cloud, subject to daily traffic variation. Different replication schemes were studied, revealing that replicating content into multiple clouds based on content popularity is the optimal approach with respect to energy. The model was extended to study Storage as a Service (StaaS). We also studied the problem of virtual machine placement in IP/WDM networks and showed that VM slicing is the best approach, compared to migration and replication schemes, to minimise energy. Finally, we investigated the utilisation of renewable energy sources, represented by solar cells and wind farms, in BitTorrent networks and content delivery clouds, respectively. Comprehensive modelling and simulation as well as experimental demonstrations were developed, leading to key contributions in the field of energy-efficient telecommunications.
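A tiny MILP in the spirit of the thesis's replica-placement formulations, sketched with the PuLP solver library: decide which clouds host a content replica so that total hosting plus transport power is minimised while every demand node is served. The topology and power figures are invented for illustration; the thesis's actual models account for IP/WDM network elements in far more detail.

```python
import pulp

clouds = ["c1", "c2", "c3"]
demands = ["d1", "d2"]
host_power = {"c1": 50.0, "c2": 40.0, "c3": 45.0}   # W per hosted replica (assumed)
transport_power = {                                   # W per (demand, cloud) path (assumed)
    ("d1", "c1"): 5, ("d1", "c2"): 20, ("d1", "c3"): 9,
    ("d2", "c1"): 15, ("d2", "c2"): 4, ("d2", "c3"): 11,
}

prob = pulp.LpProblem("replica_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("host", clouds, cat="Binary")            # replica at cloud?
y = pulp.LpVariable.dicts("serve", transport_power, cat="Binary")  # demand served by cloud?

# Objective: total power = hosting power + transport power
prob += pulp.lpSum(host_power[c] * x[c] for c in clouds) + \
        pulp.lpSum(transport_power[d, c] * y[d, c] for (d, c) in transport_power)

for d in demands:  # every demand is served by exactly one cloud...
    prob += pulp.lpSum(y[d, c] for c in clouds) == 1
for (d, c) in transport_power:  # ...and only by a cloud that hosts a replica
    prob += y[d, c] <= x[c]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([c for c in clouds if x[c].value() == 1])  # -> ['c2'] for this toy instance
```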

    Design of Overlay Networks for Internet Multicast - Doctoral Dissertation, August 2002

Multicast is an efficient transmission scheme for supporting group communication in networks. Contrasted with unicast, where multiple point-to-point connections must be used to support communication among a group of users, multicast is more efficient because each data packet is replicated in the network at the branching points leading to distinct destinations, thus reducing the transmission load on the data sources and the traffic load on the network links. To implement multicast, networks need to incorporate new routing and forwarding mechanisms in addition to the existing ones; these are not adequately supported in current networks. The IP multicast solution has serious scaling and deployment limitations, and cannot be easily extended to provide more enhanced data services. Furthermore, and perhaps most importantly, IP multicast has ignored the economic nature of the problem, lacking incentives for service providers to deploy the service in wide area networks. Overlay multicast holds promise for the realization of large-scale Internet multicast services. An overlay network is a virtual topology constructed on top of the Internet infrastructure. The concept of overlay networks enables multicast to be deployed as a service network rather than a primitive network mechanism, allowing deployment over heterogeneous networks without the need for universal network support. This dissertation addresses the network design aspects of overlay networks for providing scalable multicast services in the Internet. The resources and the network cost in the context of overlay networks are different from those in conventional networks, presenting new challenges and new problems to solve. Our design goals are the maximization of network utility and improved service quality. As the overall network design problem is extremely complex, we divide it into three components: the efficient management of session traffic (multicast routing), the provisioning of overlay network resources (bandwidth dimensioning) and overlay topology optimization (service placement). The combined solution provides a comprehensive procedure for planning and managing an overlay multicast network. We also consider a complementary form of overlay multicast called application-level multicast (ALMI). ALMI allows end systems to directly create an overlay multicast session among themselves. This gives applications the flexibility to communicate without relying on service providers. The tradeoff is that users do not have direct control over the topology and data paths taken by the session flows, and will typically get lower quality of service due to the best-effort nature of the Internet environment. ALMI is therefore suitable for sessions of small size or sessions where all members are well connected to the network. Furthermore, the ALMI framework allows us to experiment with application-specific components, such as data reliability, in order to identify a useful set of communication semantics for enhanced data services.
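One classic building block for overlay multicast routing is constructing a low-cost distribution tree over the virtual topology. The sketch below runs Prim's algorithm on measured overlay link costs (e.g. latency); the graph is invented for illustration, and the dissertation's routing and dimensioning algorithms are substantially richer than a plain spanning tree.

```python
import heapq

overlay = {  # symmetric overlay link costs between session members (assumed)
    "src": {"a": 10, "b": 25},
    "a": {"src": 10, "b": 7, "c": 12},
    "b": {"src": 25, "a": 7, "c": 9},
    "c": {"a": 12, "b": 9},
}

def multicast_tree(graph, root):
    """Return tree edges (parent, child) spanning all overlay members."""
    in_tree, edges = {root}, []
    frontier = [(w, root, v) for v, w in graph[root].items()]
    heapq.heapify(frontier)
    while frontier:
        w, u, v = heapq.heappop(frontier)
        if v in in_tree:
            continue  # already reached via a cheaper overlay path
        in_tree.add(v)
        edges.append((u, v))
        for nxt, w2 in graph[v].items():
            if nxt not in in_tree:
                heapq.heappush(frontier, (w2, v, nxt))
    return edges

print(multicast_tree(overlay, "src"))  # -> [('src', 'a'), ('a', 'b'), ('b', 'c')]
```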

    Overlay networks for smart grids


    A framework for the dynamic management of Peer-to-Peer overlays

Peer-to-Peer (P2P) applications have been associated with inefficient operation, interference with other network services and large operational costs for network providers. This thesis presents a framework which can help ISPs address these issues by means of intelligent management of peer behaviour. The proposed approach involves limited control of P2P overlays without interfering with the fundamental characteristics of peer autonomy and decentralised operation. At the core of the management framework lies the Active Virtual Peer (AVP). Essentially intelligent peers operated by the network providers, AVPs interact with the overlay from within, minimising redundant or inefficient traffic, enhancing overlay stability and facilitating the efficient and balanced use of available peer and network resources. They offer an "insider's" view of the overlay and permit the management of P2P functions in a compatible and non-intrusive manner. AVPs can support multiple P2P protocols and coordinate to perform functions collectively. To account for the multi-faceted nature of P2P applications and allow the incorporation of modern techniques and protocols as they appear, the framework is based on a modular architecture. Core modules for overlay control and transit traffic minimisation are presented. Towards the latter, a number of suitable P2P content caching strategies are proposed. Using a purpose-built P2P network simulator and small-scale experiments, it is demonstrated that the introduction of AVPs inside the network can significantly reduce inter-AS traffic, minimise costly multi-hop flows, increase overlay stability and load balancing, and offer improved peer transfer performance.
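As a rough illustration of the kind of locality-aware peer selection an AVP could enforce to cut inter-AS transit traffic, the sketch below prefers peers in the home AS and falls back to remote peers only when needed. The data model, AS number and ranking rule are assumptions for illustration, not the thesis's AVP interface.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    peer_id: str
    asn: int          # autonomous system the peer resides in
    upload_kbps: int  # advertised upload capacity

HOME_AS = 64512  # the managed ISP's AS number (example private ASN)

def select_peers(candidates, want=3):
    """Rank home-AS peers first; break ties by upload capacity."""
    ranked = sorted(candidates,
                    key=lambda p: (p.asn != HOME_AS, -p.upload_kbps))
    return ranked[:want]

peers = [Peer("p1", 64512, 300), Peer("p2", 3356, 900),
         Peer("p3", 64512, 800), Peer("p4", 1299, 400)]
print([p.peer_id for p in select_peers(peers)])  # -> ['p3', 'p1', 'p2']
```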

    A quantitative survey of the power saving potential in IP-Over-WDM backbone networks

The power consumption of Information and Communication Technology networks is growing year by year; this growth presents challenges from technical, economic and environmental points of view. It has led to a great number of research publications on "green" telecommunication networks, and a number of survey works have appeared in response. However, with respect to backbone networks, most surveys 1) do not allow for easy cross-validation of the savings reported in the various works and 2) do not provide a clear overview of the individual and combined power saving potentials. Therefore, in this paper, we survey the reported saving potential in IP-over-WDM backbone telecommunication networks across the existing body of research in that area. We do this by mapping more than ten different approaches to a concise analytical model, which allows us to estimate the combined power reduction potential. Our estimates indicate that the power reduction potential of the once-only approaches is 2.3x in a Moderate Effort scenario and 31x in a Best Effort scenario. Factoring in the historic and projected yearly efficiency improvements ("Moore's law") roughly doubles both values on a ten-year horizon. The large difference between the outcomes of the Moderate Effort and Best Effort scenarios is explained by the disparity and lack of clarity of the reported saving results, and by our (partly) subjective assessment of the feasibility of the proposed approaches. The Moderate Effort scenario will not be sufficient to counter the projected traffic growth, although the Best Effort scenario indicates that sufficient potential is likely available. The largest isolated power reduction potential lies in reducing the power associated with cooling and power provisioning and in applying sleep modes to overdimensioned equipment.
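As a rough worked example of how such an analytical model can combine savings, assume the individual factors are independent and multiply (an assumption on our part; the paper's exact model may differ):

$$
F_{\text{combined}} = \prod_i f_i, \qquad
F_{10\,\text{yr}} \approx F_{\text{combined}} \cdot r^{10}
$$

With an assumed yearly efficiency gain of $r \approx 1.07$, $r^{10} \approx 2$, so the scenario totals become roughly $2.3 \times 2 \approx 4.6$ (Moderate Effort) and $31 \times 2 \approx 62$ (Best Effort), consistent with the "roughly doubles" statement above.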

    On the design of efficient caching systems

Content distribution is currently the prevalent Internet use case, accounting for the majority of global Internet traffic and growing exponentially. There is general consensus that the most effective way to deal with the large amount of content demand is the deployment of massively distributed caching infrastructures that localise content delivery traffic. Solutions based on caching have already been widely deployed through Content Delivery Networks. Ubiquitous caching is also a fundamental aspect of the emerging Information-Centric Networking paradigm, which aims to rethink the current Internet architecture for long-term evolution. Distributed content caching systems are expected to grow substantially in the future, in terms of both footprint and traffic carried, and as such will become substantially more complex and costly. This thesis addresses the problem of designing scalable and cost-effective distributed caching systems able to efficiently support the expected massive growth of content traffic, and makes three distinct contributions. First, it produces an extensive theoretical characterisation of sharding, a widely used technique for allocating data items to the resources of a distributed system according to a hash function. Based on the findings of this analysis, two systems are designed that contribute to the above objective. The first is a framework and related algorithms for efficient load-balanced content caching. This solution provides qualitative advantages over previously proposed solutions, such as ease of modelling and the availability of knobs to fine-tune performance, as well as quantitative advantages, such as a 2x increase in cache hit ratio and a 19-33% reduction in load imbalance while maintaining latency comparable to other approaches. The second is the design and implementation of a caching node that achieves 20 Gbps speeds on inexpensive commodity hardware. We believe these contributions significantly advance the state of the art in distributed caching systems.
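A small empirical companion to the sharding characterisation: hash a population of items onto a fixed number of shards and measure the resulting load imbalance (max load over mean load). The item population, shard count and imbalance metric are chosen for illustration; the thesis derives this behaviour analytically.

```python
import hashlib
from collections import Counter

def shard(item: str, n_shards: int) -> int:
    """Allocate an item to a shard via a hash function."""
    return int(hashlib.md5(item.encode()).hexdigest(), 16) % n_shards

def imbalance(n_items: int, n_shards: int) -> float:
    """Max/mean shard load; 1.0 would be perfectly balanced."""
    loads = Counter(shard(f"item-{i}", n_shards) for i in range(n_items))
    mean = n_items / n_shards
    return max(loads.values()) / mean

for n in (1_000, 100_000):
    print(f"{n} items over 16 shards: max/mean = {imbalance(n, 16):.2f}")
```

Running this shows the imbalance shrinking as the item population grows, which is the kind of effect a theoretical treatment of sharding must capture.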