15 research outputs found

    Energy-Efficient Service Function Chain Provisioning

    Get PDF
    Network Function Virtualization (NFV) is a promising network architecture concept for reducing operational costs. In legacy networks, network functions, such as firewalls or TCP optimizers, are performed by dedicated hardware. In networks enabling NFV coupled with the Software Defined Network (SDN) paradigm, network functions can be implemented dynamically on generic hardware. This is of primary interest for implementing energy-efficient solutions, which dynamically adapt resource usage to demand. In this paper, we study how to use NFV coupled with SDN to improve the energy efficiency of networks. We consider a setting in which a flow has to go through a Service Function Chain, that is, several network functions in a specific order. We propose a decomposition model that relies on chaining and function-placement configurations to solve the problem. We show that virtualization yields between 15% and 62% energy savings for networks of different sizes.
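    The chaining constraint at the heart of this problem can be illustrated with a small check: a flow's path is feasible only if it traverses the chain's functions in order. This is a hypothetical sketch of the constraint, not the authors' decomposition model; all names are illustrative.

```python
def respects_chain(path, placement, chain):
    """Check that walking `path` traverses the functions of `chain` in order.

    path: list of node ids; placement: dict node -> set of functions
    hosted there; chain: ordered list of required functions.
    """
    i = 0  # index of the next function still to be traversed
    for node in path:
        # A node may host several consecutive functions of the chain.
        while i < len(chain) and chain[i] in placement.get(node, set()):
            i += 1
    return i == len(chain)

# A flow routed a -> b -> c must cross a firewall, then a TCP optimizer.
placement = {"a": {"firewall"}, "c": {"tcp_opt"}}
print(respects_chain(["a", "b", "c"], placement, ["firewall", "tcp_opt"]))  # True
print(respects_chain(["c", "b", "a"], placement, ["firewall", "tcp_opt"]))  # False
```

    The energy-aware variant then searches, among all order-respecting paths and placements, for the one that lets the most links and nodes be powered off.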

    Power-Aware Routing and Network Design with Bundled Links: Solutions and Analysis

    Get PDF
    The paper analyzes in depth a novel network-wide power management problem, called Power-Aware Routing and Network Design with Bundled Links (PARND-BL), which takes into account the relationship between the power consumption and the traffic throughput of the nodes, and can power off both the chassis and even the individual Physical Interface Cards (PICs) composing each link. The solutions of the PARND-BL model have been analyzed with respect to different aspects of their applicability in real network scenarios: (i) the time to obtain the solution, (ii) the deployed network topology and the resulting topology provided by the solution, (iii) the power behavior of the network elements, (iv) the traffic load, (v) the QoS requirements, and (vi) the number of paths used to route each traffic demand. Among the most interesting and novel results, our analysis shows that minimizing the number of powered-on network elements through traffic consolidation does not always produce power savings, and that solving this kind of problem can, in some cases, lead to splitting a single traffic demand across a large number of paths.
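    The PIC-level power-off decision can be sketched as a simple power model: an idle bundle is powered off entirely, and a loaded bundle keeps its chassis on plus only as many PICs as the load requires. This is a minimal illustrative sketch, with made-up constants, not the model from the paper.

```python
import math

def bundle_power(load, pic_capacity, n_pics, chassis_w, pic_w):
    """Power drawn by one bundled link: power off the whole bundle when
    idle, otherwise keep the chassis on plus just enough PICs for `load`."""
    if load == 0:
        return 0.0
    active = math.ceil(load / pic_capacity)  # fewest PICs that carry the load
    if active > n_pics:
        raise ValueError("load exceeds bundle capacity")
    return chassis_w + active * pic_w

# A 4 x 10 Gbps bundle carrying 25 Gbps needs only 3 of its 4 PICs.
print(bundle_power(25, 10, 4, chassis_w=50, pic_w=20))  # 110
print(bundle_power(0, 10, 4, chassis_w=50, pic_w=20))   # 0.0
```

    The step shape of this function is what makes the optimization hard: consolidating traffic may switch on an extra PIC elsewhere and erase the expected saving, which matches the paper's observation that consolidation does not always pay off.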

    Chaîne de services efficaces en énergie grâce à la virtualisation des fonctions réseaux

    Get PDF
    Service Function Chains (SFCs) are ordered sequences of network functions, such as firewalls. Using the new approaches of Software-Defined Networking (SDN) and Network Function Virtualization (NFV), network functions can be virtualized and executed on generic hardware. To optimize network management, it is thus crucial to dynamically place the network functions at the right positions in the network according to the network traffic. In this paper, we consider the problem of SFC placement with the goal of minimizing network energy consumption. We model the problem as an Integer Linear Program, which can be used to solve small instances. To solve larger instances, we propose GreenChains, a heuristic algorithm. We exhibit the benefits of dynamic routing and of NFV on the energy savings. We show that between 30% and 55% of energy can be saved for typical ISP networks, while respecting the SFC constraints.

    Affectation économe en énergie de chaînes de services

    Get PDF
    Network Function Virtualization (NFV) is a promising network architecture concept for reducing operational costs. In legacy networks, network functions, such as firewalls or TCP optimizers, are performed by dedicated hardware. In networks enabling NFV coupled with the Software Defined Network (SDN) paradigm, network functions can be implemented dynamically on generic hardware. This is of primary interest for implementing energy-efficient solutions, which implies dynamically adapting resource usage to demand. In this paper, we study how to use NFV coupled with SDN to improve the energy efficiency of networks. We consider a setting in which a flow has to go through a Service Function Chain, that is, several network functions in a specific order. We propose a decomposition model that relies on lightpath configuration to solve the problem. We show that virtualization yields between 30% and 55% energy savings for networks of different sizes.

    Metronome: adaptive and precise intermittent packet retrieval in DPDK

    Full text link
    DPDK (Data Plane Development Kit) is arguably today's most widely used framework for software packet processing. Its impressive performance, however, comes at the cost of precious CPU resources, dedicated to continuously polling the NICs. To address this issue, this paper presents Metronome, an approach devised to replace continuous DPDK polling with a sleep&wake intermittent mode. Metronome revolves around two main innovations. First, we design a microsecond time-scale sleep function, named hr_sleep(), which outperforms Linux's nanosleep() by more than one order of magnitude in precision when running threads with common time-sharing priorities. Then, we design, model, and assess an efficient multi-thread operation which guarantees service continuity and improved robustness against preemptive thread executions, as in common CPU-sharing scenarios, while providing controlled latency and high polling efficiency by dynamically adapting to the measured traffic load.
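    The "dynamically adapting to the measured traffic load" part can be pictured as a feedback loop on the sleep interval: a large batch of packets on wake-up means the thread slept too long, while an empty poll means it can back off and save CPU. The sketch below only mimics that adaptation logic with illustrative constants; hr_sleep() itself is a kernel-level timing primitive and is not reproduced here, nor is this Metronome's actual control law.

```python
def next_sleep_us(sleep_us, batch, target_batch=32, min_us=5, max_us=500):
    """Adapt the sleep interval to the measured load: halve it when a
    wake-up finds a big batch (we are falling behind), double it when a
    poll finds nothing (we can sleep longer and save cycles)."""
    if batch > target_batch:
        return max(min_us, sleep_us // 2)   # falling behind: wake sooner
    if batch == 0:
        return min(max_us, sleep_us * 2)    # idle: back off
    return sleep_us                         # load is acceptable: keep pace

# Two idle polls stretch the interval, two bursts shrink it back.
s = 100
for batch in [0, 0, 64, 64, 10]:
    s = next_sleep_us(s, batch)
print(s)  # 100 -> 200 -> 400 -> 200 -> 100 -> 100
```

    A multiplicative increase/decrease like this converges quickly toward an interval where each wake-up retrieves roughly the target batch, trading a bounded extra latency for much lower CPU usage than busy polling.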

    Energy Proportionality and Workload Consolidation for Latency-Critical Applications

    Get PDF
    Energy proportionality and workload consolidation are important objectives towards increasing efficiency in large-scale datacenters. Our work focuses on achieving these goals in the presence of applications with microsecond-scale tail latency requirements. Such applications represent a growing subset of datacenter workloads and are typically deployed on dedicated servers, which is the simplest way to ensure low tail latency across all loads. Unfortunately, it also leads to low energy efficiency and low resource utilization during the frequent periods of medium or low load. We present the OS mechanisms and dynamic control needed to adjust core allocation and voltage/frequency settings based on the measured delays for latency-critical workloads. This allows for energy proportionality and frees the maximum amount of resources per server for other background applications, while respecting service-level objectives. The two key mechanisms allow us to detect increases in queuing latencies and to re-assign flow groups between the threads of a latency-critical application in milliseconds without dropping or reordering packets. We compare the efficiency of our solution to the Pareto-optimal frontier of 224 distinct static configurations. Dynamic resource control saves 44%–54% of processor energy, which corresponds to 85%–93% of the Pareto-optimal upper bound. Dynamic resource control also allows background jobs to run at 32%–46% of their standalone throughput, which corresponds to 82%–92% of the Pareto bound.
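    The dynamic control described above is, at its core, a feedback loop from measured queuing delay to resource allocation. The sketch below shows one step of such a loop with hypothetical thresholds; it is an illustration of the idea, not the paper's controller, and all names and constants are invented.

```python
def adjust_cores(cores, measured_p99_us, slo_us,
                 min_cores=1, max_cores=16, slack=0.7):
    """One step of a latency-driven core-allocation loop: grow when the
    tail latency approaches the SLO, shrink when there is ample slack so
    the freed cores can run background jobs."""
    if measured_p99_us > slo_us:
        return min(max_cores, cores + 1)   # SLO violated: add a core
    if measured_p99_us < slack * slo_us:
        return max(min_cores, cores - 1)   # large slack: yield a core
    return cores                           # inside the comfort band

print(adjust_cores(4, measured_p99_us=120, slo_us=100))  # 5
print(adjust_cores(4, measured_p99_us=30, slo_us=100))   # 3
print(adjust_cores(4, measured_p99_us=80, slo_us=100))   # 4
```

    The hard part, which the abstract's OS mechanisms address, is executing such a decision in milliseconds: flow groups must migrate between threads without dropping or reordering packets while cores are added or removed.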

    Robust Energy-aware Routing with Redundancy Elimination

    Get PDF
    Many studies in the literature have shown that energy-aware routing (EAR) can significantly reduce energy consumption in backbone networks. In addition, the protocol-independent traffic redundancy elimination (RE) technique, an emerging concern in networking research, helps reduce (i.e., compress) the traffic load on backbone networks. Starting from a formulation perspective, we first present an extended model of the classical multi-commodity flow problem with compressible flows. Moreover, our model is robust to fluctuations in traffic demand and compression rate. In detail, we allow any set of a predefined size of traffic flows to deviate simultaneously from their nominal volumes or compression rates. As an application example, we use this model to combine redundancy elimination and energy-aware routing to increase energy efficiency in a backbone network. Using this extra knowledge of the dynamics of the traffic pattern, we are able to significantly increase the energy efficiency of the network. We formally define the problem and model it as a Mixed Integer Linear Program (MILP). We then propose an efficient heuristic algorithm that is suitable for large networks. Simulation results with real traffic traces on the Abilene, Geant, and Germany50 networks show that our approach allows for 16–28% extra energy savings with respect to the classical EAR model.
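    The robustness notion used here, letting at most a predefined number of flows deviate at once, implies that a link must be dimensioned for its nominal compressed load plus the worst combination of deviations. A minimal sketch of that worst-case load computation (illustrative only, not the paper's MILP):

```python
def robust_link_load(flows, gamma):
    """Worst-case load on a link when at most `gamma` flows deviate.

    Each flow is (nominal_volume, compression_rate, max_deviation): it
    normally contributes nominal * (1 - rate); an adversary may add
    max_deviation to at most `gamma` of the flows, and the worst case
    picks the `gamma` largest deviations.
    """
    base = sum(v * (1 - r) for v, r, _ in flows)
    deviations = sorted((d for _, _, d in flows), reverse=True)
    return base + sum(deviations[:gamma])

flows = [(10, 0.2, 3), (20, 0.5, 1), (5, 0.0, 4)]
print(robust_link_load(flows, gamma=0))  # 23.0 (nominal, fully trusted)
print(robust_link_load(flows, gamma=2))  # 30.0 (two worst deviations added)
```

    Checking capacity against this quantity instead of the nominal load is what lets links be put to sleep safely even though demands and compression rates fluctuate.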

    Energy-Aware Routing in Software-Defined Network using Compression

    Get PDF
    Software-Defined Networking (SDN) is a new networking paradigm enabling innovation through network programmability. Over the past few years, many applications have been built using SDN, such as server load balancing, virtual-machine migration, traffic engineering, and access control. In this paper, we focus on using SDN for energy-aware routing (EAR). Since traffic load has a small influence on the power consumption of routers, EAR allows putting unused links into sleep mode to save energy. An SDN controller can collect the traffic matrix and then compute routing solutions that satisfy QoS while minimizing energy consumption. However, prior works on EAR have assumed that an SDN switch's forwarding table can hold an infinite number of rules. In practice, this assumption does not hold, since flow tables are implemented in Ternary Content Addressable Memory (TCAM), which is expensive and power-hungry. We consider the use of wildcard rules to compress the forwarding tables. In this paper, we propose optimization methods to minimize energy consumption for a backbone network while respecting capacity constraints on links and rule-space constraints on routers. In detail, we present two exact formulations using Integer Linear Programming (ILP) and introduce efficient heuristic algorithms. Based on simulations on realistic network topologies, we show that with this smart rule-space allocation, it is possible to save almost as much power as with the classical EAR approach.
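    One simple form of wildcard compression replaces every rule pointing at the most common next hop with a single default rule, relying on the switch matching specific prefixes before the wildcard. This sketch illustrates the idea of trading rule-space for aggregation; it is a hypothetical example, not the paper's formulation.

```python
from collections import Counter

def compress_table(rules):
    """Replace every rule pointing at the most common next hop with a
    single wildcard ('*') default rule, freeing TCAM entries.

    rules: dict prefix -> next_hop. Returns the compressed table.
    Correct under longest-prefix-match: '*' only catches traffic whose
    specific prefix was removed, and all of it went to that next hop.
    """
    most_common_hop, _ = Counter(rules.values()).most_common(1)[0]
    compressed = {p: h for p, h in rules.items() if h != most_common_hop}
    compressed["*"] = most_common_hop
    return compressed

table = {"10.0.0.0/24": 1, "10.0.1.0/24": 1,
         "10.0.2.0/24": 2, "10.0.3.0/24": 1}
small = compress_table(table)
print(len(table), "->", len(small))  # 4 -> 2
```

    The optimization problem in the paper couples this kind of compression with routing: the chosen routes determine how many rules each switch needs, and the rule-space budget in turn constrains which routes are feasible.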

    Bridging the gap between dataplanes and commodity operating systems

    Get PDF
    The conventional wisdom is that aggressive networking requirements, such as high packet rates for small messages and microsecond-scale tail latency, are best addressed outside the kernel, in a user-level networking stack. In particular, dataplanes borrow design elements from network middleboxes to run tasks to completion in tight loops. In its basic form, the dataplane design leverages sweeping simplifications such as the elimination of any resource management and any task scheduling to improve throughput and lower latency. As a result, dataplanes perform best when the request rate is predictable (since there is no resource management) and each task has a short execution time with low dispersion. On the other hand, they exhibit poor energy proportionality and workload consolidation, and suffer from head-of-line blocking. This thesis proposes the introduction of resource management to dataplanes. Current dataplanes decrease latency by constantly polling for incoming network packets. This approach trades energy usage for latency. We argue that it is possible to introduce a control plane, which manages the resources optimally in terms of power usage without affecting the performance of the dataplane. Additionally, this thesis proposes the introduction of scheduling to dataplanes. Current designs operate in a strict FIFO and run-to-completion manner. This method is effective only when each incoming request requires a minimal amount of processing, on the order of a few microseconds. When the processing time of requests is (a) longer or (b) follows a distribution with higher dispersion, transient load imbalances and head-of-line blocking deteriorate the performance of the dataplane. We claim that it is possible to introduce a scheduler to dataplanes, which routes requests to the appropriate core and effectively reduces the tail latency of the system while supporting a wider range of workloads.
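    The head-of-line blocking argument can be made concrete with a toy comparison: in a single FIFO run-to-completion queue, one long request delays every short request behind it, whereas dispatching each request to the least-loaded core isolates the short ones. This is an illustrative model, not the thesis's scheduler.

```python
import itertools

def completion_times_fifo(jobs):
    """Single FIFO run-to-completion queue: each job waits for all
    the jobs ahead of it."""
    return list(itertools.accumulate(jobs))

def completion_times_jsq(jobs, cores=2):
    """Dispatch each job to the currently least-loaded of `cores` cores
    (join-shortest-queue); jobs on different cores run in parallel."""
    load = [0.0] * cores
    done = []
    for j in jobs:
        k = load.index(min(load))  # pick the least-loaded core
        load[k] += j
        done.append(load[k])
    return done

# One 100 us request ahead of four 1 us requests.
jobs = [100, 1, 1, 1, 1]
print(completion_times_fifo(jobs)[1:])    # [101, 102, 103, 104]
print(completion_times_jsq(jobs, 2)[1:])  # [1.0, 2.0, 3.0, 4.0]
```

    The short requests finish in microseconds instead of waiting behind the long one, which is exactly the tail-latency improvement a dataplane scheduler is after once request service times become long or highly dispersed.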