
    Joint Energy Efficient and QoS-aware Path Allocation and VNF Placement for Service Function Chaining

    Service Function Chaining (SFC) allows the forwarding of a traffic flow along a chain of Virtual Network Functions (VNFs, e.g., IDS, firewall, and NAT). Software Defined Networking (SDN) solutions can be used to support SFC, reducing management complexity and operational costs. One of the most critical issues for service and network providers is the reduction of energy consumption, which should be achieved without impacting the quality of service. In this paper, we propose a novel resource (re)allocation architecture which enables energy-aware SFC for SDN-based networks. To this end, we model the problems of VNF placement, allocation of VNFs to flows, and flow routing as optimization problems. Thereafter, heuristic algorithms are proposed for the different optimization problems, in order to find near-optimal solutions in acceptable times. The performance of the proposed algorithms is numerically evaluated over a real-world topology and various network traffic patterns. The results confirm that the proposed heuristic algorithms provide near-optimal solutions while their execution times are suitable for real-life networks.
    Comment: Extended version of submitted paper - v7 - July 201
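    The abstract only names the heuristics; as a loose illustration of the general idea (not the authors' algorithm), the following Python sketch assigns each flow's VNF chain greedily, preferring servers that are already powered on and otherwise waking the node with the lowest idle power. The node and flow structures and the idle-power cost model are assumptions made for this example.

    # Greedy energy-aware VNF placement sketch (illustrative only).
    # Assumed model (not from the paper): each node has a CPU capacity and
    # pays an idle power cost once it hosts any VNF; each flow requests an
    # ordered chain of VNF types, each with a CPU demand.

    def place_chains(nodes, flows):
        """nodes: {name: {"cap": float, "idle_power": float}}
        flows: [{"id": str, "chain": [(vnf_type, cpu_demand), ...]}]
        Returns (placement, set of nodes that must stay powered on)."""
        used = {n: 0.0 for n in nodes}        # CPU already allocated per node
        active = set()                        # nodes that must stay on
        placement = {}                        # (flow_id, position) -> node

        for flow in flows:
            for pos, (vnf, demand) in enumerate(flow["chain"]):
                # Prefer nodes that are already on (no extra idle power),
                # otherwise wake the node with the lowest idle power.
                candidates = sorted(
                    (n for n in nodes if used[n] + demand <= nodes[n]["cap"]),
                    key=lambda n: (n not in active, nodes[n]["idle_power"]),
                )
                if not candidates:
                    raise RuntimeError(f"no capacity for {flow['id']}:{vnf}")
                chosen = candidates[0]
                used[chosen] += demand
                active.add(chosen)
                placement[(flow["id"], pos)] = chosen
        return placement, active

    if __name__ == "__main__":
        nodes = {"n1": {"cap": 10, "idle_power": 80},
                 "n2": {"cap": 10, "idle_power": 120}}
        flows = [{"id": "f1", "chain": [("firewall", 4), ("nat", 3)]},
                 {"id": "f2", "chain": [("ids", 5)]}]
        print(place_chains(nodes, flows))

    A real solver would additionally route each flow through its chosen nodes under link-capacity and QoS constraints, which is why the paper formulates the joint placement and routing problem as optimization models before resorting to heuristics.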

    Virtual Machines Embedding for Cloud PON AWGR and Server Based Data Centres

    In this study, we investigate the embedding of various cloud applications in PON AWGR and Server Based Data Centres.

    Migration energy aware reconfigurations of virtual network function instances in NFV architectures

    Network function virtualization (NFV) is a new network architecture framework that implements network functions in software running on a pool of shared commodity servers. NFV can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape. Any service is represented by a service function chain (SFC), that is, a set of VNFs to be executed according to a given order. Running the VNFs requires the instantiation of VNF instances (VNFIs), which are software modules executed on virtual machines. This paper deals with the problem of migrating the VNFIs during low-traffic periods so as to turn off servers and consequently save energy. Though consolidation allows for energy saving, it also has negative effects, such as quality of service degradation and the energy consumed in moving the memory associated with the VNFIs to be migrated. We focus on cold migration, in which virtual machines are suspended before the migration is performed. We propose a migration policy that determines when and where to migrate VNFIs in response to changes in SFC request intensity. The objective is to minimize the total energy consumption, given by the sum of the consolidation and migration energies. We formulate the energy-aware VNFI migration problem and, after proving that it is NP-hard, we propose a heuristic based on the Viterbi algorithm that determines the migration policy with low computational complexity. The results obtained with the proposed heuristic show how the introduced policy reduces the migration energy and consequently the total energy consumption with respect to traditional policies. The energy saving can be on the order of 40% with respect to a policy in which migration is not performed.
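    The Viterbi-based heuristic itself is not given in the abstract; the sketch below only illustrates the underlying dynamic-programming idea of choosing, slot by slot, a placement state that minimizes accumulated consolidation-plus-migration energy, which is the structure a Viterbi-style recursion exploits. The state set and the two energy functions are placeholders for this example, not the authors' model.

    # Viterbi-style migration policy sketch (illustrative only).
    # Assumed model (not from the paper): a "state" is a candidate VNFI
    # placement, running_energy(state, t) is the consolidation energy of
    # keeping that placement during slot t, and migration_energy(a, b) is
    # the energy of moving VM memory when switching placements.

    def best_policy(states, horizon, running_energy, migration_energy, start):
        # cost[s] = cheapest total energy of ending slot t in state s
        cost = {s: migration_energy(start, s) + running_energy(s, 0) for s in states}
        back = [{s: start for s in states}]

        for t in range(1, horizon):
            new_cost, back_t = {}, {}
            for s in states:
                prev = min(states,
                           key=lambda p: cost[p] + migration_energy(p, s))
                new_cost[s] = cost[prev] + migration_energy(prev, s) + running_energy(s, t)
                back_t[s] = prev
            cost, back = new_cost, back + [back_t]

        # Backtrack the cheapest sequence of placements.
        last = min(states, key=cost.get)
        path = [last]
        for t in range(horizon - 1, 0, -1):
            path.append(back[t][path[-1]])
        return list(reversed(path)), cost[last]

    if __name__ == "__main__":
        states = ["2-servers", "1-server"]
        run = lambda s, t: (200 if s == "2-servers" else 110) * (1 if t < 2 else 2)
        mig = lambda a, b: 0 if a == b else 50
        print(best_policy(states, horizon=4, running_energy=run,
                          migration_energy=mig, start="2-servers"))

    For S candidate placements and T slots this recursion needs O(T·S²) cost evaluations, which illustrates how a policy of this shape can stay computationally cheap.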

    Online Algorithms for Geographical Load Balancing

    It has recently been proposed that Internet energy costs, both monetary and environmental, can be reduced by exploiting temporal variations and shifting processing to data centers located in regions where energy currently has low cost. Lightly loaded data centers can then turn off surplus servers. This paper studies online algorithms for determining the number of servers to leave on in each data center, and then uses these algorithms to study the environmental potential of geographical load balancing (GLB). A commonly suggested algorithm for this setting is “receding horizon control” (RHC), which computes the provisioning for the current time by optimizing over a window of predicted future loads. We show that RHC performs well in a homogeneous setting, in which all servers can serve all jobs equally well; however, we also prove that differences in propagation delays, servers, and electricity prices can cause RHC to perform badly. So, we introduce variants of RHC that are guaranteed to perform as well in the face of such heterogeneity. These algorithms are then used to study the feasibility of powering a continent-wide set of data centers mostly by renewable sources, and to understand what portfolio of renewable energy is most effective.
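    RHC is described above only in words; the sketch below shows the mechanic for a single data center under a toy cost model (linear energy per active server plus a fixed cost for every server switched on): at each slot the controller optimizes over a window of predicted loads, commits only the current-slot decision, and slides the window forward. The cost parameters and the simplified handling of the window tail are assumptions made for this example, not the paper's formulation.

    # Receding horizon control (RHC) sketch for one data center (illustrative only).

    def rhc_step(prev_on, predicted_loads, max_servers,
                 energy_per_server=1.0, switch_cost=6.0):
        """Return the number of servers to keep on for the current slot."""
        best_first, best_cost = max_servers, float("inf")
        for first in range(predicted_loads[0], max_servers + 1):
            # Current slot: energy plus a cost for every server switched on
            # relative to the previous slot.
            cost = energy_per_server * first + switch_cost * max(0, first - prev_on)
            on = first
            # Stand-in for the tail of the window: provision exactly the
            # predicted load in each later slot.
            for load in predicted_loads[1:]:
                cost += energy_per_server * load + switch_cost * max(0, load - on)
                on = load
            if cost < best_cost:
                best_first, best_cost = first, cost
        return best_first

    if __name__ == "__main__":
        loads = [3, 5, 2, 7, 4, 1]          # predicted demand per slot, in servers
        window, on, plan = 3, 0, []
        for t in range(len(loads)):
            on = rhc_step(on, loads[t:t + window], max_servers=10)
            plan.append(on)
        print(plan)

    The GLB setting in the paper additionally couples many data centers through routing decisions, propagation delays, and heterogeneous electricity prices, which is exactly where plain RHC is shown to break down.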

    An Algorithm for Network and Data-aware Placement of Multi-Tier Applications in Cloud Data Centers

    Today's Cloud applications are dominated by composite applications comprising multiple computing and data components with strong communication correlations among them. Although Cloud providers are deploying large numbers of computing and storage devices to address the ever-increasing demand for computing and storage resources, network resource demands are emerging as one of the key areas of performance bottleneck. This paper addresses network-aware placement of the virtual components (computing and data) of multi-tier applications in data centers and formally defines the placement as an optimization problem. The simultaneous placement of Virtual Machines and data blocks aims at reducing the network overhead of the data center network infrastructure. A greedy heuristic is proposed for the on-demand placement of application components that localizes network traffic in the data center interconnect. Such optimization helps reduce communication overhead in upper-layer network switches, which eventually reduces the overall traffic volume across the data center. This, in turn, helps reduce packet transmission delay, increase network performance, and minimize the energy consumption of network components. Experimental results demonstrate the performance superiority of the proposed algorithm over other approaches, where it outperforms the state-of-the-art network-aware application placement algorithm across all performance metrics, reducing the average network cost by up to 67% and network usage at core switches by up to 84%, as well as increasing the average number of application deployments by up to 18%.
    Comment: Submitted for publication consideration to the Journal of Network and Computer Applications (JNCA). Total pages: 28. Number of figures: 15
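    The greedy heuristic is only characterized above as "localizing network traffic"; as a rough illustration (not the paper's algorithm), the sketch below places component pairs in decreasing order of exchanged traffic, preferring the same host, then the same rack, so that the heaviest flows never reach core switches. The host, rack, and traffic structures are assumptions made for this example.

    # Network-aware co-placement sketch for VMs and data blocks (illustrative only).

    def greedy_place(components, traffic, hosts):
        """components: {name: cpu_demand}
        traffic: {(a, b): volume}           (undirected pairs)
        hosts:   {host: {"rack": str, "cap": float}}"""
        used = {h: 0.0 for h in hosts}
        where = {}

        def fits(h, c):
            return used[h] + components[c] <= hosts[h]["cap"]

        def pick(c, near=None):
            # Prefer the host of `near`, then any host in its rack, then the
            # least-loaded host overall.
            order = sorted(hosts, key=lambda h: (
                near is None or h != where[near],
                near is None or hosts[h]["rack"] != hosts[where[near]]["rack"],
                used[h]))
            for h in order:
                if fits(h, c):
                    used[h] += components[c]
                    where[c] = h
                    return
            raise RuntimeError(f"no capacity for {c}")

        # Heaviest-talking pairs first, so they land on the same host or rack.
        for (a, b), _ in sorted(traffic.items(), key=lambda kv: -kv[1]):
            if a not in where and b not in where:
                pick(a)
                pick(b, near=a)
            elif a in where and b not in where:
                pick(b, near=a)
            elif b in where and a not in where:
                pick(a, near=b)
        # Components with no recorded traffic go wherever they fit.
        for c in components:
            if c not in where:
                pick(c)
        return where

    if __name__ == "__main__":
        comps = {"web": 2, "app": 3, "db": 4, "blk1": 1}
        traff = {("app", "db"): 90, ("web", "app"): 40, ("db", "blk1"): 70}
        hosts = {"h1": {"rack": "r1", "cap": 8}, "h2": {"rack": "r1", "cap": 8},
                 "h3": {"rack": "r2", "cap": 8}}
        print(greedy_place(comps, traff, hosts))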

    Power Management Techniques for Data Centers: A Survey

    With the growing use of the Internet and the exponential growth in the amount of data to be stored and processed (known as 'big data'), the size of data centers has greatly increased. This, however, has resulted in a significant increase in the power consumption of data centers. For this reason, managing the power consumption of data centers has become essential. In this paper, we highlight the need for achieving energy efficiency in data centers and survey several recent architectural techniques designed for power management of data centers. We also present a classification of these techniques based on their characteristics. This paper aims to provide insights into the techniques for improving the energy efficiency of data centers and to encourage designers to invent novel solutions for managing the large power dissipation of data centers.
    Comment: Keywords: Data Centers, Power Management, Low-power Design, Energy Efficiency, Green Computing, DVFS, Server Consolidation