3,033 research outputs found

    Energy-aware dynamic virtual machine consolidation for cloud datacenters


    Migration energy aware reconfigurations of virtual network function instances in NFV architectures

    Network function virtualization (NFV) is a new network architecture framework that implements network functions in software running on a pool of shared commodity servers. NFV can provide the infrastructure flexibility and agility needed to compete successfully in today's evolving communications landscape. Any service is represented by a service function chain (SFC), i.e., a set of VNFs to be executed in a given order. Running VNFs requires the instantiation of VNF instances (VNFIs), software modules executed on virtual machines. This paper deals with the problem of migrating VNFIs during low-traffic periods so that servers can be turned OFF and energy consumption reduced. Although consolidation saves energy, it also has negative effects, such as quality-of-service degradation and the energy consumed to move the memory state of the VNFIs being migrated. We focus on cold migration, in which virtual machines are redundant and are suspended before the migration is performed. We propose a migration policy that determines when and where to migrate VNFIs in response to changes in SFC request intensity. The objective is to minimize the total energy consumption, given by the sum of the consolidation and migration energies. We formulate the energy-aware VNFI migration problem and, after proving that it is NP-hard, propose a heuristic based on the Viterbi algorithm that determines the migration policy with low computational complexity. The results obtained with the proposed heuristic show that the introduced policy reduces the migration energy and consequently the total energy consumption with respect to traditional policies. The energy saving can be on the order of 40% with respect to a policy in which migration is not performed.
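
    To make the Viterbi-based idea above concrete, the following is a minimal, purely illustrative sketch (not the authors' implementation): a Viterbi-style dynamic program that chooses, slot by slot, which server hosts a VNFI so that the sum of hosting (consolidation) energy and migration energy is minimized. The function name, data structures, and cost inputs are all assumptions.

```python
# Illustrative sketch only: a Viterbi-style dynamic program over server
# placements of a single VNFI. All names and cost inputs are assumptions,
# not taken from the paper.

def viterbi_migration_policy(servers, slots, hosting_energy, migration_energy):
    """Return (total energy, placement per slot) minimizing hosting + migration energy.

    servers          -- list of server ids
    slots            -- number of time slots to plan over
    hosting_energy   -- dict: (server, slot) -> energy of hosting the VNFI there
    migration_energy -- dict: (src, dst) -> energy of moving the VNFI (0 if src == dst)
    """
    # best[s] = (accumulated energy, placement path) for plans ending at server s
    best = {s: (hosting_energy[(s, 0)], [s]) for s in servers}

    for t in range(1, slots):
        new_best = {}
        for dst in servers:
            # Cheapest way to be on `dst` at slot t, given where we were at t-1.
            src = min(servers,
                      key=lambda s: best[s][0] + migration_energy[(s, dst)])
            cost = best[src][0] + migration_energy[(src, dst)] + hosting_energy[(dst, t)]
            new_best[dst] = (cost, best[src][1] + [dst])
        best = new_best

    return min(best.values(), key=lambda v: v[0])
```

    With per-slot hosting energies that reflect how loaded each server is, the returned path indicates in which slots the VNFI should stay put and when a migration pays for itself.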

    Offline and online power aware resource allocation algorithms with migration and delay constraints

    In order to handle advanced mobile broadband services and the Internet of Things (IoT), future Internet and 5G networks are expected to leverage network virtualization, be much faster, have greater capacity, provide lower latency, and be significantly more power efficient than current mobile technologies. This paper therefore proposes three power-aware algorithms for offline, online, and migration scenarios that solve the resource allocation problem in network function virtualization (NFV) environments in fractions of a second. The proposed algorithms aim to minimize the total cost and power consumption of the physical network by allocating the least physical resources sufficient to host the demands of the virtual network services and putting all other unused physical components into saving mode. In simulations, the offline algorithm achieved total costs 32% lower than the state of the art. The online algorithm was tested in four different experiments, and the results showed that the overall power consumption of the physical network depends strongly on the demands' lifetimes and on the strictness of the required end-to-end delay. Regarding migration during online operation, the results indicate that the proposed algorithms are most effective when applied under maintenance and emergency conditions.
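
    As a rough illustration of the allocation step described above (a sketch under assumed data structures, not the paper's algorithms), a first-fit-decreasing heuristic packs demands onto as few servers as possible and reports which servers can be switched to saving mode; the delay check below is a placeholder for the end-to-end delay constraint.

```python
# Illustrative sketch only: power-aware first-fit-decreasing allocation.
# Server/demand fields and the delay check are assumptions, not the paper's model.

from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    cpu_capacity: float
    cpu_used: float = 0.0
    hosted: list = field(default_factory=list)

def allocate(demands, servers, delay_ok):
    """demands: list of (name, cpu, delay_budget) tuples.
    delay_ok(server, delay_budget) -> bool stands in for the end-to-end delay check."""
    # Placing large demands first tends to leave more servers completely idle.
    for name, cpu, delay_budget in sorted(demands, key=lambda d: -d[1]):
        for srv in servers:
            if srv.cpu_capacity - srv.cpu_used >= cpu and delay_ok(srv, delay_budget):
                srv.cpu_used += cpu
                srv.hosted.append(name)
                break
        else:
            raise RuntimeError(f"no feasible server for demand {name}")
    # Servers that host nothing can be put into saving mode.
    return [srv.name for srv in servers if not srv.hosted]
```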

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services and achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, so load coordination must happen automatically and the distribution of services must change in response to changes in load. To address this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) to handle sudden variations in service demand. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in response time and cost savings under dynamic workload scenarios.
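
    A toy sketch of the load-coordination idea (illustrative only; the field names, scoring rule, and numbers are assumptions, not the InterCloud or CloudSim design): a broker ranks federated data centers by a weighted mix of current utilization and network latency to the requesting region and places each new service request on the best-ranked site.

```python
# Illustrative sketch only: a simple federation broker. Field names, the
# scoring rule, and the example numbers are assumptions.

def pick_datacenter(datacenters, user_region, weight_latency=0.5):
    """Choose the data center minimizing a weighted mix of latency and utilization."""
    def score(dc):
        latency_ms = dc["latency"][user_region]
        return weight_latency * latency_ms + (1 - weight_latency) * dc["utilization"] * 100
    return min(datacenters, key=score)["name"]

# Example: a request from Europe lands on the nearby, moderately loaded site.
sites = [
    {"name": "us-east", "utilization": 0.3, "latency": {"eu": 90, "us": 10}},
    {"name": "eu-west", "utilization": 0.6, "latency": {"eu": 15, "us": 95}},
]
print(pick_datacenter(sites, "eu"))  # -> eu-west
```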