349 research outputs found

    Energy Efficient Cloud Networks

    Cloud computing is expected to be a major factor dominating the future Internet service model. This paper summarizes our work on energy efficiency for cloud networks. We develop a framework for studying the energy efficiency of four cloud services in IP over WDM networks: cloud content delivery, storage as a service (StaaS), virtual machine (VM) placement for processing applications, and infrastructure as a service (IaaS). Our approach is based on the co-optimization of external, network-related factors, such as whether to geographically centralize or distribute the clouds, the influence of users' demand distribution, content popularity, access frequency and renewable energy availability, and internal capability factors, such as the number of servers, switches and routers, as well as the amount of storage demanded in each cloud. Our investigation of the different energy-efficient approaches is backed with Mixed Integer Linear Programming (MILP) models and real-time heuristics.
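
    The paper's MILP models are not reproduced in the abstract, but the flavour of the placement problem can be sketched as a small MILP in PuLP: choose where to replicate content across candidate cloud nodes so that server power plus transport power is minimised. All node names, demands and power figures in the sketch are illustrative assumptions, not values from the paper.

```python
# Minimal MILP sketch (not the paper's model): decide whether to centralize or
# distribute content replicas so that server power plus transport power is
# minimised. Node names, demands and power figures are illustrative assumptions.
import pulp

nodes = ["A", "B", "C"]                      # candidate cloud locations (hypothetical)
demand = {"A": 40, "B": 25, "C": 10}         # user demand per node in Gb/s (assumed)
server_power = 300                           # W per hosted replica (assumed)
transport_power = 5                          # W per Gb/s served from a remote node (assumed)

prob = pulp.LpProblem("content_placement", pulp.LpMinimize)

# y[n] = 1 if a replica is placed at node n
y = pulp.LpVariable.dicts("replica", nodes, cat="Binary")
# x[u][n] = fraction of node u's demand served by node n
x = pulp.LpVariable.dicts("serve", (nodes, nodes), lowBound=0, upBound=1)

# Objective: server power for replicas + transport power for remotely served demand
prob += (pulp.lpSum(server_power * y[n] for n in nodes)
         + pulp.lpSum(transport_power * demand[u] * x[u][n]
                      for u in nodes for n in nodes if n != u))

for u in nodes:
    # every node's demand must be fully served
    prob += pulp.lpSum(x[u][n] for n in nodes) == 1
    for n in nodes:
        # demand can only be served from nodes that host a replica
        prob += x[u][n] <= y[n]

prob.solve()
print(pulp.LpStatus[prob.status])
print({n: int(y[n].value()) for n in nodes})
```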

    Towards Elastic Virtual Machine Placement in Overbooked OpenStack Clouds under Uncertainty

    Cloud computing datacenters currently provide millions of virtual machines in highly dynamic Infrastructure as a Service (IaaS) markets. As a first step towards implementing algorithms previously proposed by the authors for Virtual Machine Placement (VMP) in a real-world IaaS middleware, this work presents an experimental comparison of these algorithms against the algorithms currently considered for solving VMP problems in OpenStack. Several experiments using scenario-based simulations for uncertainty modelling demonstrate that the proposed algorithms show promising results for implementation in real-world operations. Next research steps are also summarized.
    Facultad de Informática
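
    For a rough sense of this kind of experimental comparison (the sketch is neither the authors' algorithms nor OpenStack's actual scheduler), the toy example below places VM requests with two simple heuristics over randomly generated scenarios and compares how many requests each accepts. Host count, capacities and the demand distribution are assumptions.

```python
# Toy scenario-based comparison: place VM requests with a first-fit heuristic and
# a best-fit (most-loaded feasible host) heuristic, then compare how many requests
# each accepts on average. Capacities and the demand distribution are assumed.
import random

HOSTS = 4
CAPACITY = 16          # vCPUs per host (assumed)
random.seed(0)

def first_fit(candidates, used):
    return candidates[0]                              # lowest-index feasible host

def best_fit(candidates, used):
    return max(candidates, key=lambda h: used[h])     # most-loaded feasible host

def place(requests, pick_host):
    used = [0] * HOSTS
    accepted = 0
    for vcpus in requests:
        candidates = [h for h in range(HOSTS) if used[h] + vcpus <= CAPACITY]
        if candidates:
            h = pick_host(candidates, used)
            used[h] += vcpus
            accepted += 1
    return accepted

# 400 scenarios of 30 VM requests each, mirroring a scenario-based uncertainty model
scenarios = [[random.choice([1, 2, 4, 8]) for _ in range(30)] for _ in range(400)]
for name, policy in [("first-fit", first_fit), ("best-fit", best_fit)]:
    accepted = [place(s, policy) for s in scenarios]
    print(name, "mean accepted VMs:", sum(accepted) / len(accepted))
```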

    Two-Phase Virtual Machine Placement Algorithms for Cloud Computing: An Experimental Evaluation under Uncertainty

    Cloud computing providers must support requests for resources in dynamic environments, considering service elasticity and overbooking of physical resources. Due to the randomness of requests, Virtual Machine Placement (VMP) problems should be formulated under uncertainty. In this context, a renewed formulation of the VMP problem is presented, considering the optimization of four objective functions: (i) power consumption, (ii) economical revenue, (iii) resource utilization and (iv) reconfiguration time. To solve the presented formulation, a two-phase optimization scheme is considered, composed of an online incremental VMP phase (iVMP) and an offline VMP reconfiguration (VMPr) phase. An experimental evaluation of five algorithms across 400 different scenarios was performed, considering three VMPr Triggering and two VMPr Recovering methods as well as three VMPr resolution alternatives. Experimental results indicate which algorithm outperformed the other evaluated algorithms, improving the quality of solutions in a scenario-based uncertainty model under the following evaluation criteria: (i) average, (ii) maximum and (iii) minimum objective function costs.
    Sociedad Argentina de Informática e Investigación Operativa (SADIO)
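
    A minimal sketch of the two-phase idea, assuming a single vCPU dimension: an online first-fit placement stands in for iVMP, and a first-fit-decreasing repack stands in for VMPr, triggered by a simple fragmentation threshold. The fragmentation metric and threshold are illustrative assumptions, not the paper's VMPr Triggering or Recovering methods.

```python
# Two-phase sketch: iVMP handles each arriving VM immediately; VMPr is triggered
# when fragmentation exceeds a threshold and re-packs all running VMs.
from dataclasses import dataclass, field

CAPACITY = 32          # vCPUs per host (assumed)

@dataclass
class Host:
    vms: dict = field(default_factory=dict)   # vm_id -> vcpus
    def used(self):
        return sum(self.vms.values())

def ivmp_place(hosts, vm_id, vcpus):
    """Online phase: first feasible host, no migrations."""
    for h in hosts:
        if h.used() + vcpus <= CAPACITY:
            h.vms[vm_id] = vcpus
            return True
    return False

def fragmentation(hosts):
    """Fraction of powered-on hosts that are less than half full (assumed metric)."""
    on = [h for h in hosts if h.vms]
    return sum(1 for h in on if h.used() < CAPACITY / 2) / len(on) if on else 0.0

def vmpr_reconfigure(hosts):
    """Offline phase: re-pack all VMs first-fit decreasing to consolidate hosts."""
    all_vms = sorted(((vid, c) for h in hosts for vid, c in h.vms.items()),
                     key=lambda t: -t[1])
    for h in hosts:
        h.vms.clear()
    for vid, c in all_vms:
        ivmp_place(hosts, vid, c)

hosts = [Host() for _ in range(8)]
for i, vcpus in enumerate([2, 4, 8, 2, 16, 4, 8, 2, 4]):
    ivmp_place(hosts, f"vm{i}", vcpus)
    if fragmentation(hosts) > 0.5:            # VMPr triggering condition (assumed)
        vmpr_reconfigure(hosts)
print([h.used() for h in hosts])
```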

    epcAware: a game-based, energy, performance and cost efficient resource management technique for multi-access edge computing

    The Internet of Things (IoT) is producing an extraordinary volume of data daily, and that data may become useless on its way to the cloud for analysis due to long distances and delays. Fog/edge computing is a new model for analyzing and acting on time-sensitive data (real-time applications) at the network edge, adjacent to where it is produced; only selected data is sent to the cloud for analysis and long-term storage. Furthermore, cloud services provided by large companies such as Google can also be localized to minimize response time and increase service agility. This can be accomplished by deploying small-scale datacenters (referred to as cloudlets) where needed, closer to customers (IoT devices) and connected to a centralised cloud through networks, which together form a multi-access edge cloud (MEC). The MEC setup involves three different parties, i.e. infrastructure providers (IaaS), application providers (SaaS) and network providers (NaaS), which may have different goals, making resource management a difficult job. In the literature, various resource management techniques have been suggested regarding what kind of services should be hosted and how the available resources should be allocated to customers' applications, particularly when mobility is involved. However, the existing literature considers the resource management problem with respect to a single party. In this paper, we consider resource management with respect to all three parties, i.e. IaaS, SaaS and NaaS, and suggest a game-theoretic resource management technique that minimises infrastructure energy consumption and costs while ensuring application performance. Our empirical evaluation, using real workload traces from Google's cluster, suggests that our approach can reduce energy consumption by up to 11.95% and user costs by approximately 17.86% with negligible loss in performance. Moreover, IaaS providers can reduce their energy bills by up to 20.27% and NaaS providers can increase their cost savings by up to 18.52% compared to other methods.
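
    The game formulation itself is not given in the abstract; the sketch below only illustrates best-response dynamics between two of the three parties (an IaaS player powering edge servers on or off and a SaaS player splitting load between edge and cloud). All cost functions and coefficients are assumed for illustration, not taken from epcAware.

```python
# Illustrative best-response sketch (not epcAware itself): an infrastructure
# provider (IaaS) chooses how many edge servers to power on, while an application
# provider (SaaS) chooses how much IoT load to keep at the edge rather than send
# to the distant cloud; each repeatedly minimises its own cost given the other's
# last choice. All coefficients below are illustrative assumptions.
TOTAL_LOAD = 100                 # requests/s from IoT devices (assumed)
MAX_SERVERS = 20
LOAD_GRID = range(0, TOTAL_LOAD + 1, 5)

def iaas_cost(servers, edge_load):
    energy = 10.0 * servers                               # energy cost per active server (assumed)
    sla_penalty = 5.0 * max(edge_load - 8 * servers, 0)   # penalty when edge capacity is exceeded
    return energy + sla_penalty

def saas_cost(servers, edge_load):
    wan_latency = 50.0 * (TOTAL_LOAD - edge_load)         # cost of sending load to the remote cloud
    edge_delay = edge_load ** 2 / servers                 # congestion delay at the edge (assumed)
    return wan_latency + edge_delay

servers, edge_load = 1, 0
for _ in range(20):                                       # best-response iterations
    servers = min(range(1, MAX_SERVERS + 1), key=lambda s: iaas_cost(s, edge_load))
    new_load = min(LOAD_GRID, key=lambda l: saas_cost(servers, l))
    if new_load == edge_load:
        break                                             # a pure-strategy equilibrium was reached
    edge_load = new_load
print(f"equilibrium: {servers} edge servers, {edge_load} req/s kept at the edge")
```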