
    3E: Energy-Efficient Elastic Scheduling for Independent Tasks in Heterogeneous Computing Systems

    Reducing energy consumption is a major design constraint for modern heterogeneous computing systems, in order to minimize electricity cost, improve system reliability and protect the environment. Conventional energy-efficient scheduling strategies developed for these systems do not sufficiently exploit the system's elasticity and adaptability for maximum energy savings, and do not simultaneously take into account user-expected finish times. In this paper, we develop a novel scheduling strategy named energy-efficient elastic (3E) scheduling for aperiodic, independent and non-real-time tasks with user-expected finish times on DVFS-enabled heterogeneous computing systems. The 3E strategy adjusts processors' supply voltages and frequencies according to the system workload, and makes trade-offs between energy consumption and user-expected finish times. Compared with other energy-efficient strategies, 3E significantly improves scheduling quality and effectively enhances system elasticity.
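
    As a rough illustration of the trade-off this abstract describes (and not the 3E algorithm itself), the sketch below picks, for a single task, the lowest DVFS frequency that still meets a user-expected finish time under an assumed cubic power-versus-frequency model; the cycle count, frequency list and power constant are invented.

        # Hedged sketch, not the 3E strategy: choose the most energy-efficient
        # frequency that still meets the user-expected finish time, assuming
        # dynamic power grows roughly as f^3 (P ~ C * V^2 * f with V ~ f).
        def pick_frequency(cycles, expected_finish_s, freqs_hz):
            best = None
            for f in sorted(freqs_hz):              # try slower frequencies first
                runtime = cycles / f                # execution time at frequency f
                if runtime > expected_finish_s:
                    continue                        # would miss the expected finish time
                power_w = 1e-27 * f ** 3            # hypothetical cubic power model
                energy_j = power_w * runtime
                if best is None or energy_j < best[1]:
                    best = (f, energy_j)
            return best                             # (frequency, energy) or None

        # Example: 2e9 cycles, a 2-second expected finish time, three P-states.
        print(pick_frequency(2e9, 2.0, [1.0e9, 1.6e9, 2.4e9]))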

    Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., hardware, power units, cooling and software) and work holistically to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider quality-of-service expectations and device power usage characteristics; and (c) a novel software technology for energy-efficient management of Clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the Cloud computing model has immense potential, as it offers significant performance gains with regard to response time and cost savings under dynamic workload scenarios.
    Comment: 12 pages, 5 figures. Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 2010.
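
    The paper's allocation policies are not reproduced here, but a minimal sketch can convey the idea commonly used in this line of work: model host power as linear in CPU utilization and consolidate VMs onto as few hosts as possible so idle hosts can be switched off. The host figures (idle/peak power, MIPS capacity) are assumptions, and the first-fit-decreasing heuristic is a generic stand-in rather than the paper's algorithms.

        # Illustrative only: linear utilization-to-power model plus a
        # first-fit-decreasing (FFD) consolidation heuristic.
        P_IDLE_W, P_MAX_W, HOST_MIPS = 100.0, 250.0, 4000.0   # hypothetical host figures

        def host_power(utilization):
            # Power (W) of one active host at CPU utilization in [0, 1].
            return P_IDLE_W + (P_MAX_W - P_IDLE_W) * utilization

        def consolidate(vm_demands_mips):
            # FFD placement: largest VMs first, reuse an existing host if it fits.
            hosts = []
            for demand in sorted(vm_demands_mips, reverse=True):
                for load in hosts:
                    if sum(load) + demand <= HOST_MIPS:
                        load.append(demand)
                        break
                else:
                    hosts.append([demand])
            return hosts

        placement = consolidate([1200, 800, 2500, 600, 1500])
        total_w = sum(host_power(sum(load) / HOST_MIPS) for load in placement)
        print(placement, round(total_w, 1))   # two active hosts instead of five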

    An energy optimization with improved QOS approach for adaptive cloud resources

    In recent times, the use of cloud computing VMs has grown enormously in our day-to-day life owing to the widespread use of digital applications, network appliances, portable gadgets, and information devices. On these cloud computing VMs, numerous different schemes can be implemented, such as multimedia signal processing methods. Efficient performance of these cloud computing VMs therefore becomes an obligatory constraint, particularly for such multimedia signal processing methods. However, high energy consumption and reduced efficiency of these cloud computing VMs are the key issues faced by cloud computing organizations. Therefore, we introduce a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption and performs operations in far less time. We demonstrate an efficient resource allocation and utilization technique that optimizes the model by reducing its different costs, as well as energy optimization techniques that reduce task loads. Our experimental outcomes show the superiority of the proposed ACRR model in terms of average run time, power consumption and average power required over other state-of-the-art techniques.
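
    The ACRR technique itself is not spelled out in the abstract, but the energy-versus-QoS trade-off it targets can be illustrated with a generic sketch: choose the number of active VMs that minimizes a combined cost of energy and expected queueing delay. The arrival and service rates, energy price and QoS penalty weight are made-up assumptions, and the textbook M/M/c waiting-time formula is a stand-in, not the paper's model.

        # Generic energy/QoS cost trade-off, not the ACRR method.
        import math

        def erlang_c(c, a):
            # Probability an arriving request has to wait (M/M/c, offered load a).
            s = sum(a ** k / math.factorial(k) for k in range(c))
            top = a ** c / (math.factorial(c) * (1 - a / c))
            return top / (s + top)

        def total_cost(c, arrival, service, energy_price=0.05, qos_weight=200.0):
            a = arrival / service                       # offered load in "VMs worth" of work
            if a >= c:
                return float("inf")                     # unstable: too few VMs
            mean_wait = erlang_c(c, a) / (c * service - arrival)
            return energy_price * c + qos_weight * mean_wait

        best = min(range(1, 20), key=lambda c: total_cost(c, arrival=40.0, service=5.0))
        print("cheapest number of active VMs:", best)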

    Qos-aware fine-grained power management in networked computing systems

    Power is a major design concern of today's networked computing systems, from low-power battery-powered mobile and embedded systems to high-power enterprise servers. Embedded systems are required to be power efficient because most of them are powered by batteries with limited capacity. A similar concern about power expenditure arises in enterprise server environments due to cooling requirements, power delivery limits, electricity costs and environmental pollution.

    The power consumption of networked computing systems includes that on the circuit board and that for communication. In the context of networked real-time systems, the power dissipation of wireless communication is more significant than that on the circuit board. We focus on packet scheduling for wireless real-time systems with renewable energy resources. In such a scenario, data of higher importance must be transmitted periodically. We formulate this packet scheduling problem as an NP-hard reward maximization problem with time and energy constraints. An optimal solution with pseudo-polynomial time complexity is presented. In addition, we propose a sub-optimal solution with polynomial time complexity.

    Circuit board, and especially processor, power consumption is still the major source of system power consumption. We provide a general-purpose, practical and comprehensive power management middleware for networked computing systems to manage circuit board power consumption and thereby affect system-level power consumption. It has the functionalities of power and performance monitoring, power management (PM) policy selection and PM control, as well as energy efficiency analysis. This middleware includes an extensible PM policy library. We implemented a prototype of this middleware on Base Band Units (BBUs) with three PM policies enclosed. These policies have been validated on different platforms, such as enterprise servers, virtual environments and BBUs.

    In enterprise environments, the power dissipation on the circuit board dominates, and regulation of on-board computing resources has a significant impact on power consumption. Dynamic Voltage and Frequency Scaling (DVFS) is an effective technique to conserve energy. We investigate system-level power management in order to avoid system failures due to power capacity overload or overheating. This management needs to control power consumption in an accurate and responsive manner, which cannot be achieved by existing black-box feedback control. Thus we present a model-predictive feedback controller that regulates processor frequency so that the power budget can be satisfied without significant loss of performance.

    In addition to providing power guarantees alone, performance with respect to service-level agreements (SLAs) must be guaranteed as well. The proliferation of virtualization technology imposes new challenges on power management due to resource sharing, and it is hard to optimize both power and performance on shared infrastructures because of system dynamics. We propose vPnP, a feedback-control based coordination approach providing guarantees on application-level performance and underlying physical host power consumption in virtualized environments. This system can adapt gracefully to workload change. Preliminary results show its flexibility in achieving different levels of trade-off between power and performance as well as its robustness over a variety of workloads.

    Finally, it is desirable to improve the energy efficiency of systems, such as BBUs, hosting soft real-time applications. We propose a power management strategy for controlling delay and minimizing power consumption using DVFS. We use the Robbins-Monro (RM) stochastic approximation method to estimate the delay quantile, and couple a fuzzy controller with the RM algorithm to scale the CPU frequency so that performance stays within the specified QoS.
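
    The last paragraph lends itself to a small worked example. The sketch below shows one standard form of the Robbins-Monro recursion for tracking a delay quantile online (the fuzzy frequency controller that consumes the estimate is not reproduced); the step-size constant, the 95th-percentile target and the synthetic exponential delays are assumptions for illustration.

        # Robbins-Monro online quantile estimation of a delay stream.
        import random

        def rm_quantile(samples, q=0.95, gain=5.0):
            # Update rule: x_{n+1} = x_n + (gain / n) * (q - I[sample <= x_n])
            x = 0.0
            for n, d in enumerate(samples, start=1):
                indicator = 1.0 if d <= x else 0.0
                x += (gain / n) * (q - indicator)
            return x

        # Synthetic delays: exponential with a 10 ms mean; the true 95th
        # percentile is 10 * ln(20), about 30 ms, which the estimate should approach.
        delays = [random.expovariate(1 / 10.0) for _ in range(50000)]
        print(round(rm_quantile(delays), 2))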

    A Approach to Optimal Strategy for Energy Efficiency in Cloud System

    A Cloud is a combination of data centre software and hardware. People may be providers or users of SaaS, or users or providers of Utility Computing. Most of the energy in system devices is squandered because they are built to deal with the worst-case scenario. Different schedulers, such as SJFGC, DENS, and DCEERS, have been reported by different researchers. Green CloudSim reports the total energy utilization of the data centre, broken down across its communication and computing components, in an unprecedented fashion. In this paper, a comparison of the total energy consumed under two scheduling algorithms, Random and RandomDENS, is presented.

    Energy-aware simulation with DVFS

    In recent years, research has been conducted in the area of large-system models, especially distributed systems, to analyze and understand their behavior. Simulators are now commonly used in this area and are becoming more complex. Most of them provide frameworks for simulating application scheduling in various Grid infrastructures, others are specifically developed for modeling networks, but only a few of them simulate energy-efficient algorithms. This article describes which tools need to be implemented in a simulator in order to support energy-aware experimentation. The emphasis is on DVFS simulation, from its implementation in the CloudSim simulator to the whole methodology adopted to validate its functioning. In addition, a scientific application is used as a use case in both experiments and simulations, where the close relationship between DVFS efficiency and hardware architecture is highlighted. A second use case, using Cloud applications represented by DAGs (also a new functionality of CloudSim), demonstrates that DVFS efficiency also depends on the intrinsic middleware behavior.
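
    None of the CloudSim DVFS code is reproduced here, but a toy, self-contained sketch can show the kind of accounting an energy-aware simulator has to perform: replay a CPU utilization trace under two governors and compare the energy each one charges. The P-state list, power model and trace are invented for illustration.

        # Toy energy accounting under two DVFS governors (not CloudSim code).
        FREQS_GHZ = [1.2, 1.8, 2.4, 3.0]                  # hypothetical P-states
        F_MAX = FREQS_GHZ[-1]

        def power_w(freq, busy):
            # Static part plus a dynamic part scaling with f^3 and busy fraction.
            return 30.0 + 40.0 * (freq / F_MAX) ** 3 * busy

        def gov_performance(util):
            return F_MAX                                  # always the top frequency

        def gov_ondemand(util):
            # Lowest frequency that still covers the demand expressed at F_MAX.
            return next((f for f in FREQS_GHZ if f >= util * F_MAX), F_MAX)

        trace = [0.1, 0.2, 0.2, 0.9, 0.6, 0.3, 0.1, 0.8]  # utilization per 1 s step

        for name, gov in (("performance", gov_performance), ("ondemand", gov_ondemand)):
            energy_j = 0.0
            for util in trace:
                f = gov(util)
                busy = min(1.0, util * F_MAX / f)         # busy fraction at chosen frequency
                energy_j += power_w(f, busy) * 1.0        # 1-second time step
            print(name, round(energy_j, 1), "J")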

    On energy consumption of switch-centric data center networks

    The data center network (DCN) is the core of cloud computing and accounts for 40% of the energy spend of the whole data center (DC) facility when compared to the cooling system and power distribution and conversion. It is essential to reduce the energy consumption of the DCN to ensure an energy-efficient (green) data center. An analysis of DC performance and efficiency is presented, emphasizing the effect of bandwidth provisioning and throughput on the energy proportionality of the two most common switch-centric DCN topologies, three-tier (3T) and fat tree (FT), based on the amount of supplied energy that is actually turned into computing power. The energy consumption of switch-centric DCNs is analyzed through realistic simulations using the GreenCloud simulator. Power-related metrics were derived and adapted for the information technology equipment (ITE) processes within the DCN. These metrics are acknowledged as a subset of the major data center metrics of power usage effectiveness (PUE) and data center infrastructure efficiency (DCIE). This study suggests that although FT consumes more energy overall, it spends less energy to transmit a single bit of information, outperforming 3T.
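
    For readers unfamiliar with the metrics named above, the short sketch below shows the arithmetic behind PUE, its reciprocal DCIE, and a network energy-per-bit figure. The kWh and bit counts are invented numbers, not measurements from the study.

        # PUE, DCIE and energy-per-bit arithmetic with made-up inputs.
        def pue(total_facility_kwh, it_kwh):
            return total_facility_kwh / it_kwh               # >= 1, lower is better

        def dcie(total_facility_kwh, it_kwh):
            return it_kwh / total_facility_kwh               # = 1 / PUE

        def energy_per_bit_j(network_kwh, bits_transmitted):
            return network_kwh * 3.6e6 / bits_transmitted    # 1 kWh = 3.6e6 J

        print(round(pue(1500.0, 1000.0), 2))                 # 1.5
        print(round(dcie(1500.0, 1000.0), 2))                # 0.67
        print(energy_per_bit_j(400.0, 1e15))                 # 1.44e-06 J per bit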

    Enforcing CPU allocation in a heterogeneous IaaS

    In an Infrastructure as a Service (IaaS), the amount of resources allocated to a virtual machine (VM) at creation time may be expressed with relative values (relative to the hardware, i.e., a fraction of the capacity of a device) or absolute values (i.e., a performance metric which is independent of the capacity of the hardware). Surprisingly, disk and network resource allocations are expressed with absolute values (bandwidth), but CPU resource allocations are expressed with relative values (a percentage of a processor). The major problem with relative CPU allocations is that they depend on the capacity of the CPU, which may vary due to different factors (server heterogeneity in a cluster, Dynamic Voltage and Frequency Scaling (DVFS)). In this paper, we analyze the side effects and drawbacks of relative allocations. We claim that CPU allocation should be expressed with absolute values. We propose such a CPU resource management system and we demonstrate and evaluate its benefits.
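
    A minimal sketch of the argument, not the authors' system: if a VM's CPU allocation is stored as an absolute capacity (here in MHz), the relative share handed to the hypervisor can be recomputed whenever the host's effective frequency changes, for example after a DVFS transition or a migration to a faster server. The cgroup-style period, the quota helper and all numbers are assumptions.

        # Recompute a relative CPU quota from an absolute MHz allocation.
        PERIOD_US = 100_000                        # scheduling period, cgroup-style

        def quota_us(absolute_mhz, host_mhz):
            # CPU-time quota per period that delivers the same absolute capacity
            # at the host's current operating frequency.
            share = min(1.0, absolute_mhz / host_mhz)
            return int(share * PERIOD_US)

        # The same 800 MHz guarantee needs a larger relative share on a slower host:
        print(quota_us(800, 2400))   # 33333 us on a host running at 2.4 GHz
        print(quota_us(800, 1200))   # 66666 us after DVFS throttles it to 1.2 GHz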