38 research outputs found

    A Review on Various Energy Efficient Techniques in Cloud Environment

    Cloud computing is the web-based development and use of computer technology: a mode of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Scheduling is one of the core steps to efficiently exploit the capabilities of heterogeneous computing systems. On a cloud computing platform, load balancing of the whole system can be handled dynamically using virtualization technology, which makes it possible to remap virtual machines to physical resources as the load changes. However, in order to improve performance, the virtual machines should fully utilize their resources and services by adapting to the computing environment dynamically. Load balancing with correct allocation of resources must be guaranteed in order to improve resource utilization and energy efficiency
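    The remapping idea described above can be sketched in a few lines. This is a minimal illustration, not the algorithm of any surveyed paper: the `Vm` and `Host` classes, the `rebalance` helper, and the 80% load threshold are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Vm:
    name: str
    load: float  # normalized CPU demand in [0, 1]

@dataclass
class Host:
    name: str
    capacity: float
    vms: list = field(default_factory=list)

    @property
    def load(self):
        return sum(vm.load for vm in self.vms)

def rebalance(hosts, threshold=0.8):
    """Migrate VMs off overloaded hosts onto the least-loaded other host
    with room to spare, mimicking virtualization-based remapping."""
    migrations = []
    for src in hosts:
        while src.load > threshold * src.capacity and src.vms:
            vm = min(src.vms, key=lambda v: v.load)   # cheapest move first
            dst = min((h for h in hosts if h is not src), key=lambda h: h.load)
            if dst.load + vm.load > threshold * dst.capacity:
                break   # no target can absorb it without overloading
            src.vms.remove(vm)
            dst.vms.append(vm)
            migrations.append((vm.name, src.name, dst.name))
    return migrations
```

    With one host at 120% load and one empty, two migrations bring both under the threshold.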

    Algorithms, Software Architecture, and an Information Technology for Modelling Speed-Scaling Methods for the Parallel Processors of a Computing Cluster

    This paper proposes speed-scaling algorithms for the parallel processors of multiprocessor systems and computing clusters with homogeneous and heterogeneous architectures. A software architecture and an information technology are developed for simulating the operation of algorithms that use various methods of allocating tasks to processors. Examples and results of experimental studies are given, confirming their effectiveness for constructing energy-efficient schedules for executing tasks with hard deadlines in multiprocessor systems and computing clusters
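    As a toy illustration of why speed scaling saves energy on deadline-constrained tasks, one can use the common modelling convention that dynamic power grows as speed**alpha. The exponent alpha = 3 and all the numbers below are illustrative assumptions, not results from the paper:

```python
def minimal_speed(work, release, deadline):
    """Slowest constant speed that finishes `work` within [release, deadline]."""
    return work / (deadline - release)

def energy(work, speed, alpha=3.0):
    """Energy = power * time = speed**alpha * (work / speed)."""
    return speed ** (alpha - 1) * work

# Running 8 units of work over a 4-second window at the minimal speed 2
# costs far less than sprinting at speed 4 and idling afterwards:
slow = energy(8, minimal_speed(8, 0, 4))   # speed 2 -> 32 energy units
fast = energy(8, 4.0)                      # speed 4 -> 128 energy units
assert slow < fast
```

    Stretching work out to its deadline is exactly what makes speed scaling energy efficient under a convex power model.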

    Minimizing Energy Consumption by Task Consolidation in Cloud Centers with Optimized Resource Utilization

    Cloud computing is an emerging field of computation. Because data centers consume large amounts of power, system overheads increase and carbon dioxide emissions rise drastically. The main aim is to maximize resource utilization while minimizing power consumption. However, the greatest usage of resources does not imply the right use of energy: idle resources also consume a significant amount of energy, so the number of idle resources must be kept to a minimum. Current studies have shown that power consumption due to unused computing resources amounts to nearly 1 to 20%. Unused resources are therefore assigned tasks so that their idle periods are utilized. This paper proposes energy saving through task consolidation, which saves energy by minimizing the number of idle resources in a cloud computing environment. Extensive experiments were carried out to quantify the performance of the proposed algorithm, which was also compared with the FCFSMaxUtil and Energy-aware Task Consolidation (ETC) algorithms. The outcomes show that the proposed algorithm surpasses FCFSMaxUtil and ETC in terms of CPU utilization and energy consumption
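    The consolidation idea can be sketched as a simple first-fit-decreasing packing: fill as few hosts as possible so the rest can be powered down. This is an illustrative baseline, not the algorithm proposed in the paper; the 70% utilization ceiling is an assumed safety margin.

```python
def consolidate(task_loads, cap=70):
    """Pack task loads (CPU %) onto the fewest hosts, first-fit decreasing,
    keeping each host at or below `cap` percent utilization."""
    hosts = []
    for load in sorted(task_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= cap:
                host.append(load)   # reuse an already-active host
                break
        else:
            hosts.append([load])    # open a new host only when forced to
    return hosts

# Six small tasks end up on two active hosts instead of six:
placement = consolidate([30, 25, 20, 30, 15, 10])
```

    Every host left out of `placement` stays idle and can be switched off, which is where the energy saving comes from.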

    Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. Therefore, we need Green Cloud computing solutions that not only save energy for the environment but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between the various data center infrastructures (i.e., hardware, power units, cooling, and software) and work holistically to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider quality-of-service expectations and the power usage characteristics of devices; and (c) a novel software technology for energy-efficient management of Clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the Cloud computing model has immense potential, offering significant gains in response time and cost savings under dynamic workload scenarios.
    Comment: 12 pages, 5 figures. Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 2010
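    One concrete instance of the energy-efficient allocation policies this line of work studies is a power-aware best-fit heuristic: place each VM on the host whose estimated power draw rises the least. The sketch below is illustrative only; the linear power model and all numbers are assumptions, not figures from the paper.

```python
def power(util, p_idle, p_max):
    """Estimated host power (W) under a linear utilization model."""
    return p_idle + (p_max - p_idle) * util

def allocate(vm_util, hosts):
    """hosts: name -> (utilization, p_idle, p_max). Return the feasible
    host whose power draw increases least when the VM is added."""
    best, best_delta = None, float("inf")
    for name, (util, p_idle, p_max) in hosts.items():
        if util + vm_util > 1.0:
            continue  # host would be overloaded; skip it
        delta = power(util + vm_util, p_idle, p_max) - power(util, p_idle, p_max)
        if delta < best_delta:
            best, best_delta = name, delta
    return best

# A more power-proportional machine absorbs the VM more cheaply:
hosts = {"old": (0.2, 100.0, 300.0), "new": (0.2, 60.0, 200.0)}
```

    Here `allocate(0.3, hosts)` prefers the "new" machine, whose 140 W dynamic range makes the same 30% load increment cheaper than on the 200 W-range "old" machine.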

    Energy-Aware Server Provisioning by Introducing Middleware-Level Dynamic Green Scheduling

    Several approaches to reducing the power consumption of datacenters have been described in the literature, most of which aim to improve energy efficiency by trading off performance against power consumption. However, these approaches do not always give administrators and users the means to specify how they want to explore such trade-offs. This work provides techniques for assigning jobs to distributed resources, exploring energy-efficient resource provisioning. We use middleware-level mechanisms to adapt resource allocation according to energy-related events and user-defined rules. The proposed framework enables developers, users, and system administrators to specify and explore energy-efficiency and performance trade-offs without detailed knowledge of the underlying hardware platform. Evaluation of the proposed solution under three scheduling policies shows gains of 25% in energy efficiency with minimal impact on overall application performance. We also evaluate the reactivity of the adaptive resource provisioning
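    The middleware-level mechanism of reacting to energy events through user-defined rules could look roughly like the sketch below. Everything here (class name, event shape, the price-spike rule) is hypothetical and only illustrates the event-plus-rule pattern, not the paper's framework.

```python
class GreenMiddleware:
    """Toy rule engine: rules fire on energy-related events and adapt
    scheduling knobs without exposing the underlying hardware platform."""

    def __init__(self):
        self.rules = []           # list of (predicate, action) pairs
        self.cpu_frequency = 1.0  # normalized scheduling knob

    def on(self, predicate, action):
        self.rules.append((predicate, action))

    def publish(self, event):
        for predicate, action in self.rules:
            if predicate(event):
                action(self, event)

mw = GreenMiddleware()
# User-defined rule: when the electricity price spikes, throttle to 60%.
mw.on(lambda e: e["type"] == "price_spike",
      lambda m, e: setattr(m, "cpu_frequency", 0.6))
mw.publish({"type": "price_spike", "price": 0.42})
```

    The point of routing everything through `publish` is that administrators can add or change rules without touching scheduler or platform code.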

    BSLD threshold driven parallel job scheduling for energy efficient HPC centers

    Recently, power awareness in the high-performance computing (HPC) community has increased significantly. While CPU power reduction for HPC applications using Dynamic Voltage and Frequency Scaling (DVFS) has been explored thoroughly, system-level CPU power management for large-scale parallel systems has been left unexplored. In this paper we propose a power-aware parallel job scheduler that assumes DVFS-enabled clusters. Traditional parallel job schedulers determine when a job will run; power-aware ones should also assign the CPU frequency at which it will run. We introduce two adjustable thresholds to enable fine-grained control of the energy-performance trade-off. Since our power reduction approach is policy independent, it can be added to any parallel job scheduling policy. Furthermore, we analyze HPC system dimensioning: running an application at a lower frequency on more processors can be more energy efficient than running it at the highest CPU frequency on fewer processors. This paper investigates whether having more DVFS-enabled processors under the same load can lead to better energy efficiency and performance. Five workload logs from systems in production use, with up to 9,216 processors, are simulated to evaluate the proposed algorithm and the dimensioning problem. Our approach decreases CPU energy by 7%-18% on average, depending on the allowed job performance penalty. Applying the same frequency scaling algorithm on a 20% larger system, the CPU energy needed to execute the same load can be decreased by almost 30% while maintaining the same or better job performance
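    The threshold-driven idea can be sketched as: at dispatch time, pick the lowest frequency whose predicted bounded slowdown (BSLD) still stays under a tunable threshold. The frequency levels, the 10-second clamp for short jobs, and the function names below are illustrative assumptions, not the paper's exact formulation.

```python
FREQS = [1.2, 1.6, 2.0, 2.4]  # GHz; assumed DVFS levels, 2.4 is nominal

def bsld(wait, runtime, bound=10.0):
    """Bounded slowdown: (wait + run) / run, clamped for very short jobs."""
    return max((wait + runtime) / max(runtime, bound), 1.0)

def pick_frequency(wait, runtime_at_nominal, threshold):
    """Lowest frequency whose slowed-down run keeps BSLD <= threshold."""
    for f in FREQS:  # ascending, so the first feasible level is the greenest
        slowed = runtime_at_nominal * (FREQS[-1] / f)
        if bsld(wait, slowed) <= threshold:
            return f
    return FREQS[-1]  # threshold unreachable: fall back to nominal speed
```

    A job that has not waited can afford the lowest frequency, while a job that already queued for a long time is dispatched at nominal speed to protect its slowdown, which is exactly the energy-performance trade-off the threshold controls.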

    A Cloud-Computing-Based Data Placement Strategy in High-Speed Railway

    As an important component of China’s transportation data sharing system, high-speed railway data sharing is a typical application of data-intensive computing. Currently, most high-speed railway data is shared in a cloud computing environment, so there is an urgent need for an effective cloud-computing-based data placement strategy for high-speed railway. In this paper, a new data placement strategy, named the hierarchical structure data placement strategy, is proposed. The proposed method combines the semidefinite programming algorithm with the dynamic interval mapping algorithm. The semidefinite programming algorithm is suitable for placing files with multiple replications, ensuring that different replications of a file are placed on different storage devices, while the dynamic interval mapping algorithm gives the data storage system better self-adaptability. A hierarchical data placement strategy is proposed for large-scale networks. The paper provides a new theoretical analysis, compares it with several previous data placement approaches, and demonstrates its efficacy in several experiments
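    A rough sketch of the interval-mapping ingredient, under assumptions of our own (the function names, weights, and salting trick for distinct replicas are illustrative, not the paper's construction): each storage device owns a sub-interval of [0, 1) proportional to its capacity weight, and a file key is hashed into [0, 1) to find its device. Re-weighting only shifts interval boundaries, which is what gives the scheme its adaptability.

```python
import hashlib
from bisect import bisect_right

def build_intervals(weights):
    """Cumulative right edges of each device's interval, in device order."""
    total = sum(weights.values())
    edges, devices, acc = [], [], 0.0
    for dev, w in sorted(weights.items()):
        acc += w / total
        edges.append(acc)
        devices.append(dev)
    return edges, devices

def place(key, edges, devices):
    """Hash the key into [0, 1) and return the owning device."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    x = h / 2 ** 256
    return devices[min(bisect_right(edges, x), len(devices) - 1)]

def place_replicas(key, edges, devices, n=3):
    """Place n replicas on n distinct devices by salting the key."""
    chosen, salt = [], 0
    while len(chosen) < min(n, len(devices)):
        dev = place(f"{key}#{salt}", edges, devices)
        if dev not in chosen:
            chosen.append(dev)
        salt += 1
    return chosen
```

    For weights {"disk-a": 2, "disk-b": 1, "disk-c": 1}, disk-a owns half of [0, 1) and the others a quarter each, so it receives roughly twice the files.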

    Cloud Computing and Dependency: An ERA of Computing

    Cloud Computing offers an entirely new way of looking at IT infrastructure. Cloud Computing systems fundamentally provide access to large pools of data and computational resources through a variety of interfaces, similar in spirit to existing grid computing systems. Cloud Computing eliminates an up-front commitment by users, allowing agencies to start small and increase hardware resources only when their needs grow. Moreover, cloud computing provides the ability to pay for the use of computing resources on a short-term basis as needed, and to release them when no longer needed. In this paper we focus on the architecture, types of cloud services, characteristics, advantages and disadvantages, and security of cloud computing