145,966 research outputs found

    Energy Demand Response for High-Performance Computing Systems

    The growing computational demand of scientific applications has greatly motivated the development of large-scale high-performance computing (HPC) systems in the past decade. To accommodate the increasing demands of applications, HPC systems have undergone dramatic architectural changes (e.g., the introduction of multi-core and many-core processors, and the rapid growth of complex interconnection networks for efficient communication between thousands of nodes) as well as a significant increase in size (e.g., modern supercomputers consist of hundreds of thousands of nodes). With such changes in architecture and size, the energy consumption of these systems has increased significantly. With the advent of exascale supercomputers in the next few years, power consumption of HPC systems will increase further; some systems may even consume hundreds of megawatts of electricity. Demand response programs are designed to help energy service providers stabilize the power grid by reducing the energy consumption of participating systems during periods of peak demand or temporary shortages in power supply. This dissertation focuses on developing energy-efficient demand-response models and algorithms to enable HPC systems' participation in demand response. In the first part, we present interconnection network models for performance prediction of large-scale HPC applications. They are based on interconnect topologies widely used in HPC systems: dragonfly, torus, and fat-tree. Our interconnect models are fully integrated with an implementation of the message-passing interface (MPI) that can mimic most of its functions with packet-level accuracy. Extensive experiments show that our integrated models provide good accuracy for predicting network behavior while allowing for good parallel scaling performance. In the second part, we present an energy-efficient demand-response model to reduce HPC systems' energy consumption during demand response periods. We propose HPC job scheduling and resource provisioning schemes to enable HPC systems' participation in emergency demand response. In the final part, we propose an economic demand-response model that allows both HPC operators and HPC users to jointly reduce an HPC system's energy cost. Our proposed model enables the participation of HPC systems in economic demand-response programs through a contract-based rewarding scheme that incentivizes HPC users to participate in demand response.
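    The abstract does not detail the scheduling schemes, but the core idea of emergency demand response, keeping the machine under a reduced power cap while still making scheduling progress, can be illustrated with a small sketch. The Job fields, the per-job power estimates, and the priority-per-kilowatt heuristic below are illustrative assumptions, not the dissertation's actual algorithm.

```python
# A minimal sketch, assuming per-job power estimates are available.
# Job fields and the priority-per-kW heuristic are illustrative,
# not the dissertation's actual scheduling scheme.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int            # nodes requested
    est_power_kw: float   # estimated draw at full speed (assumed known)
    priority: float       # site-defined priority score

def schedule_under_cap(queue, free_nodes, power_cap_kw, current_draw_kw):
    """Start queued jobs that fit both the node budget and the DR power cap."""
    started = []
    headroom = power_cap_kw - current_draw_kw
    # Highest priority per kW first: a simple knapsack-style greedy heuristic.
    for job in sorted(queue, key=lambda j: j.priority / j.est_power_kw, reverse=True):
        if job.nodes <= free_nodes and job.est_power_kw <= headroom:
            started.append(job)
            free_nodes -= job.nodes
            headroom -= job.est_power_kw
    return started

# During a demand-response event the cap might drop from, say, 500 kW to 300 kW.
queue = [Job("cfd", 64, 120.0, 0.9), Job("md", 32, 55.0, 0.7), Job("ml", 128, 260.0, 0.95)]
print([j.name for j in schedule_under_cap(queue, free_nodes=160,
                                          power_cap_kw=300.0, current_draw_kw=80.0)])
# -> ['md', 'cfd']; the 260 kW job waits until the event ends.
```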

    A Study Resource Optimization Techniques Based Job Scheduling in Cloud Computing

    Cloud computing has revolutionized the way businesses and individuals utilize computing resources. It offers on-demand access to a vast pool of virtualized resources, such as processing power, storage, and networking, through the Internet. One of the key challenges in cloud computing is efficiently scheduling jobs to maximize resource utilization and minimize costs. Job scheduling in cloud computing involves allocating tasks or jobs to available resources in an optimal manner. The objective is to minimize job completion time, maximize resource utilization, and meet various performance metrics such as response time, throughput, and energy consumption. Resource optimization techniques play a crucial role in achieving these objectives. They aim to efficiently allocate resources to jobs, taking into account factors like resource availability, job priorities, and constraints, and they utilize various algorithms and optimization approaches to make intelligent decisions about resource allocation. Research on resource optimization techniques for job scheduling in cloud computing is significant for the following reasons:
    - Efficient resource utilization: Cloud computing environments consist of a large number of resources that must be utilized effectively to maximize cost savings and overall system performance. By optimizing job scheduling, researchers can develop algorithms and techniques that ensure efficient utilization of resources, leading to improved productivity and reduced costs.
    - Performance improvement: Job scheduling plays a crucial role in meeting performance metrics such as response time, throughput, and reliability. By designing intelligent scheduling algorithms, researchers can improve overall system performance, leading to a better user experience and greater customer satisfaction.
    - Scalability: Cloud computing environments are highly scalable, allowing users to dynamically scale resources based on their needs. Effective job scheduling techniques enable efficient resource allocation and scaling, ensuring that the system can handle varying workloads without compromising performance.
    - Energy efficiency: Cloud data centres consume significant amounts of energy, and optimizing resource allocation can contribute to energy conservation. By scheduling jobs intelligently, researchers can reduce energy consumption, leading to environmental benefits and cost savings for cloud service providers.
    - Quality of service (QoS): Cloud service providers often have service-level agreements (SLAs) that define the QoS requirements expected by users. Resource optimization techniques for job scheduling can help meet these SLAs by allocating resources to jobs in a timely manner, meeting performance guarantees, and maintaining high service availability.
    In this research, we use the weighted product model (WPM) to calculate the values of the alternatives and evaluation parameters for resource-optimization-based job scheduling in cloud computing. WPM is a variation of the weighted sum model (WSM), proposed to address some of the weaknesses of the WSM that preceded it; the main distinction is that multiplication is used in place of addition. The terms "scoring methods" are frequently used to describe both WSM and WPM. The evaluation criteria are the execution time on a virtual machine, the transmission time (delay) on a virtual machine, and the processing cost of a task on a virtual machine. Resource optimization techniques based on job scheduling play a crucial role in maximizing the efficiency and performance of cloud computing systems: by effectively managing and allocating resources, these techniques help minimize costs, reduce energy consumption, and improve overall system throughput. One of the key findings is that intelligent job scheduling algorithms, such as genetic algorithms, ant colony optimization
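    To make the WPM concrete: each alternative's score is the product of its criterion values raised to the criterion weights, with cost criteria (such as the three named above) entering with negative exponents so that smaller values score higher. The VM figures and weights in this sketch are invented for illustration; the paper's actual data and weights are not reproduced here.

```python
# A minimal sketch of WPM ranking over the three cost criteria named in the
# abstract; all numbers and weights below are assumptions for illustration.
criteria = ["exec_time_s", "transmission_delay_s", "processing_cost_usd"]
weights = [0.5, 0.3, 0.2]      # assumed criterion weights, summing to 1
is_cost = [True, True, True]   # all three criteria are minimized

alternatives = {
    "VM1": [120.0, 0.8, 0.045],
    "VM2": [95.0, 1.1, 0.060],
    "VM3": [140.0, 0.6, 0.038],
}

def wpm_score(values):
    # Multiplication (not the WSM's addition) combines the criteria;
    # cost criteria get a negative exponent so smaller values score higher.
    score = 1.0
    for x, w, cost in zip(values, weights, is_cost):
        score *= x ** (-w if cost else w)
    return score

for vm in sorted(alternatives, key=lambda a: wpm_score(alternatives[a]), reverse=True):
    print(vm, round(wpm_score(alternatives[vm]), 4))
```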

    Software-Defined Cloud Computing: Architectural Elements and Open Challenges

    The variety of existing cloud services creates a challenge for service providers to enforce reasonable Service Level Agreements (SLAs) stating the Quality of Service (QoS) and penalties in case QoS is not achieved. To avoid such penalties while the infrastructure operates with minimum energy and resource wastage, constant monitoring and adaptation of the infrastructure is needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach for automating the process of optimal cloud configuration by extending the virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of physical resources in a cloud infrastructure, to better accommodate the demand on QoS, through software that can describe and manage the various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs on data centers with emphasis on mobile cloud applications. We present an evaluation showcasing the potential of SDC in two use cases, QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement, and discuss the research challenges and opportunities in this emerging area. Comment: Keynote Paper, 3rd International Conference on Advances in Computing, Communications and Informatics (ICACCI 2014), September 24-27, 2014, Delhi, India
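    As a rough illustration of the second use case, bandwidth-aware, energy-efficient VM placement can be sketched as a greedy policy: filter hosts by the VM's bandwidth guarantee, then pick the feasible host whose power draw increases the least. The linear power model and the Host fields below are assumptions, not the paper's implementation.

```python
# A minimal sketch, assuming a linear host power model; the Host fields,
# the model, and the greedy rule are assumptions, not the paper's code.
from dataclasses import dataclass

@dataclass
class Host:
    cpu_free: float        # free CPU capacity, as a fraction of the host
    bw_free_mbps: float    # free uplink bandwidth for guarantees
    util: float = 0.0      # current CPU utilization in [0, 1]
    p_idle: float = 120.0  # idle power in watts
    p_max: float = 300.0   # power at full utilization

    def power(self, util):
        return self.p_idle + (self.p_max - self.p_idle) * util

def place(vm_cpu, vm_bw_mbps, hosts):
    """Pick the feasible host whose power draw rises the least."""
    best, best_delta = None, float("inf")
    for h in hosts:
        if h.cpu_free < vm_cpu or h.bw_free_mbps < vm_bw_mbps:
            continue  # the bandwidth guarantee (QoS) filters hosts first
        delta = h.power(h.util + vm_cpu) - h.power(h.util)
        if delta < best_delta:
            best, best_delta = h, delta
    if best is not None:
        best.cpu_free -= vm_cpu
        best.bw_free_mbps -= vm_bw_mbps
        best.util += vm_cpu
    return best

hosts = [Host(cpu_free=0.6, bw_free_mbps=400.0),
         Host(cpu_free=0.9, bw_free_mbps=150.0, p_max=250.0)]
place(vm_cpu=0.25, vm_bw_mbps=200.0, hosts=hosts)  # lands on the first host
```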

    On the feasibility of collaborative green data center ecosystems

    The increasing awareness of the impact of the IT sector on the environment, together with economic factors, has fueled many research efforts to reduce the energy expenditure of data centers. Recent work proposes to achieve additional energy savings by exploiting service workloads in concert with customers, and to reduce data centers' carbon footprints by adopting demand-response mechanisms between data centers and their energy providers. In this paper, we discuss the incentives that customers and data centers have to adopt such measures and propose a new service type and pricing scheme that is economically attractive and technically realizable. Simulation results based on real measurements confirm that our scheme can achieve additional energy savings while preserving service performance and the interests of data centers and customers.
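    The abstract does not specify the pricing scheme, but the shape of such an incentive can be sketched as a rebate per kilowatt-hour that a customer allows the data center to shift out of high-demand periods. The rates and the rebate rule below are invented for illustration, not the paper's scheme.

```python
# A minimal sketch of a rebate-style tariff for a deferrable service class;
# base_rate and rebate_rate are made-up numbers, not the paper's values.
def flexible_price(kwh_total, kwh_shifted, base_rate=0.12, rebate_rate=0.05):
    """Customers who let the data center defer part of their workload's
    energy out of high-demand periods earn a rebate per shifted kWh."""
    return kwh_total * base_rate - kwh_shifted * rebate_rate

print(flexible_price(kwh_total=1000, kwh_shifted=250))  # 120.0 - 12.5 = 107.5
```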

    Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon emissions. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., hardware, power units, cooling, and software) and work holistically to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider quality-of-service expectations and devices' power usage characteristics; and (c) a novel software technology for energy-efficient management of Clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the Cloud computing model has immense potential, as it offers significant performance gains in terms of response time and cost savings under dynamic workload scenarios. Comment: 12 pages, 5 figures, Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 2010
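    One common building block of such energy-efficient allocation policies is consolidation: migrating VMs off lightly loaded hosts so those hosts can be powered down. The utilization threshold, the first-fit target selection, and the host representation in this sketch are assumptions, not the paper's validated policy.

```python
# A minimal sketch of a consolidation pass, assuming a plain-dict host
# representation; the threshold and first-fit rule are illustrative only.
LOW_UTIL = 0.3  # assumed threshold below which a host is worth draining

def consolidate(hosts):
    """Migrate VMs off lightly loaded hosts so those hosts can sleep."""
    active = [h for h in hosts if h["vms"]]
    for src in [h for h in active if h["util"] < LOW_UTIL]:
        for vm in list(src["vms"]):
            # First-fit target among the other active hosts with headroom.
            dst = next((h for h in active
                        if h is not src and h["util"] + vm["cpu"] <= 1.0), None)
            if dst is None:
                break  # nowhere to put this VM; keep the host awake
            src["vms"].remove(vm); src["util"] -= vm["cpu"]
            dst["vms"].append(vm); dst["util"] += vm["cpu"]
        if not src["vms"]:
            src["asleep"] = True  # candidate for power-off
    return hosts
```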