
    Adaptive Power and Resource Management Techniques for Multithreaded Workloads

    Abstract-As today's computing trends are moving towards the cloud, meeting the increasing computational demand while minimizing energy costs in data centers has become essential. This work introduces two adaptive techniques to reduce the energy consumption of computing clusters through power and resource management on multi-core processors. We first present a novel power capping technique to constrain the power consumption of computing nodes. Our technique combines Dynamic Voltage-Frequency Scaling (DVFS) and thread allocation on multi-core systems. By utilizing machine learning techniques, our power capping method is able to meet the power budgets 82% of the time without requiring any power measurement device, and reduces energy consumption by 51.6% on average in comparison to state-of-the-art techniques. We then introduce an autonomous resource management technique for consolidated multi-threaded workloads running on multi-core servers. Our technique first classifies applications according to their energy efficiency measure, then proportionally allocates resources to co-scheduled applications to improve energy efficiency. The proposed technique improves energy efficiency by 17% in comparison to state-of-the-art co-scheduling policies.
    I. INTRODUCTION Energy-related costs are among the major contributors to the total cost of ownership of today's data centers and high performance computing (HPC) clusters. Therefore, future computing clusters must be energy-efficient in order to meet the continuously increasing computational demand. Moreover, the administration and management of data center resources have become significantly complex due to the increasing number of servers installed in data centers. Therefore, designing autonomous techniques to optimally manage the limited data center resources is essential to achieve sustainability in the cloud era. 
The achievable maximum performance of a computing cluster is determined by (1) infrastructural/cost limitations (e.g., power delivery, cooling capacity, electricity cost) and/or (2) available hardware resources (e.g., CPU, disk size). Optimizing performance under such constraints (i.e., power, resources) is critically important to improve energy efficiency and thereby reduce the cost of computing. Moreover, the emergence of multi-threaded applications on cloud resources brings additional challenges for optimizing the performance-energy tradeoffs under resource constraints, due to their complex characteristics such as performance scalability and inter-core communication. In this work, we present two adaptive management techniques for multi-threaded workloads to improve the energ
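The power-capping step described in this abstract can be pictured as a simple selection problem: from a set of (frequency, thread-count) configurations with predicted power and performance, choose the best-performing one that fits the budget. The sketch below uses a hand-made lookup table standing in for the paper's learned model; the table values and the `pick_config` helper are illustrative assumptions, not the authors' code.

```python
# Hypothetical model-driven power-capping step. A real system would
# replace PREDICTIONS with a trained power/performance predictor.

# (freq_GHz, threads) -> (predicted_power_W, predicted_performance)
PREDICTIONS = {
    (1.2, 4): (45.0, 0.40), (1.2, 8): (60.0, 0.55),
    (2.0, 4): (70.0, 0.70), (2.0, 8): (95.0, 1.00),
}

def pick_config(budget_w):
    """Return the best-performing configuration predicted to stay
    under the power budget; fall back to the lowest-power one."""
    feasible = [(cfg, perf) for cfg, (w, perf) in PREDICTIONS.items()
                if w <= budget_w]
    if not feasible:
        # Fail-safe: no configuration fits, take the lowest-power one.
        return min(PREDICTIONS, key=lambda c: PREDICTIONS[c][0])
    return max(feasible, key=lambda cp: cp[1])[0]

print(pick_config(80.0))  # under an 80 W cap, (2.0, 8) at 95 W is excluded
```

A closed-loop controller would re-run this selection periodically as the workload's measured behavior diverges from the predictions.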

    Location Based Power Reduction Cloud Integrated Social Sensor Network

    Advances in wireless sensor networks, combined with cloud computing for data analysis and storage, have opened up possibilities across many fields, including infrastructure tracking, environmental monitoring, and healthcare. The social sensor cloud adds a further dimension to this technology landscape by focusing on knowledge sharing and connecting like-minded individuals or organizations, potentially enabling more collaborative and efficient solutions across a wide range of domains. Energy efficiency is a critical consideration in the design and operation of wireless sensor networks and the cloud infrastructure that supports them: the limited battery life of sensors necessitates careful management of energy consumption to ensure optimal functionality and longevity. Sleep scheduling is a common technique for managing energy consumption in these networks; by coordinating when sensors are active and when they are in a low-power sleep mode, energy consumption can be significantly reduced without compromising the network's overall effectiveness. In the context of the Social Sensor Cloud, managing energy efficiency becomes even more crucial due to the shorter battery life of the sensors involved, which is particularly relevant given growing concerns about environmental sustainability and the need to reduce energy consumption across technological systems. This paper addresses these challenges by exploring energy-efficient techniques for the Social Sensor Cloud, with sleep scheduling as one of several strategies for balancing functionality and energy consumption.
Other methods include optimizing data transfer protocols, developing energy-harvesting mechanisms, and enhancing sensor hardware efficiency. As technology continues to evolve, the integration of wireless sensor networks, cloud computing, and social networks will likely pave the way for innovative solutions and transformative applications, and addressing energy efficiency concerns will play a crucial role in ensuring the long-term viability and positive impact of these technologies.
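The effect of the sleep scheduling idea mentioned above can be shown with back-of-envelope arithmetic: average power falls roughly linearly with the fraction of time a node stays awake, so node lifetime grows sharply as the duty cycle shrinks. The power and battery figures below are invented for illustration, not taken from the paper.

```python
# Duty-cycled sleep scheduling: mean power and resulting node lifetime.
# All numbers (60 mW active, 0.2 mW sleep, 2000 mWh battery) are
# illustrative assumptions.

def avg_power_mw(active_mw, sleep_mw, duty_cycle):
    """Mean power for a node awake `duty_cycle` fraction of each period."""
    return duty_cycle * active_mw + (1.0 - duty_cycle) * sleep_mw

def lifetime_hours(battery_mwh, active_mw, sleep_mw, duty_cycle):
    """Battery capacity divided by mean draw gives expected lifetime."""
    return battery_mwh / avg_power_mw(active_mw, sleep_mw, duty_cycle)

# Always-on vs. a 10% duty cycle:
always_on = lifetime_hours(2000.0, 60.0, 0.2, 1.0)   # ~33 h
scheduled = lifetime_hours(2000.0, 60.0, 0.2, 0.1)   # ~324 h
print(always_on, scheduled)
```

The roughly tenfold lifetime gain is why sleep scheduling is the first lever reached for in battery-constrained sensor clouds, before protocol or hardware optimizations.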

    A Study Resource Optimization Techniques Based Job Scheduling in Cloud Computing

    Cloud computing has revolutionized the way businesses and individuals utilize computing resources. It offers on-demand access to a vast pool of virtualized resources, such as processing power, storage, and networking, through the Internet. One of the key challenges in cloud computing is efficiently scheduling jobs to maximize resource utilization and minimize costs. Job scheduling in cloud computing involves allocating tasks or jobs to available resources in an optimal manner. The objective is to minimize job completion time, maximize resource utilization, and meet various performance metrics such as response time, throughput, and energy consumption. Resource optimization techniques play a crucial role in achieving these objectives. Resource optimization techniques aim to efficiently allocate resources to jobs, taking into account factors like resource availability, job priorities, and constraints. These techniques utilize various algorithms and optimization approaches to make intelligent decisions about resource allocation. Research on resource optimization techniques for job scheduling in cloud computing is of significant importance due to the following reasons: Efficient Resource Utilization: Cloud computing environments consist of a large number of resources that need to be utilized effectively to maximize cost savings and overall system performance. By optimizing job scheduling, researchers can develop algorithms and techniques that ensure efficient utilization of resources, leading to improved productivity and reduced costs. Performance Improvement: Job scheduling plays a crucial role in meeting performance metrics such as response time, throughput, and reliability. By designing intelligent scheduling algorithms, researchers can improve the overall system performance, leading to better user experience and customer satisfaction. 
Scalability: Cloud computing environments are highly scalable, allowing users to dynamically scale resources based on their needs. Effective job scheduling techniques enable efficient resource allocation and scaling, ensuring that the system can handle varying workloads without compromising performance. Energy Efficiency: Cloud data centres consume significant amounts of energy, and optimizing resource allocation can contribute to energy conservation. By scheduling jobs intelligently, researchers can reduce energy consumption, leading to environmental benefits and cost savings for cloud service providers. Quality of Service (QoS): Cloud computing service providers often have service-level agreements (SLAs) that define the QoS requirements expected by users. Resource optimization techniques for job scheduling can help meet these SLAs by ensuring that jobs are allocated resources in a timely manner, meeting performance guarantees, and maintaining high service availability. In this research, we use the weighted product model (WPM) to calculate the values of the alternatives and evaluation parameters for resource optimization techniques based job scheduling in cloud computing. The weighted product method (WPM) is a variation of the weighted sum model (WSM), proposed to address some of the weaknesses of the WSM that came before it; the main distinction is that multiplication is used in place of addition. WSM and WPM are frequently described as "scoring methods". The evaluation parameters considered are the execution time of a task on a virtual machine, the transmission time (delay) on a virtual machine, and the processing cost of a task on a virtual machine. Resource optimization techniques based on job scheduling play a crucial role in maximizing the efficiency and performance of cloud computing systems: by effectively managing and allocating resources, these techniques help minimize costs, reduce energy consumption, and improve overall system throughput. 
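The WPM scoring just described can be made concrete in a few lines: each alternative's score is a product of criterion values raised to signed weights, with cost criteria (like the execution time, delay, and cost parameters above) taking negative exponents so that lower is better. The weights and per-VM values below are invented for illustration; the paper's actual data is not reproduced here.

```python
# Weighted Product Model (WPM) ranking of VM scheduling alternatives.
# Criterion values and weights are illustrative assumptions.

def wpm_score(values, weights, benefit):
    """Score one alternative: product of value**(+w) for benefit
    criteria and value**(-w) for cost criteria."""
    score = 1.0
    for v, w, b in zip(values, weights, benefit):
        score *= v ** (w if b else -w)
    return score

# Criteria: execution time (s), transmission delay (s), processing cost ($).
weights = [0.5, 0.3, 0.2]
benefit = [False, False, False]          # all three are cost criteria
alternatives = {
    "VM1": [12.0, 3.0, 0.08],
    "VM2": [10.0, 5.0, 0.10],
    "VM3": [15.0, 2.0, 0.05],
}
ranked = sorted(alternatives,
                key=lambda a: wpm_score(alternatives[a], weights, benefit),
                reverse=True)
print(ranked)  # best alternative first
```

Because WPM multiplies ratios rather than summing weighted values, it is dimensionless and insensitive to the units of each criterion, which is the main weakness of WSM it addresses.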
One of the key findings is that intelligent job scheduling algorithms, such as genetic algorithms, ant colony optimization

    Green Approach for Joint Management of Geo-Distributed Data Centers and Interconnection Networks

    Every time an Internet user downloads a video, shares a picture, or sends an email, his/her device addresses a data center, and often several of them. These complex systems feed the web and all Internet applications with their computing power and information storage, but they are very energy hungry. The energy consumed by Information and Communication Technology (ICT) infrastructures currently accounts for more than 4% of worldwide consumption and is expected to double in the next few years. Data centers and communication networks are responsible for a large portion of the ICT energy consumption, which has stimulated in recent years a research effort to reduce or mitigate their environmental impact. Most of the proposed approaches tackle the problem by separately optimizing the power consumption of the servers in data centers and of the network. However, the Cloud computing infrastructure of most providers, which includes traditional telcos that are extending their offer, is rapidly evolving toward geographically distributed data centers strongly integrated with the network interconnecting them. Distributed data centers not only bring services closer to users with better quality, but also provide opportunities to improve energy efficiency by exploiting the variation of prices in different time zones, locally generated green energy, and the storage systems that are becoming popular in energy networks. In this paper, we propose an energy-aware joint management framework for geo-distributed data centers and their interconnection network. The model is based on virtual machine migration and formulated using mixed integer linear programming. It can be solved using state-of-the-art solvers such as CPLEX in reasonable time. The proposed approach covers various aspects of Cloud computing systems and jointly manages the use of green and brown energy using energy storage technologies. 
The obtained results show that significant energy cost savings can be achieved compared to a baseline strategy, in which data centers do not collaborate to reduce energy and do not use the power coming from renewable resources
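The intuition behind the joint formulation, placing load where green energy or cheap electricity is available, can be sketched with a toy greedy placement. This is not the paper's MILP (which additionally models migration and the interconnection network); the site names, capacities, and prices are invented, and green power is crudely modeled as free while capacity remains.

```python
# Toy stand-in for energy-aware VM placement across geo-distributed
# data centres: each VM goes to the site with the cheapest marginal
# energy cost, consuming free green capacity before brown power.

def place_vms(n_vms, sites):
    """sites: {name: [green_capacity_in_vms, brown_price]}.
    Mutates the green capacities as VMs are placed."""
    placement = []
    for _ in range(n_vms):
        # Marginal cost is 0 while a site still has green capacity,
        # otherwise its brown electricity price.
        best = min(sites, key=lambda s: 0.0 if sites[s][0] > 0 else sites[s][1])
        placement.append(best)
        if sites[best][0] > 0:
            sites[best][0] -= 1
    return placement

sites = {"EU": [2, 0.30], "US": [0, 0.10], "ASIA": [1, 0.20]}
print(place_vms(5, sites))  # green slots first, then the cheapest brown site
```

A MILP, as used in the paper, would instead optimize all placements and migrations jointly, which matters once network transfer costs and storage couple the per-site decisions.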

    Edge and Central Cloud Computing: A Perfect Pairing for High Energy Efficiency and Low-latency

    In this paper, we study the coexistence and synergy between edge and central cloud computing in a heterogeneous cellular network (HetNet), which contains a multi-antenna macro base station (MBS), multiple multi-antenna small base stations (SBSs), and multiple single-antenna user equipment (UEs). The SBSs are empowered by edge clouds offering limited computing services for UEs, whereas the MBS provides high-performance central cloud computing services to UEs via a restricted multiple-input multiple-output (MIMO) backhaul to their associated SBSs. With processing latency constraints at the central and edge networks, we aim to minimize the system energy consumption used for task offloading and computation. The problem is formulated by jointly optimizing the cloud selection, the UEs' transmit powers, the SBSs' receive beamformers, and the SBSs' transmit covariance matrices, which is a mixed-integer, non-convex optimization problem. Based on methods such as the decomposition approach and the successive pseudoconvex approach, a tractable solution is proposed via an iterative algorithm. The simulation results show that our proposed solution can achieve a great performance gain over conventional schemes using the edge or central cloud alone. Also, with large-scale antennas at the MBS, the massive MIMO backhaul can significantly reduce the complexity of the proposed algorithm and obtain even better performance.
    Comment: Accepted in IEEE Transactions on Wireless Communication
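The cloud-selection part of the joint problem can be illustrated in isolation: for a single UE, pick edge or central offloading to minimize transmit-plus-compute energy subject to a latency budget. This grossly simplifies the paper's formulation (no beamforming, no coupling between UEs), and every rate, power, and CPU figure below is an invented placeholder.

```python
# Toy cloud selection for one UE: minimize offloading energy under a
# latency constraint. All numbers are illustrative assumptions.

def select_cloud(task_bits, cycles, deadline_s, options):
    """options: {name: (uplink_bps, tx_power_w, cpu_hz, cpu_power_w)}.
    Returns the feasible option with minimum energy, or None."""
    best, best_energy = None, float("inf")
    for name, (rate, p_tx, f, p_cpu) in options.items():
        latency = task_bits / rate + cycles / f
        if latency > deadline_s:
            continue                      # violates the latency budget
        energy = p_tx * task_bits / rate + p_cpu * cycles / f
        if energy < best_energy:
            best, best_energy = name, energy
    return best

# Edge: faster uplink but a slow, low-power CPU.
# Central: slower backhaul path but a fast, power-hungry CPU.
opts = {"edge":    (2e6, 0.1, 1e9, 4.0),
        "central": (1e6, 0.2, 8e9, 40.0)}
print(select_cloud(1e6, 2e9, 2.0, opts))  # heavy task: edge misses the deadline
```

Even this caricature reproduces the paper's qualitative point: compute-heavy tasks under tight deadlines favor the central cloud, while lighter tasks are served more cheaply at the edge.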

    Integrated Green Cloud Computing Architecture

    Arbitrary usage of cloud computing, either private or public, can lead to uneconomical energy consumption in data processing, storage, and communication. Hence, green cloud computing solutions aim not only to save energy but also to reduce operational costs and the carbon footprint on the environment. In this paper, an Integrated Green Cloud Architecture (IGCA) is proposed that comprises a client-oriented Green Cloud Middleware to assist managers in better overseeing and configuring their overall access to cloud services in the greenest or most energy-efficient way. Decision making, whether to use local machine processing, private or public clouds, is smartly handled by the middleware using predefined system specifications such as the service level agreement (SLA), Quality of Service (QoS), equipment specifications, and the job description provided by the IT department. An analytical model is used to show the feasibility of achieving efficient energy consumption while choosing between local, private, and public Cloud service providers (CSPs).
    Comment: 6 pages, International Conference on Advanced Computer Science Applications and Technologies, ACSAT 201
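The middleware's decision step, choosing among local, private, and public execution from predefined job and equipment specifications, can be caricatured as picking the greenest target that still meets the job's deadline and SLA. The scoring rule, the `choose_target` helper, and all capacity/power numbers are assumptions for illustration, not the IGCA model itself.

```python
# Hypothetical green decision step: among SLA-compliant targets that
# can finish the job in time, pick the one using the least energy.

def choose_target(job_mi, deadline_s, targets):
    """job_mi: job size in million instructions.
    targets: {name: (mips, power_w, meets_sla)}."""
    feasible = {n: t for n, t in targets.items()
                if t[2] and job_mi / t[0] <= deadline_s}
    if not feasible:
        return None
    # Energy for the job = power draw * runtime on that target.
    return min(feasible, key=lambda n: feasible[n][1] * job_mi / feasible[n][0])

targets = {"local":   (2_000,   50.0, True),    # slow but frugal
           "private": (8_000,  250.0, True),
           "public":  (20_000, 2000.0, True)}   # fast but power-hungry
print(choose_target(40_000, 10.0, targets))  # local too slow here
```

The useful property of such a rule is that the greenest feasible option shifts with the deadline: relaxing it lets slower, lower-power targets win, which is exactly the trade-off the middleware is meant to automate.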

    Virtual Machines Embedding for Cloud PON AWGR and Server Based Data Centres

    In this study, we investigate the embedding of various cloud applications in PON AWGR and Server Based Data Centres