17 research outputs found

    Designing Parametric Constraint Based Power Aware Scheduling System in a Virtualized Cloud Environment

    The increasing demand for computational resources has led to the construction of large-scale data centers. These consume huge amounts of electrical power, resulting in high operational costs and carbon dioxide emissions. Power-related costs have become one of the major economic factors in IT data centers, and companies and the research community are working on new, efficient power-aware resource management strategies, also known as “Green IT”. Here we propose a framework for autonomic scheduling of tasks based upon parametric constraints. In this paper we analyse the critical factors affecting the energy consumption of cloud servers and consider how to improve performance by using the Sigar API to address speed problems. In PCBPAS we impose parametric constraints during task allocation to the servers; these can be adjusted dynamically to balance server workloads efficiently, so that CPU utilization is improved and energy savings are achieved.
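
    The abstract does not include an implementation, so the following is a minimal Python sketch (not from the paper) of the general idea: task allocation gated by parametric CPU and memory constraints that can be retuned at runtime. All class names, thresholds and the selection rule are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Server:
    name: str
    cpu_util: float      # current CPU utilization, 0.0 - 1.0
    mem_util: float      # current memory utilization, 0.0 - 1.0

@dataclass
class Constraints:
    # Parametric thresholds; can be adjusted at runtime to rebalance load.
    max_cpu: float = 0.75
    max_mem: float = 0.80

def pick_server(servers: list[Server], task_cpu: float, task_mem: float,
                limits: Constraints) -> Optional[Server]:
    """Return the least-loaded server that still satisfies the constraints
    after accepting the task, or None if every server would violate them."""
    eligible = [s for s in servers
                if s.cpu_util + task_cpu <= limits.max_cpu
                and s.mem_util + task_mem <= limits.max_mem]
    if not eligible:
        return None  # caller may relax the constraints or wake an idle server
    return min(eligible, key=lambda s: s.cpu_util)

if __name__ == "__main__":
    pool = [Server("s1", 0.60, 0.50), Server("s2", 0.30, 0.40)]
    print(pick_server(pool, task_cpu=0.10, task_mem=0.15, limits=Constraints()))
```

    In a deployment along the lines described in the abstract, the utilization figures would come from a monitoring library such as Sigar rather than being supplied by hand.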

    Resource Management Policies for Cloud-based Interactive 3D Applications

    The increasing interest in the cloud computing paradigm is leading many different applications and services to move to the 'cloud'. These range from general storage and computing services to document management systems and office applications. A new challenge is the migration to the cloud of interactive 3D applications, especially those designed for professional use (e.g., scientific data visualizers, CAD instruments, 3D medical modeling applications). Among the several hurdles arising from specific hardware and software requirements, an important issue is the definition of novel management policies that can properly support these applications, namely policies that ensure efficient resource utilization together with a sufficient quality as perceived by users. This paper presents some preliminary results in this direction and discusses possible future work in this field. Our work is part of a wider project aimed at developing a complete architecture to offer interactive 3D applications in a cloud computing environment; hence, we refer to this particular solution in this study.
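
    The policies themselves are not spelled out in the abstract; the toy Python sketch below (an assumption, not the authors' policy) shows one way to trade consolidation against perceived quality: pack interactive 3D sessions onto already-used nodes, but never below an assumed frame-rate floor.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GpuNode:
    name: str
    sessions: int        # interactive 3D sessions currently hosted
    est_fps: float       # estimated frame rate delivered per session

MIN_FPS = 25.0                  # assumed QoS floor perceived as sufficient
FPS_PENALTY_PER_SESSION = 3.0   # assumed degradation per additional session

def place_session(nodes: list[GpuNode]) -> Optional[GpuNode]:
    """Prefer packing sessions onto already-loaded nodes (efficiency),
    but never let the estimated frame rate drop below the QoS floor."""
    candidates = [n for n in nodes
                  if n.est_fps - FPS_PENALTY_PER_SESSION >= MIN_FPS]
    if not candidates:
        return None  # would require powering on an additional node
    return max(candidates, key=lambda n: n.sessions)

if __name__ == "__main__":
    print(place_session([GpuNode("g1", 3, 40.0), GpuNode("g2", 1, 55.0)]))
```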

    Developing resource consolidation frameworks for moldable virtual machines in clouds

    This paper considers the scenario where multiple clusters of Virtual Machines (termed Virtual Clusters) are hosted in a Cloud system consisting of a cluster of physical nodes. Multiple Virtual Clusters (VCs) cohabit in the physical cluster, with each VC offering a particular type of service for incoming requests. In this context, VM consolidation, which strives to use a minimal number of nodes to accommodate all VMs in the system, plays an important role in saving resource consumption. Most existing consolidation methods in the literature regard VMs as “rigid” during consolidation, i.e., the VMs’ resource capacities remain unchanged. In VC environments, QoS is usually delivered by a VC as a single entity. Therefore, there is no reason why a VM’s resource capacity cannot be adjusted as long as the whole VC is still able to maintain the desired QoS. Treating VMs as “moldable” during consolidation may make it possible to consolidate VMs onto an even smaller number of nodes. This paper investigates this issue and develops a Genetic Algorithm (GA) to consolidate moldable VMs. The GA evolves an optimized system state, which represents the VM-to-node mapping and the resource capacity allocated to each VM. After the new system state is calculated by the GA, the Cloud transitions from the current system state to the new one. The transition time represents overhead and should be minimized. In this paper, a cost model is formalized to capture the transition overhead, and a reconfiguration algorithm is developed to transition the Cloud to the optimized system state with low transition overhead. Experiments have been conducted to evaluate the performance of the GA and the reconfiguration algorithm.
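
    As a rough illustration of the approach (not the authors' algorithm or cost model), the Python sketch below encodes a chromosome as a VM-to-node mapping plus a per-VM capacity scale factor, and evolves it toward using fewer nodes without overloading any of them. All sizes, rates and the fitness function are assumptions.

```python
import random

NUM_VMS, NUM_NODES = 12, 6
NODE_CAPACITY = 1.0
VM_DEMAND = [0.25] * NUM_VMS        # nominal ("rigid") capacity per VM
MIN_SCALE = 0.6                     # a moldable VM may shrink to 60% of nominal

def random_chromosome():
    # One gene per VM: (hosting node index, capacity scale factor).
    return [(random.randrange(NUM_NODES), random.uniform(MIN_SCALE, 1.0))
            for _ in range(NUM_VMS)]

def fitness(chrom):
    """Fewer used nodes is better; overloaded nodes are heavily penalized."""
    load = [0.0] * NUM_NODES
    for (node, scale), demand in zip(chrom, VM_DEMAND):
        load[node] += demand * scale
    used = sum(1 for l in load if l > 0)
    overload = sum(max(0.0, l - NODE_CAPACITY) for l in load)
    return used + 100.0 * overload  # lower is better

def mutate(chrom, rate=0.2):
    return [(random.randrange(NUM_NODES), random.uniform(MIN_SCALE, 1.0))
            if random.random() < rate else gene for gene in chrom]

def crossover(a, b):
    cut = random.randrange(1, NUM_VMS)
    return a[:cut] + b[cut:]

def evolve(pop_size=40, generations=200):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness (node count when no node is overloaded):",
          round(fitness(best), 3))
```

    The transition-overhead cost model described in the abstract would add a second term to the fitness, penalizing states that differ too much from the current VM-to-node mapping; it is omitted here for brevity.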

    Deadline constrained prediction of job resource requirements to manage high-level SLAs for SaaS cloud providers

    For a non-IT expert using services in the Cloud, it is more natural to negotiate QoS with the provider in terms of service-level metrics (e.g., job deadlines) rather than resource-level metrics (e.g., CPU MHz). However, current infrastructures only support resource-level metrics (e.g., CPU share and memory allocation), and there is no well-known mechanism to translate from service-level metrics to resource-level metrics. Moreover, the lack of precise information about the requirements of the services leads to inefficient resource allocation: usually, providers allocate whole resources to prevent SLA violations. To address this, we propose a novel mechanism to overcome this translation problem using an online prediction system that combines a fast analytical predictor and an adaptive machine-learning-based predictor. We also show how a deadline scheduler could use these predictions to help providers make the most of their resources. Our evaluation shows: i) that fast algorithms are able to make predictions with 11% and 17% relative error for CPU and memory, respectively; ii) the potential of using accurate predictions in scheduling compared to simple yet well-known schedulers.
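
    As a sketch of the translation step only (the paper's actual predictors and scheduler are not reproduced here), the following Python snippet estimates a job's CPU demand from recent observations and converts a deadline, a service-level metric, into a CPU share, a resource-level metric. The moving-average estimator and all parameters are assumptions.

```python
from collections import defaultdict, deque

class OnlinePredictor:
    """Toy analytical predictor: a job type's CPU demand (core-seconds) is the
    moving average of recent observations; a deadline (service-level metric)
    is then translated into a CPU share (resource-level metric)."""

    def __init__(self, window: int = 20):
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, job_type: str, cpu_seconds: float) -> None:
        self.history[job_type].append(cpu_seconds)

    def predict_cpu_seconds(self, job_type: str, default: float = 60.0) -> float:
        obs = self.history[job_type]
        return sum(obs) / len(obs) if obs else default

    def required_cpu_share(self, job_type: str, deadline_s: float) -> float:
        # Fraction of one core needed for the job to finish within its deadline.
        return min(1.0, self.predict_cpu_seconds(job_type) / deadline_s)

if __name__ == "__main__":
    p = OnlinePredictor()
    for cpu_seconds in (42.0, 48.0, 45.0):
        p.observe("render", cpu_seconds)
    print(p.required_cpu_share("render", deadline_s=120.0))   # -> 0.375 cores
```

    A deadline scheduler could then admit a job onto a node only if the sum of the predicted CPU shares on that node stays at or below one.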

    Energy-efficient virtual machine live migration in cloud data centers

    Cloud computing services will play an important role in meeting clients' various day-to-day requirements. In cloud computing, virtualization is an important means of minimizing the cost of managing data centers across the world. Energy consumption has become a major driver of the cost of operating data centers. Savings can be achieved by continuous consolidation through live migration of VMs, based on resource utilization, virtual network topologies and the thermal state of computing nodes. This paper presents a review of research on energy-aware Virtual Machine live migration between hosts in cloud data centers, highlighting its key concepts and research challenges.
    Keywords: Virtual Machines (VMs), Live Migration, Energy Overhead, Data Center.
    Introduction: Cloud computing is gaining importance day by day, and a large number of enterprises and individuals are opting for cloud computing services. Thousands of servers have been deployed worldwide by large organisations such as Amazon, Microsoft, IBM and Google to meet customers' demand for computing services. Round-the-clock reliable computation, fault tolerance and information security are the main issues to be addressed while providing services to geographically dispersed customer sites.
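
    As a minimal illustration of the consolidation idea this review surveys (not a method from any of the surveyed papers), the Python sketch below flags hosts whose utilization crosses assumed thresholds as sources for live migration.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_util: float                 # 0.0 - 1.0

UNDERLOAD, OVERLOAD = 0.2, 0.8      # assumed consolidation thresholds

def migration_plan(hosts: list[Host]) -> list[str]:
    """Evacuate underloaded hosts so they can enter a low-power state, and
    relieve overloaded hosts to avoid performance (SLA) degradation."""
    actions = []
    for h in hosts:
        if h.cpu_util < UNDERLOAD:
            actions.append(f"migrate all VMs off {h.name}, then power it down")
        elif h.cpu_util > OVERLOAD:
            actions.append(f"migrate some VMs off {h.name} to relieve the load")
    return actions

if __name__ == "__main__":
    print(migration_plan([Host("h1", 0.1), Host("h2", 0.5), Host("h3", 0.9)]))
```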

    Functional model of a software system with random time horizon

    Virtualization technologies are actively used to design the infrastructure of cloud computing systems. In this setting, applications can be duplicated and hosted in different virtual machines on different physical nodes. This leads to varying application performance, which raises the problem of managing the performance of the entire heterogeneous system. There are different ways of solving this problem, including queuing theory methods. However, research on the threshold discipline within queuing theory is incomplete because of the difficulty of obtaining precise analytic values and building a precise mathematical model of the system. Another feature of heterogeneous systems is that the system operates for a finite random time, determined by random endogenous and exogenous factors. This paper gives an overview of a functional model of a system with two heterogeneous devices, a random functioning time and different service disciplines. In simulation experiments, the average time needed to process a single request over a random time interval is measured for different service disciplines, and the disciplines are compared. The authors also provide a working software implementation of the heterogeneous system and perform experiments with the service disciplines.
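
    To make the comparison of service disciplines concrete, here is a small Monte-Carlo sketch in Python (an illustration, not the authors' model): two heterogeneous servers, Poisson arrivals, exponential service times and an exponentially distributed functioning horizon, with a 'fast only' discipline compared against a threshold discipline that overflows work to the slow server. All rates and the threshold are assumptions.

```python
import random

def simulate(discipline: str, seed: int = 1, horizon_mean: float = 2000.0,
             arrival_rate: float = 0.8, fast_rate: float = 1.0,
             slow_rate: float = 0.4, threshold: float = 5.0) -> float:
    """Return the mean sojourn time of requests admitted before a random,
    exponentially distributed functioning horizon expires."""
    rng = random.Random(seed)
    horizon = rng.expovariate(1.0 / horizon_mean)
    free_at = [0.0, 0.0]              # backlog end times: [fast server, slow server]
    t, sojourns = 0.0, []
    while True:
        t += rng.expovariate(arrival_rate)        # next Poisson arrival
        if t > horizon:
            break
        if discipline == "fast_only":
            dst = 0                               # ignore the slow server entirely
        else:  # "threshold": overflow to the slow server when the fast backlog is long
            dst = 1 if free_at[0] - t > threshold else 0
        service = rng.expovariate(fast_rate if dst == 0 else slow_rate)
        start = max(t, free_at[dst])              # FIFO within each server
        free_at[dst] = start + service
        sojourns.append(free_at[dst] - t)
    return sum(sojourns) / len(sojourns) if sojourns else float("nan")

if __name__ == "__main__":
    for d in ("fast_only", "threshold"):
        print(d, round(simulate(d), 2))
```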

    Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has shortcomings such as a relatively high operating cost and environmental hazards like a growing carbon footprint. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that, with the help of a temperature predictor, considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results show that the proposed system outperforms existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. A reduction in the cooling requirements of a Cloud environment has thus been obtained and validated.
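
    The predictor and scheduler are not specified in the abstract; the Python sketch below (an assumption) uses a naive linear extrapolation of recent temperature readings and only places a VM on a server machine whose predicted temperature, plus an assumed per-VM heating term, stays below its threshold.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServerMachine:
    name: str
    max_temp: float                              # threshold temperature (deg C)
    temps: list = field(default_factory=list)    # recent readings, oldest first

TEMP_RISE_PER_VM = 2.0     # assumed heating caused by hosting one more VM (deg C)

def predicted_temp(sm: ServerMachine) -> float:
    """Naive predictor: linear extrapolation of the last two readings."""
    if len(sm.temps) < 2:
        return sm.temps[-1] if sm.temps else 0.0
    return sm.temps[-1] + (sm.temps[-1] - sm.temps[-2])

def pick_host(machines: list[ServerMachine]) -> Optional[ServerMachine]:
    """Proactive placement: choose the coolest machine whose predicted
    temperature plus the VM's expected heating stays below its threshold."""
    safe = [m for m in machines
            if predicted_temp(m) + TEMP_RISE_PER_VM < m.max_temp]
    return min(safe, key=predicted_temp) if safe else None

if __name__ == "__main__":
    sms = [ServerMachine("sm1", 85.0, [70.0, 74.0]),
           ServerMachine("sm2", 85.0, [60.0, 61.0])]
    chosen = pick_host(sms)
    print(chosen.name if chosen else "defer scheduling")
```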

    On energy consumption of switch-centric data center networks

    The data center network (DCN) is the core of cloud computing and accounts for 40% of the energy spend of the whole data center (DC) facility when compared with the cooling system and power distribution and conversion. Reducing the energy consumption of the DCN is essential to achieving an energy-efficient (green) data center. We present an analysis of DC performance and efficiency, emphasizing the effect of bandwidth provisioning and throughput on the energy proportionality of the two most common switch-centric DCN topologies, three-tier (3T) and fat tree (FT), based on the amount of consumed energy that is actually turned into computing power. The energy consumption of switch-centric DCNs is analyzed through realistic simulations using the GreenCloud simulator. Power-related metrics were derived and adapted for the information technology equipment (ITE) processes within the DCN. These metrics are acknowledged as a subset of the major power metrics known to DCs, power usage effectiveness (PUE) and data center infrastructure efficiency (DCIE). This study suggests that although FT consumes more energy overall, it spends less energy to transmit a single bit of information, outperforming 3T.
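
    A back-of-the-envelope version of the energy-per-bit comparison (with made-up numbers, not results from the paper) can be written as follows; it shows how a topology that draws more total energy can still come out ahead per transmitted bit.

```python
def energy_per_bit(total_energy_kwh: float, bits_transmitted: float) -> float:
    """Energy spent to move one bit through the DCN, in joules per bit."""
    return total_energy_kwh * 3.6e6 / bits_transmitted    # 1 kWh = 3.6e6 J

# Hypothetical figures for illustration only (not measurements from the paper):
# fat tree draws more total energy here but carries far more traffic, so its
# joules-per-bit figure still comes out lower than three-tier's.
three_tier = energy_per_bit(total_energy_kwh=120.0, bits_transmitted=4.0e15)
fat_tree = energy_per_bit(total_energy_kwh=150.0, bits_transmitted=8.0e15)
print(f"3T: {three_tier:.2e} J/bit   FT: {fat_tree:.2e} J/bit")
```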