
    Energy Efficient Multiresource Allocation of Virtual Machine Based on PSO in Cloud Data Center

    Presently, massive energy consumption in cloud data centers poses an escalating threat to the environment. To reduce this consumption, this paper proposes an energy efficient virtual machine allocation algorithm based on an energy efficient multiresource allocation model and the particle swarm optimization (PSO) method. In this algorithm, the fitness function of PSO is defined as the total Euclidean distance to determine the optimal point between resource utilization and energy consumption. The algorithm avoids falling into the local optima that are common in traditional heuristic algorithms. Compared with the traditional heuristic algorithms MBFD and MBFH, the proposed algorithm achieves significant energy savings in the cloud data center while keeping the utilization of system resources at a reasonable level.
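    A minimal sketch of the distance-based fitness idea, written in Python. This is not the paper's implementation: the linear power model, the choice of ideal operating point, and all names (vm_cpu, host_cpu_cap, etc.) are assumptions made for illustration.

```python
# Hypothetical sketch of a PSO fitness defined as a total Euclidean distance
# between each active host's operating point and an ideal point (full resource
# utilization at near-idle energy). Not the paper's code; the power model and
# variable names are illustrative assumptions.
import math

def fitness(assignment, vm_cpu, vm_mem, host_cpu_cap, host_mem_cap,
            p_idle=0.6, p_max=1.0):
    """assignment[i] = index of the host that VM i is placed on; lower is better."""
    total = 0.0
    for h in range(len(host_cpu_cap)):
        vms = [i for i, a in enumerate(assignment) if a == h]
        if not vms:
            continue  # empty hosts are switched off and add no distance
        cpu_u = sum(vm_cpu[i] for i in vms) / host_cpu_cap[h]
        mem_u = sum(vm_mem[i] for i in vms) / host_mem_cap[h]
        energy = p_idle + (p_max - p_idle) * cpu_u  # common linear power model
        # Euclidean distance from the ideal point: full utilization, idle-level energy.
        total += math.sqrt((1 - cpu_u) ** 2 + (1 - mem_u) ** 2 + (energy - p_idle) ** 2)
    return total  # PSO particles minimize this value
```

    A PSO search would evaluate this fitness for each particle's candidate assignment and pull particles toward the best assignments found so far.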

    A Firefly Colony and Its Fuzzy Approach for Server Consolidation and Virtual Machine Placement in Cloud Datacenters

    Managing cloud datacenters is one of the most challenging tasks facing the IT industry. Data centers are the main source of resource provisioning for cloud users. Managing these resources to handle a large number of virtual machine requests has created the need for heuristic optimization algorithms that provide optimal placement strategies satisfying the formulated objectives and constraints. In this paper, we propose to apply firefly colony and fuzzy firefly colony optimization algorithms to solve two key datacenter problems, namely, server consolidation and multiobjective virtual machine placement. Server consolidation aims to minimize the number of physical machines used, while the virtual machine placement problem seeks an optimal placement strategy with both minimum power consumption and minimum resource wastage. The proposed techniques outperform the heuristic and metaheuristic approaches considered, both in server consolidation and in finding an optimal placement strategy.
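    The two placement objectives named above can be sketched as follows (Python). This is an assumed formulation for illustration, not the paper's code; the resource wastage formula is one common choice from the VM placement literature, and the names and power figures are hypothetical.

```python
# Hypothetical sketch of the two VM placement objectives: total power
# consumption and total resource wastage of the active servers. A firefly
# search would move dimmer (worse) candidate placements toward brighter
# (better) ones under these objectives. Not the paper's code.
def placement_objectives(assignment, vm_cpu, vm_mem, srv_cpu, srv_mem,
                         p_idle=162.0, p_max=215.0, eps=1e-4):
    """assignment[i] = index of the server hosting VM i."""
    power, wastage = 0.0, 0.0
    for s in range(len(srv_cpu)):
        vms = [i for i, a in enumerate(assignment) if a == s]
        if not vms:
            continue  # idle servers are assumed to be switched off
        u_cpu = sum(vm_cpu[i] for i in vms) / srv_cpu[s]
        u_mem = sum(vm_mem[i] for i in vms) / srv_mem[s]
        power += p_idle + (p_max - p_idle) * u_cpu
        # Wastage is high when the leftover CPU and memory are unbalanced,
        # since an imbalanced remainder is hard to use for further VMs.
        wastage += (abs((1 - u_cpu) - (1 - u_mem)) + eps) / (u_cpu + u_mem)
    return power, wastage
```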

    Fault Tolerant Multitenant Database Server Consolidation

    Server consolidation is important in situations where a sequence of database tenants must be allocated (hosted) dynamically on a minimum number of cloud server machines. Given a tenant's load, defined by the amount of resources the tenant requires, and a service-level agreement (SLA) between the tenant customer and the cloud service provider, resource cost savings can be achieved by consolidating multiple database tenants on server machines. Additionally, in realistic settings, server machines might fail, causing their tenants to become unavailable. To address this, service providers place multiple replicas of each tenant on different servers and reserve extra capacity to ensure that tenant failover will not result in overload on any remaining server. The focus of this thesis is on providing effective strategies for placing tenants on server machines so that the SLA requirements are met in the presence of failure of one or more servers. We propose the Cube-Fit (CUBEFIT) algorithm for multitenant database server consolidation, which saves resource costs by utilizing fewer servers than existing approaches for analytical workloads. Additionally, unlike existing consolidation algorithms, CUBEFIT can tolerate multiple server failures while ensuring that no server becomes overloaded. We provide extensive theoretical analysis and experimental evaluation of CUBEFIT. We show that its average-case and worst-case behavior is superior to that of existing algorithms and that CUBEFIT produces near-optimal tenant allocation when the number of tenants is large. Through evaluation and deployment on a cluster of up to 73 machines, as well as through simulation studies, we experimentally demonstrate the efficacy of CUBEFIT in practical settings.
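    The fault-tolerance requirement can be made concrete with a small sketch (Python). This is not CUBEFIT itself; the replica model and the rule that a failed server's load is split evenly across surviving replicas are assumptions made for illustration.

```python
# Hypothetical feasibility check for fault-tolerant consolidation: each tenant
# is replicated on several distinct servers, and a placement is accepted only
# if the failure of any single server leaves every remaining server within
# capacity. Not the thesis's algorithm; the even-split failover rule is assumed.
from collections import defaultdict

def survives_single_failure(placement, tenant_load, capacity):
    """placement[t] = list of servers hosting tenant t's replicas."""
    load = defaultdict(float)
    for t, servers in placement.items():
        for s in servers:
            load[s] += tenant_load[t] / len(servers)  # load shared by replicas
    for failed in list(load):
        extra = defaultdict(float)
        for t, servers in placement.items():
            if failed not in servers:
                continue
            survivors = [s for s in servers if s != failed]
            if not survivors:
                return False  # the tenant would become unavailable
            for s in survivors:
                extra[s] += (tenant_load[t] / len(servers)) / len(survivors)
        if any(load[s] + extra[s] > capacity[s] for s in extra):
            return False
    return True
```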

    Alternative Approaches for Analysis of Bin Packing and List Update Problems

    In this thesis we introduce and evaluate new algorithms and models for the analysis of the online bin packing and list update problems. These are two classic online problems that are extensively studied in the literature and have many real-world applications. As with other online problems, the framework of competitive analysis is often used to study list update and bin packing algorithms. Under this framework, the behavior of online algorithms is compared to an optimal offline algorithm on the worst possible input. This is aligned with the traditional theory of algorithms built around the concept of worst-case analysis. However, the pessimistic nature of competitive analysis, along with unrealistic assumptions behind the proposed models, often results in situations where the existing theory is of little use in practice. The main goal of this thesis is to develop new approaches for studying online problems, in particular bin packing and list update, that guide the development of practical algorithms which perform well on real-world inputs. In doing so, we introduce new algorithms with good performance (not only under competitive analysis) as well as new models that are more realistic for certain applications of the studied problems.

    For many online problems, competitive analysis fails to provide a theoretical justification for observations made in practice. This is partially because, as a worst-case analysis method, competitive analysis does not necessarily reflect the typical behavior of algorithms. In the case of the bin packing problem, the Best Fit and First Fit algorithms are widely used in practice. There are, however, other algorithms with better competitive ratios that are rarely used in practice because they perform poorly on average. We show that it is possible to optimize for both cases: we introduce online bin packing algorithms that outperform Best Fit and First Fit in terms of competitive ratio while maintaining their good average-case performance.

    An alternative framework for analyzing online problems is the advice model, which has received significant attention in the past few years. Under the advice model, an online algorithm receives a number of bits of advice about the unrevealed parts of the sequence. Generally, there is a trade-off between the size of the advice and the performance of online algorithms. The advice model generalizes existing frameworks in which an online algorithm has partial knowledge about the input sequence, e.g., the access graph model for the paging problem. We study the list update and bin packing problems under the advice model and answer several relevant questions about the advice complexity of these problems.

    Online problems are usually studied under specific settings that are not necessarily valid for all applications of the problem. As an example, online bin packing algorithms are widely used for server consolidation to minimize the number of active servers in a data center. In some applications, e.g., tenant placement in the cloud, a 'fault-tolerant' solution for server consolidation is often required. In this setting, the problem becomes different and the classic algorithms can no longer be used. We study a fault-tolerant model for the bin packing problem and analyze algorithms that fit this particular application. Similarly, the list update problem was initially proposed for maintaining self-adjusting linked lists. Today, however, its main application is in data compression. We show that the standard cost model is not suitable for compression purposes and study a compression cost model for the list update problem. Our analysis justifies the advantage of compression schemes based on the Move-To-Front algorithm and may lead to improved compression algorithms.
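    The compression connection comes from the Move-To-Front (MTF) transform, sketched below in Python: accessing a symbol costs roughly the bits needed to encode its current list position, so the standard unit-cost list update model does not reflect what a compressor actually pays. The encoder below is the standard textbook MTF, not code from the thesis.

```python
# Standard Move-To-Front transform, the stage (used e.g. after the
# Burrows-Wheeler transform) that motivates a compression cost model for
# list update: recently accessed symbols sit near the front and encode as
# small indices, which an entropy coder compresses well.
def mtf_encode(data, alphabet):
    lst = list(alphabet)
    out = []
    for sym in data:
        idx = lst.index(sym)          # position = access cost under the model
        out.append(idx)
        lst.insert(0, lst.pop(idx))   # move the accessed symbol to the front
    return out

print(mtf_encode("banana", "abn"))    # -> [1, 1, 2, 1, 1, 1]
```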

    Carbon-profit-aware job scheduling and load balancing in geographically distributed cloud for HPC and web applications

    This thesis introduces two carbon-profit-aware control mechanisms that can be used to improve the performance of job scheduling and load balancing in an interconnected system of geographically distributed data centers for HPC and web applications. These control mechanisms consist of three primary components that perform 1) measurement and modeling, 2) job planning, and 3) plan execution. The measurement and modeling component provides information on energy consumption and carbon footprint as well as utilization, weather, and pricing. The job planning component uses this information to suggest the best arrangement of applications as a candidate configuration, which the plan execution component then carries out on the system.

    For reporting and decision-making purposes, some metrics need to be modeled from directly measured inputs. There are two challenges in accurately modeling these metrics: 1) feature selection and 2) curve fitting (regression). First, to improve the accuracy of power-consumption models for underutilized servers, advanced fitting methodologies were applied to the selected server features. The resulting model is evaluated on real servers and used as part of the load balancing mechanism for web applications. We also provide a comprehensive model of the data center cooling system to optimize its power consumption, which is in turn used by the planning component. Furthermore, we introduce a model that calculates the profit of the system based on the price of electricity, carbon tax, operational costs, sales tax, and corporation taxes; this model is used for optimized scheduling of HPC jobs.

    For placement of web applications, a new heuristic algorithm is introduced for load balancing of virtual machines in a geographically distributed system in order to improve its carbon awareness. This heuristic is based on a genetic algorithm and is specifically tailored to optimization problems of an interconnected system of distributed data centers. A simple version of it has been implemented in the GSN project as a carbon-aware controller. Similarly, for scheduling of HPC jobs on servers, two new metrics are introduced: 1) profit-per-core-hour-GHz and 2) virtual carbon tax. In the HPC job scheduler, these metrics are used to maximize profit and minimize the carbon footprint of the system, respectively.

    Once the application execution plan is determined, the plan execution component attempts to implement it on the system. It uses the hypervisors on physical servers to create, remove, and migrate virtual machines, and it executes and controls the HPC jobs or web applications on those virtual machines. To validate systems designed with the proposed modeling and planning components, a simulation platform using real system data was developed, and the new methodologies were compared with state-of-the-art methods under various scenarios. The experimental results show improved power modeling of servers, significant carbon reduction in load balancing of web applications, and significant profit-carbon improvement in HPC job scheduling.
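    The two scheduling metrics named above can be sketched as simple formulas (Python). These are assumed formulations for illustration, not the thesis's exact model; all parameter names are hypothetical.

```python
# Hypothetical sketch of the two HPC scheduling metrics described above.
def profit_per_core_hour_ghz(revenue, energy_kwh, electricity_price,
                             carbon_kg, carbon_tax, other_costs,
                             cores, hours, freq_ghz):
    """Profit of a job normalized by the compute capacity it consumes."""
    profit = revenue - (energy_kwh * electricity_price
                        + carbon_kg * carbon_tax + other_costs)
    return profit / (cores * hours * freq_ghz)

def carbon_aware_cost(carbon_kg, real_tax, virtual_tax):
    """A virtual carbon tax inflates the carbon term inside the scheduler's
    objective to steer jobs toward greener sites; it is never actually billed."""
    return carbon_kg * (real_tax + virtual_tax)
```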