    An Optimization of Energy Saving in Cloud Environment

    Cloud computing is a distributed-computing technology that facilitates a pay-per-use model based on user demand and requirements. A cloud can be viewed as a collection of virtual machines providing both computational and storage facilities. The goal of cloud computing is to provide efficient access to remote and geographically distributed resources. Cloud computing is developing day by day and faces many challenges, two of which are i) load balancing and ii) task scheduling. Load balancing is the division of the work a system has to do between two or more systems so that more work gets done in the same amount of time and all users are served faster. It can be implemented in hardware, software, or a combination of both, and is mainly used for server clustering. Task scheduling is a set of policies that control the order in which work is performed by a system; it is also a technique used to improve the overall execution time of jobs. Task scheduling is responsible for selecting the most suitable resources for task execution, taking several parameters into consideration. A good task scheduler adapts its scheduling strategy to the changing environment and the type of task. In this paper, the Energy Saving Load Balancing (ESLB) algorithm and the Energy Saving Task Scheduling (ESTS) algorithm are proposed. The main scheduling algorithms (FCFS, RR, PRIORITY, and SJF) are reviewed and compared. The ESLB and ESTS algorithms were tested in the CloudSim toolkit, and the results show better performance.
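    The comparison of FCFS, RR, PRIORITY, and SJF can be made concrete with a small experiment. The sketch below computes the average waiting time for a batch of tasks under FCFS, SJF, and a static priority order on a single resource; the task lengths and priorities are invented for illustration, this is not the paper's ESLB/ESTS implementation, and round robin is omitted because its waiting time also depends on the chosen time quantum.

```python
# Minimal sketch comparing average waiting time under FCFS, SJF, and
# priority scheduling on a single resource. Task lengths and priorities
# below are illustrative only.

def avg_waiting_time(lengths):
    """Average waiting time when tasks run back-to-back in the given order."""
    waited, elapsed = 0, 0
    for length in lengths:
        waited += elapsed
        elapsed += length
    return waited / len(lengths)

tasks = [8, 3, 10, 2, 6]        # task lengths (e.g., seconds)
priorities = [3, 1, 2, 2, 1]    # lower value = higher priority

fcfs = avg_waiting_time(tasks)                        # arrival order
sjf = avg_waiting_time(sorted(tasks))                 # shortest job first
prio = avg_waiting_time([t for _, t in sorted(zip(priorities, tasks))])

print(f"FCFS: {fcfs:.2f}  SJF: {sjf:.2f}  PRIORITY: {prio:.2f}")
```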

    MicroFuge: A Middleware Approach to Providing Performance Isolation in Cloud Storage Systems

    Most cloud providers improve resource utilization by having multiple tenants share the same resources. However, this comes at the cost of reduced isolation between tenants, which can lead to inconsistent and unpredictable performance. This performance variability is a significant impediment for tenants running services with strict latency deadlines. Providing predictable performance is particularly important for cloud storage systems: the storage system is the performance bottleneck for many cloud-based services and therefore often determines their overall performance characteristics. In this paper, we introduce MicroFuge, a new distributed caching and scheduling middleware that provides performance isolation for cloud storage systems. MicroFuge addresses the performance isolation problem by building an empirically driven performance model of the underlying storage system based on measured data. Using this model, MicroFuge reduces deadline misses through adaptive deadline-aware cache eviction, scheduling, and load-balancing policies. MicroFuge can also perform early rejection of requests that are unlikely to make their deadlines. Using workloads from the YCSB benchmark on an EC2 deployment, we show that adding MicroFuge to the storage stack substantially reduces the deadline miss rate of a distributed storage system compared to using a deadline-oblivious distributed caching middleware such as Memcached.
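    Two of the mechanisms described above, early rejection and deadline-aware eviction, can be sketched roughly as follows. This is a hedged illustration, not MicroFuge's actual code: the predicted backend latency, the slack-based victim choice, and the class and function names are all assumptions made for the example.

```python
# Illustrative sketch: reject requests whose remaining deadline budget cannot
# cover a backend read, and evict the cached item whose consumer has the most
# slack (it can best tolerate a slower read from storage).

PREDICTED_STORAGE_LATENCY_MS = 12.0   # assumed empirical estimate of a backend read


def admit(request_deadline_ms, elapsed_ms):
    """Early rejection: admit only if the remaining budget can cover a backend read."""
    return (request_deadline_ms - elapsed_ms) >= PREDICTED_STORAGE_LATENCY_MS


class DeadlineAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}          # key -> (slack_ms, value)

    def put(self, key, value, slack_ms):
        if len(self.items) >= self.capacity and key not in self.items:
            # Evict the entry whose consumer has the largest slack.
            victim = max(self.items, key=lambda k: self.items[k][0])
            del self.items[victim]
        self.items[key] = (slack_ms, value)

    def get(self, key):
        entry = self.items.get(key)
        return entry[1] if entry else None
```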

    Efficient Load Balancing for Cloud Computing by Using Content Analysis

    Nowadays, computer networks have grown rapidly due to the demand for information technology management and greater functionality. A service based on a single machine cannot accommodate large databases, so single servers must be combined into server groups. The problem with grouped server services is that it is very hard to manage many devices with different hardware. Cloud computing is an extensively scalable computing infrastructure that shares existing resources; it is a popular option for people and businesses for a number of reasons, including cost savings and security. This paper proposes an efficient load-balancing technique using HAProxy in cloud computing, with the objective of receiving the workload and distributing it to the computing servers so that processing resources are shared. The proposed technique applies round-robin scheduling for efficient resource management of cloud storage systems, focusing on effective workload balancing and a dynamic replication strategy. The evaluation was based on benchmark data for requests per second and failed requests. The results showed that the proposed technique could improve load-balancing performance, handling 1,000 requests in 6.31 seconds in cloud computing, and generated fewer false alarms.
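    The core of the proposed technique is round-robin distribution of incoming requests across backend servers, which HAProxy exposes through its roundrobin balancing policy. Below is a minimal Python sketch of the rotation itself; the server names are placeholders and the class is illustrative rather than part of HAProxy.

```python
# Minimal round-robin dispatcher: each request goes to the next backend in
# strict rotation, so load spreads evenly over equally sized servers.

from itertools import cycle


class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)

    def next_server(self):
        """Return the next backend in rotation."""
        return next(self._servers)


balancer = RoundRobinBalancer(["node-1", "node-2", "node-3"])
for request_id in range(6):
    print(request_id, "->", balancer.next_server())
```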

    A WOA-based optimization approach for task scheduling in cloud Computing systems

    Task scheduling in cloud computing can directly affect the resource usage and operational cost of a system. To improve the efficiency of task execution in a cloud, various metaheuristic algorithms, as well as their variations, have been proposed to optimize scheduling. In this work, for the first time, we apply the recent whale optimization algorithm (WOA) to cloud task scheduling with a multi-objective optimization model, aiming to improve the performance of a cloud system with given computing resources. On that basis, we propose an advanced approach called IWC (Improved WOA for Cloud task scheduling) to further improve the optimal-solution search capability of the WOA-based method. We present the detailed implementation of IWC, and our simulation-based experiments show that the proposed IWC has better convergence speed and accuracy in searching for optimal task scheduling plans compared with current metaheuristic algorithms. Moreover, it also achieves better system resource utilization in the presence of both small- and large-scale tasks.
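    A generic whale-optimization loop applied to task-to-VM assignment might look like the sketch below. It encodes a schedule as a vector of continuous coordinates, one per task, decodes each coordinate to a VM index, and minimizes makespan. The task lengths, VM speeds, population size, and iteration count are made up, and this is a plain WOA skeleton rather than the authors' IWC variant.

```python
# Whale-optimization-style search for a task-to-VM mapping minimizing makespan.
import math
import random

TASKS = [40, 25, 60, 35, 50, 20]     # task lengths (e.g., MI), illustrative
VM_SPEEDS = [10, 20]                  # VM processing speeds (MI/s), illustrative


def makespan(position):
    # Decode: each continuous coordinate rounds to a VM index.
    loads = [0.0] * len(VM_SPEEDS)
    for task, x in zip(TASKS, position):
        vm = int(round(x)) % len(VM_SPEEDS)
        loads[vm] += task / VM_SPEEDS[vm]
    return max(loads)


def woa(pop_size=20, iters=100):
    dim, hi = len(TASKS), len(VM_SPEEDS) - 1
    pop = [[random.uniform(0, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=makespan)
    for t in range(iters):
        a = 2 - 2 * t / iters                      # decreases linearly 2 -> 0
        for i, whale in enumerate(pop):
            A, C = 2 * a * random.random() - a, 2 * random.random()
            new = []
            for j in range(dim):
                if random.random() < 0.5:
                    if abs(A) < 1:                 # exploit: encircle current best
                        new.append(best[j] - A * abs(C * best[j] - whale[j]))
                    else:                          # explore: move toward a random whale
                        rand = random.choice(pop)
                        new.append(rand[j] - A * abs(C * rand[j] - whale[j]))
                else:                              # spiral update around the best
                    d, l = abs(best[j] - whale[j]), random.uniform(-1, 1)
                    new.append(d * math.exp(l) * math.cos(2 * math.pi * l) + best[j])
            pop[i] = [min(max(x, 0), hi) for x in new]
        best = min(pop + [best], key=makespan)
    return best, makespan(best)


print(woa())
```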

    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources and be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially varied specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and lead to performance degradation and violations of service level agreements (SLAs). To achieve efficient scheduling, these challenges should be addressed with load-balancing strategies, a problem that has been proven NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to physical machines (PMs) in infrastructure clouds, with a particular focus on load balancing. A detailed classification of load-balancing algorithms for VM placement in cloud data centers is presented, and the surveyed algorithms are categorized accordingly. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight into potential future enhancements.
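    One of the simplest heuristics in the family this survey classifies is worst-fit (least-loaded) placement: each VM is assigned to the physical machine with the most remaining capacity. The sketch below illustrates that rule only; the capacities and demands are invented, and real placement algorithms also weigh memory, network, and SLA constraints across multiple dimensions.

```python
# Worst-fit VM placement: always pick the PM with the most remaining capacity.

def place_vms(pm_capacities, vm_demands):
    """Return a list of (vm_index, pm_index) assignments, or None on failure."""
    remaining = list(pm_capacities)
    placement = []
    for vm, demand in enumerate(vm_demands):
        pm = max(range(len(remaining)), key=lambda i: remaining[i])
        if remaining[pm] < demand:
            return None                      # no PM can host this VM
        remaining[pm] -= demand
        placement.append((vm, pm))
    return placement


# Illustrative capacities (e.g., vCPUs per PM) and per-VM demands.
print(place_vms(pm_capacities=[16, 16, 8], vm_demands=[4, 6, 2, 8, 3]))
```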

    Energy and performance-optimized scheduling of tasks in distributed cloud and edge computing systems

    Infrastructure resources in distributed cloud data centers (CDCs) are shared by heterogeneous applications in a high-performance and cost-effective way. Edge computing has emerged as a new paradigm to provide access to computing capacities in end devices, yet it suffers from problems such as load imbalance, long scheduling times, and the limited power of its edge nodes. Therefore, intelligent task scheduling in CDCs and edge nodes is critically important for constructing energy-efficient cloud and edge computing systems. Current approaches cannot smartly minimize the total cost of CDCs, maximize their profit, and improve the quality of service (QoS) of tasks because of the aperiodic arrival and heterogeneity of tasks. This dissertation proposes a class of energy- and performance-optimized scheduling algorithms built on top of several intelligent optimization algorithms. It consists of two parts: background work (Chapters 3–6) and new contributions (Chapters 7–11).

    1) Background work. Chapter 3 proposes a spatial task scheduling and resource optimization method to minimize the total cost of CDCs, where bandwidth prices of Internet service providers, power grid prices, and renewable energy all vary with location. Chapter 4 presents a geography-aware task scheduling approach that considers spatial variations in CDCs to maximize the profit of their providers by intelligently scheduling tasks. Chapter 5 presents a spatio-temporal task scheduling algorithm to minimize energy cost by scheduling heterogeneous tasks among CDCs while meeting their delay constraints. Chapter 6 gives a temporal scheduling algorithm considering temporal variations of revenue, electricity prices, green energy, and public cloud prices.

    2) Contributions. Chapter 7 proposes a multi-objective optimization method for CDCs to maximize their profit and minimize the average loss possibility of tasks by determining task allocation among Internet service providers and the task service rate of each CDC. A simulated annealing-based bi-objective differential evolution algorithm is proposed to obtain an approximate Pareto-optimal set, and a knee solution is selected to schedule tasks in a high-profit and high-QoS way. Chapter 8 formulates a bi-objective constrained optimization problem and designs a novel optimization method to cope with energy cost reduction and QoS improvement; it jointly minimizes the energy cost of CDCs and the average response time of all tasks by intelligently allocating tasks among CDCs and changing the task service rate of each CDC. Chapter 9 formulates a constrained bi-objective optimization problem for the joint optimization of revenue and energy cost of CDCs, solved with an improved multi-objective evolutionary algorithm based on decomposition; it determines a high-quality trade-off between revenue maximization and energy cost minimization by considering CDCs' spatial differences in energy cost while meeting tasks' delay constraints. Chapter 10 proposes a simulated annealing-based bees algorithm to find a close-to-optimal solution; a fine-grained spatial task scheduling algorithm is then designed to minimize the energy cost of CDCs by allocating tasks among multiple green clouds and specifying the running speeds of their servers. Chapter 11 proposes a profit-maximized collaborative computation offloading and resource allocation algorithm that maximizes system profit and guarantees that task response-time limits are met in cloud-edge computing systems; the resulting single-objective constrained optimization problem is solved by a proposed simulated annealing-based migrating birds optimization. The dissertation evaluates these algorithms, models, and software with real-life data and shows that they improve the scheduling precision and cost-effectiveness of distributed cloud and edge computing systems.
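    Several of the chapters above build on simulated annealing. The following skeleton shows the general shape of such a search for a task-to-data-center assignment that minimizes energy cost; the task loads, energy prices, cooling schedule, and the absence of delay or capacity constraints are all simplifications made for illustration, not the dissertation's formulations.

```python
# Simulated-annealing skeleton for assigning tasks to data centers with
# different energy prices; all numbers are illustrative.
import math
import random

TASK_LOADS = [3.0, 1.5, 2.0, 4.0, 2.5]     # energy-equivalent work per task
ENERGY_PRICE = [0.10, 0.07, 0.12]          # price per unit energy at each CDC


def cost(assignment):
    return sum(TASK_LOADS[t] * ENERGY_PRICE[c] for t, c in enumerate(assignment))


def simulated_annealing(iters=5000, temp=1.0, cooling=0.999):
    current = [random.randrange(len(ENERGY_PRICE)) for _ in TASK_LOADS]
    best = list(current)
    for _ in range(iters):
        # Neighbor: move one randomly chosen task to a random data center.
        neighbor = list(current)
        neighbor[random.randrange(len(neighbor))] = random.randrange(len(ENERGY_PRICE))
        delta = cost(neighbor) - cost(current)
        # Accept improvements always; accept worse moves with a probability
        # that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = neighbor
            if cost(current) < cost(best):
                best = list(current)
        temp *= cooling
    return best, cost(best)


print(simulated_annealing())
```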