
    Load Balancing of Tasks on Cloud Computing Using Time Complexity of Proposed Algorithm

    Cloud computing is a growing field favored by many at present, but its popularity depends largely on its performance, which in turn depends on effective scheduling algorithms and load balancing. In this paper we address this issue and propose an algorithm for the private cloud that delivers high throughput, and one for the public cloud that addresses environment awareness together with performance. To improve throughput in the private cloud, SJF (shortest job first) is used for scheduling, and bounded waiting is used to overcome the problem of starvation. For load balancing, we monitor the load and dispatch each job to the least-loaded VM. In the public cloud, environment awareness is the key factor for profitability and future enhancement, while good performance and load balancing are also desired. While load balancing improves performance, environment awareness increases the profit of cloud providers.
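
    As a hedged illustration of the mechanism described above (not the authors' code), the following Python sketch combines SJF selection with a bounded-waiting rule that promotes any job skipped a fixed number of times, and dispatches each chosen job to the least-loaded VM; the job lengths, VM names, and waiting bound are hypothetical.

    MAX_WAIT = 5  # hypothetical bound on how many times a job may be skipped

    class Job:
        def __init__(self, name, length):
            self.name, self.length, self.skipped = name, length, 0

    def schedule(jobs, vm_loads):
        """Pick jobs shortest-first, promote any job skipped MAX_WAIT times,
        and assign each picked job to the VM with the smallest current load."""
        pending, order = list(jobs), []
        while pending:
            starving = [j for j in pending if j.skipped >= MAX_WAIT]
            chosen = min(starving or pending, key=lambda j: j.length)
            for j in pending:
                if j is not chosen:
                    j.skipped += 1            # count skips to enforce bounded waiting
            pending.remove(chosen)
            vm = min(vm_loads, key=vm_loads.get)   # least-loaded VM receives the job
            vm_loads[vm] += chosen.length
            order.append((chosen.name, vm))
        return order

    print(schedule([Job("j1", 8), Job("j2", 2), Job("j3", 5)], {"vm1": 0.0, "vm2": 0.0}))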

    Demand-driven Gaussian window optimization for executing preferred population of jobs in cloud clusters

    Scheduling is one of the essential enabling techniques for Cloud computing, as it facilitates efficient resource utilization among the jobs scheduled for processing. However, it suffers performance overheads due to inappropriate provisioning of resources to requesting jobs, so it is essential that Cloud performance be achieved through intelligent scheduling and allocation of resources. In this paper, we propose the application of a Gaussian window in which heterogeneous jobs are scheduled in round-robin fashion on different Cloud clusters. The clusters are themselves heterogeneous, with datacenters of varying server capacity. Performance evaluation results show that the proposed algorithm enhances the QoS of the computing model: allocating jobs to specific clusters improves system throughput and reduces latency.
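
    The abstract does not give the exact formulation, so the sketch below only illustrates one plausible reading: a Gaussian window over job size, centred on each cluster's preferred job size, consulted while handing jobs to clusters in round-robin order. The cluster names, preferred sizes, width sigma, and acceptance threshold are all assumptions made for this example.

    import math
    from itertools import cycle

    def gaussian_weight(job_size, preferred_size, sigma):
        """Gaussian window: job sizes close to the cluster's preference score near 1."""
        return math.exp(-((job_size - preferred_size) ** 2) / (2 * sigma ** 2))

    def assign_round_robin(jobs, clusters, sigma=10.0, threshold=0.5):
        """Walk clusters round-robin; place a job on the current cluster if the
        Gaussian weight for its size clears the threshold, else try the next cluster."""
        placement = {}
        ring = cycle(clusters.items())
        for job_id, size in jobs:
            for _ in range(len(clusters)):
                name, preferred = next(ring)
                if gaussian_weight(size, preferred, sigma) >= threshold:
                    placement[job_id] = name
                    break
            else:
                placement[job_id] = name   # no good fit: fall back to the last cluster tried
        return placement

    clusters = {"clusterA": 20, "clusterB": 60}        # preferred job size per cluster
    print(assign_round_robin([("job1", 18), ("job2", 55), ("job3", 70)], clusters))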

    Deadline-Aware Reservation-Based Scheduling

    The ever-growing need to improve return-on-investment (ROI) for cluster infrastructure that processes data generated at a higher rate than ever before introduces new challenges for big-data processing frameworks. The highly complex mixed workloads arriving at modern clusters, along with a growing number of time-sensitive critical production jobs, require cluster management systems to evolve. Most big-data systems are not only required to guarantee that production jobs complete before their deadlines, but also to minimize the latency of best-effort jobs in order to increase ROI. This research presents DARSS, a deadline-aware reservation-based scheduling system. DARSS addresses the above-stated problem by using a reservation-based approach to scheduling that supports the temporal requirements of production jobs while keeping the latency of best-effort jobs low. Fine-grained resource allocation enables DARSS to schedule more tasks than a coarser-grained approach would. Furthermore, DARSS schedules production jobs as close to their deadlines as possible; this scheduling policy allows the system to maximize the number of low-priority tasks that can be scheduled opportunistically. DARSS is a scalable system that can be integrated with YARN. DARSS is evaluated on a simulated cluster of 300 nodes against a workload derived from Google's Borg trace and is compared with Microsoft's Rayon and YARN's built-in scheduler. DARSS achieves a better production-job acceptance rate than both YARN and Rayon, and the experiments show that all of the production jobs accepted by DARSS complete before their deadlines. Furthermore, DARSS services more best-effort jobs than Rayon and, finally, offers lower latency for best-effort jobs than Rayon.
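
    To make the reservation policy concrete, here is a simplified sketch (not DARSS itself): production jobs are reserved in the latest feasible window before their deadlines on a discrete slot timeline, and whatever slot capacity remains is then offered opportunistically to best-effort tasks. The slot count, per-slot capacity, and job tuples are invented for the example.

    SLOTS = 12        # discrete time slots in the planning horizon
    CAPACITY = 4      # tasks that fit in one slot

    def reserve_production(jobs, timeline):
        """Place each (name, duration, deadline) job in the latest contiguous
        free window that still finishes by its deadline."""
        accepted = []
        for name, duration, deadline in jobs:
            for start in range(min(deadline, SLOTS) - duration, -1, -1):   # latest start first
                window = range(start, start + duration)
                if all(timeline[t] < CAPACITY for t in window):
                    for t in window:
                        timeline[t] += 1
                    accepted.append((name, start))
                    break
        return accepted

    def fill_best_effort(tasks, timeline):
        """Greedily drop best-effort tasks into any slot with spare capacity."""
        placed = []
        for name in tasks:
            for t in range(SLOTS):
                if timeline[t] < CAPACITY:
                    timeline[t] += 1
                    placed.append((name, t))
                    break
        return placed

    timeline = [0] * SLOTS
    print(reserve_production([("prod1", 3, 10), ("prod2", 2, 6)], timeline))
    print(fill_best_effort(["be1", "be2", "be3"], timeline))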

    A framework for joint resource allocation of MapReduce and web service applications in a shared cloud cluster

    The ongoing uptake of cloud-based solutions by different business domains and the rise of cross-border e-commerce in the EU call for additional public and private cloud solutions. Private clouds are an alternative for e-commerce sites to host not only Web Service (WS) applications but also Business Intelligence ones, which consist of batch and/or interactive queries and rely on the MapReduce (MR) programming model. In this study, we take the perspective of an e-commerce site hosting its WS and MR applications on a fixed-size private cloud cluster. We assume Quality of Service (QoS) guarantees must be provided to end-users, represented by upper bounds on the average response times of WS requests and on the execution times of MR jobs, as MR applications can nowadays be interactive. We consider multiple MR and WS user classes with heterogeneous workload intensities and QoS requirements. Since the cluster capacity is fixed, some requests may be rejected at heavy load, for which penalty costs are incurred. We propose a framework to jointly optimize resource allocation for WS and MR applications hosted in a private cloud, with the aim of increasing cluster utilization and reducing its operational and penalty costs. The optimization problem is formulated as a nonlinear mathematical programming model. Applying the KKT conditions, we derive an equivalent problem that can be solved efficiently by a greedy procedure. The proposed framework increases cluster utilization by up to 18% while cost savings reach up to 50% compared with partitioning the cluster resources a priori between the two workload types.
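
    As a loose illustration of the greedy flavour of such an allocation (the paper's actual model and its KKT-based reduction are more involved), the sketch below assumes each WS or MR class needs a fixed number of cores per admitted request and pays a penalty per rejection, and hands a fixed core budget to the classes in order of penalty avoided per core. The class names and all numbers are invented.

    def greedy_allocate(classes, total_cores):
        """classes: list of (name, cores_per_request, penalty_per_rejection, demand).
        Returns requests admitted per class, favouring the best penalty/core ratio."""
        admitted = {name: 0 for name, *_ in classes}
        remaining = total_cores
        # serve the work that is most expensive to reject first
        for name, cores, penalty, demand in sorted(classes, key=lambda c: c[2] / c[1], reverse=True):
            can_admit = min(demand, remaining // cores)
            admitted[name] = can_admit
            remaining -= can_admit * cores
        return admitted, remaining

    classes = [
        ("WS-gold", 2, 10.0, 30),   # WS class: 2 cores/request, penalty 10 per rejection
        ("MR-batch", 8, 4.0, 5),    # MR class: 8 cores/job, penalty 4 per rejection
    ]
    print(greedy_allocate(classes, total_cores=80))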

    Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment

    Cloud computing has become very common in recent years and is growing rapidly due to attractive benefits and features such as resource pooling, accessibility, availability, scalability, reliability, cost saving, security, flexibility, on-demand services, pay-per-use services, use from anywhere, quality of service, and resilience. With this rapid growth, many users may require services or need to execute their tasks simultaneously on resources provided by service providers. To obtain these services with the best performance, with minimum cost, response time, and makespan, and with effective use of resources, an intelligent and efficient task scheduling technique is required; it is considered one of the main and essential issues in the cloud computing environment, since it is necessary for allocating tasks to the proper cloud resources and optimizing overall system performance. To this end, researchers have put huge effort into developing several classes of scheduling algorithms suitable for various computing environments and for satisfying the needs of various types of individuals and organizations. This article provides a classification of proposed scheduling strategies and developed algorithms in the cloud computing environment, along with an evaluation of their performance. A comparison of the performance of these algorithms with existing ones is also given, and the future research work in the reviewed articles (where available) is pointed out. The review covers 88 task scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each reviewed article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task scheduling algorithms. Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective. DOI: 10.7176/IKM/12-5-03. Publication date: September 30th, 2022.

    Decentralised Workload Scheduler for Resource Allocation in Computational Clusters

    This paper presents a detailed design of a decentralised agent-based scheduler, which can be used to manage workloads within the computing cells of a Cloud system. Our proposed solution is based on the concept of service allocation negotiation, whereby all system nodes communicate with one another and the scheduling logic is decentralised. The presented architecture has been implemented, and multiple simulations were run using real-world workload traces from the Google Cluster Data project. The results were then compared with the scheduling patterns of Google’s Borg system.
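
    The abstract does not detail the negotiation protocol, so the sketch below shows one plausible round under assumptions of ours: the node holding a workload asks every peer for a bid, peers that can fit the workload reply with the headroom they would have left, and the best bid wins the allocation. Node names and capacities are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        capacity: float
        load: float = 0.0

        def bid(self, demand):
            """Offer the headroom left after taking the workload, or decline with None."""
            spare = self.capacity - self.load
            return spare - demand if spare >= demand else None

    def negotiate(nodes, demand):
        """Collect bids from all nodes and allocate to the most comfortable bidder."""
        bids = {}
        for node in nodes:
            offer = node.bid(demand)
            if offer is not None:
                bids[node.name] = offer
        if not bids:
            return None                      # no node can take the workload
        winner = max(bids, key=bids.get)
        next(n for n in nodes if n.name == winner).load += demand
        return winner

    cell = [Node("n1", 16), Node("n2", 32), Node("n3", 8)]
    for task in [4, 12, 20]:
        print(task, "->", negotiate(cell, task))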

    Control over the Cloud: Offloading, Elastic Computing, and Predictive Control

    The thesis studies the use of cloud-native software and platforms to implement critical closed-loop control. It considers technologies that provide low-latency and reliable wireless communication, in terms of edge clouds and massive MIMO, but also approaches industrial IoT and the services of a distributed cloud as an extension of commercial off-the-shelf software and systems. First, the thesis defines the cloud control challenge, as control over the cloud and controller offloading. This is followed by a demonstration of closed-loop control, using MPC, running on a testbed representing the distributed cloud. The testbed is implemented using an IoT device, clouds, next-generation wireless technology, and a distributed execution platform. Platform details are provided and the feasibility of the approach is shown. The evaluation includes relocating an online MPC to various locations in the distributed cloud. Offloaded control is examined next, through further evaluation of cloud-native software and frameworks. This is followed by three controller designs tailored for use with the cloud. The first controller solves MPC problems in parallel to implement a variable-horizon controller. The second is a hierarchical design, in which rate switching is used to implement constrained control, with a local and a remote mode. The third design focuses on reliability: here, the MPC problem is extended to include recovery paths that represent a fallback mode, used by a control client if it experiences connectivity issues. An implementation is detailed and examined. In the final part of the thesis, the focus is on latency and congestion. A cloud control client can experience long and variable delays from the network and from computation, and the services it uses can become overloaded. These problems are approached by using predicted control inputs, dynamically adjusting the control frequency, and horizontally scaling the cloud service. Several examples are shown through simulation and on real clouds, including admitting control clients into a cluster that becomes temporarily overloaded.
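
    To make the "predicted control inputs" idea concrete, here is a hedged sketch rather than the thesis code: the client caches the predicted input sequence from the latest cloud MPC solution and, whenever the next solution is delayed by the network or by computation, applies the next input from that cached plan instead of blocking. The horizon length and the demo values are assumptions.

    class CloudMpcClient:
        def __init__(self, horizon):
            self.horizon = horizon
            self.predicted_inputs = [0.0] * horizon   # fallback plan from the last solve
            self.step_in_plan = 0

        def on_cloud_reply(self, input_sequence):
            """A fresh MPC solution arrived from the cloud: restart along the new plan."""
            self.predicted_inputs = list(input_sequence)
            self.step_in_plan = 0

        def next_input(self, reply_arrived):
            """Called every sampling period; if no reply arrived in time, advance
            along the previously predicted input sequence instead of blocking."""
            if not reply_arrived:
                self.step_in_plan = min(self.step_in_plan + 1, self.horizon - 1)
            return self.predicted_inputs[self.step_in_plan]

    client = CloudMpcClient(horizon=5)
    client.on_cloud_reply([1.0, 0.8, 0.6, 0.4, 0.2])   # first plan from the cloud
    print(client.next_input(reply_arrived=True))        # on time: apply 1.0
    print(client.next_input(reply_arrived=False))       # reply late: fall back to 0.8
    print(client.next_input(reply_arrived=False))       # still late: fall back to 0.6
    client.on_cloud_reply([0.9, 0.7, 0.5, 0.3, 0.1])    # new solution finally arrives
    print(client.next_input(reply_arrived=True))        # back on the fresh plan: 0.9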

    Load Prediction and Balancing for Cloud-Based Voice-over-IP Solutions
