2 research outputs found

    Cloud Computing Systems Exploration over Workload Prediction Factor in Distributed Applications

    This paper highlights different techniques for workload prediction in cloud computing. Cloud computing arranges resources so that they are made available to customers on demand. Today, most organizations use cloud computing because it reduces operational cost; it also reduces an organization's overhead, since many hardware and software platforms no longer have to be implemented in-house. These services are provided by the cloud provider on a pay-per-use basis. There are many cloud service providers today, and in this competitive era every provider strives to offer better services to its customers. To fulfill customers' requirements, dynamic provisioning serves this purpose in a cloud system: resources can be released and allocated again at a later stage as needs change. That is why resource scaling becomes a great challenge for cloud providers. There are many approaches to scaling the number of instances of a resource; the two main ones used in cloud systems are proactive and reactive. A reactive approach reacts at a later stage, once demand has already changed, while a proactive approach predicts resource needs in advance. The cloud provider needs to predict in advance the number of resources an application will use, and historical data and patterns can be used for this workload prediction. The benefit of the proactive approach is that the required number of instances of a resource is available in advance for future use, which results in improved performance for cloud systems.
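    As a rough illustration of the proactive approach described above, the sketch below forecasts the next interval's instance count from a moving average of historical demand. The paper does not specify a prediction model, so the moving-average forecast, the headroom factor, and all names (predict_instances, capacity_per_instance, history) are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of a proactive scaling predictor (illustrative only;
# the surveyed paper does not prescribe this model).
from collections import deque
import math


def predict_instances(history, capacity_per_instance, window=5, headroom=1.2):
    """Forecast how many instances the next interval needs.

    history: recent per-interval request counts (most recent last).
    capacity_per_instance: requests one instance serves per interval.
    headroom: over-provisioning factor to absorb prediction error.
    """
    recent = list(history)[-window:]
    forecast = sum(recent) / len(recent)  # moving-average demand forecast
    return max(1, math.ceil(forecast * headroom / capacity_per_instance))


# Example: provision ahead of time from observed demand history.
history = deque([120, 150, 170, 160, 200], maxlen=50)
print(predict_instances(history, capacity_per_instance=50))  # -> 4
```

    A reactive policy would instead scale only after the current interval's demand has exceeded capacity; the proactive variant above pays for the headroom but has the instances ready before demand arrives.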

    Service-Level-Driven Load Scheduling and Balancing in Multi-Tier Cloud Computing

    Cloud computing environments often deal with random-arrival computational workloads that vary in resource requirements and demand high Quality of Service (QoS) obligations. A Service Level Agreement (SLA) is employed to govern the QoS obligations of the cloud service provider to the client. The service provider's conundrum revolves around the desire to maintain a balance between the limited resources available for computing and the high QoS requirements of varying, random computing demands. Any imbalance in managing these conflicting objectives may result in either dissatisfied clients, which can incur potentially significant commercial penalties, or an over-provisioned cloud computing environment that can be significantly costly to acquire and operate. To optimize response to such client demands, cloud service providers organize the cloud computing environment as a multi-tier architecture. Each tier executes its designated tasks and passes them to the next tier, in a fashion similar, but not identical, to traditional job-shop environments. Each tier consists of multiple computing resources, and an optimization process must take place to assign and schedule the appropriate tasks of the job on the resources of the tier so as to meet the job's QoS expectations. Thus, scheduling clients' workloads as they arrive at the multi-tier cloud environment to ensure their timely execution has been a central issue in cloud computing. Various approaches have been presented in the literature to address this problem: Join-Shortest-Queue (JSQ), Join-Idle-Queue (JIQ), enhanced Round Robin (RR) and Least Connection (LC), as well as enhanced MinMin and MaxMin, to name a few. This thesis presents a service-level-driven load scheduling and balancing framework for multi-tier cloud computing. A model is used to quantify the penalty a cloud service provider incurs as a function of the jobs' total waiting time and QoS violations. This model facilitates penalty mitigation in situations of high demand and resource shortage. The framework accounts for multi-tier job execution dependencies in capturing QoS violation penalties as client jobs progress through subsequent tiers, thus optimizing performance at the multi-tier level. Scheduling and balancing operations are employed to distribute client jobs on resources such that the total waiting time, and hence the SLA violations of client jobs, is minimized. Optimal job allocation and scheduling is an NP-hard combinatorial problem. The dynamics of random job arrival make the optimality goal even harder to achieve and maintain as new jobs arrive at the environment. The thesis therefore proposes queue virtualization as an abstraction that allows jobs to migrate between resources within a given tier, as well as altering the sequencing of job execution within a given resource, during the optimization process. Given the computational complexity of the job allocation and scheduling problem, a genetic algorithm is proposed to seek optimal solutions, with queue virtualization serving as the basis for defining the chromosome structure and operations. As computing jobs tend to vary with respect to delay tolerance, two SLA scenarios are tackled: equal cost of time delays and differentiated cost of time delays. Experimental work is conducted to investigate the performance of the proposed approach at both the tier and the system level.
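    To make the penalty idea above concrete, here is a minimal single-resource sketch: jobs queued back-to-back accumulate a waiting-time cost, and each job that completes past its SLA deadline adds a violation penalty. The thesis abstract does not give the actual penalty formula, the multi-tier dependency handling, or the GA encoding, so the linear form and all names (Job, schedule_penalty, wait_cost, violation_cost) are hypothetical.

```python
# Hypothetical sketch of a waiting-time + SLA-violation penalty on one
# resource of one tier (assumed linear form, not the thesis's model).
from dataclasses import dataclass


@dataclass
class Job:
    proc_time: float  # service time on this tier
    deadline: float   # SLA completion deadline, relative to arrival


def schedule_penalty(sequence, wait_cost=1.0, violation_cost=10.0):
    """Assumed penalty: wait_cost * total waiting time, plus a flat
    violation_cost for every job that completes past its deadline."""
    clock, penalty = 0.0, 0.0
    for job in sequence:
        penalty += wait_cost * clock       # time this job spent queued
        clock += job.proc_time
        if clock > job.deadline:           # QoS (SLA) violation
            penalty += violation_cost
    return penalty


jobs = [Job(3, 10), Job(1, 2), Job(4, 20)]
print(schedule_penalty(jobs))  # 17.0: waits 0+3+4, one deadline miss
# Re-sequencing within the resource is one queue-virtualization move:
print(schedule_penalty(sorted(jobs, key=lambda j: j.proc_time)))  # 5.0
```

    In the framework described above, a genetic algorithm would explore such re-sequencing moves, along with job migration between resources of a tier, using a penalty of this kind as its fitness function.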