
    Utilizing Divisible Load Scheduling Theorem in Round Robin Algorithm for Load Balancing In Cloud Environment

    Cloud Computing is a new paradigm in computing that promises a shift from a model in which an organization must invest heavily in limited, internally managed IT resources to one in which the organization can buy or rent resources that are managed by a cloud provider and pay per use. With the fast growth of cloud computing, one of the areas that is paramount to cloud service providers is the establishment of an effective load balancing algorithm that assigns tasks to the best Virtual Machines (VMs) in such a way that it provides satisfactory performance to both cloud users and providers. The Round Robin (RR) algorithm is one of these load balancing algorithms in the cloud environment. In this paper, an analysis of various Round Robin load balancing algorithms is done first. Secondly, a new Virtual Machine (VM) load balancing algorithm has been proposed and implemented: the 'Divisible Weighted Round Robin (DWRR) Load Balancing Algorithm'. This proposed algorithm applies the Divisible Load Scheduling Theorem to the Round Robin load balancing algorithm. In order to evaluate the performance of the proposed algorithm (DWRR), the researcher used the CloudSim simulation toolkit to compare the performance of DWRR against the other types of Round Robin algorithms. After a thorough comparison between these algorithms, the results showed that DWRR outperforms the various types of Round Robin algorithms (Weighted Round Robin and Round Robin with server affinity) in terms of execution time (makespan) with the least complexity.
    Keywords: Scheduling Algorithm, performance, cloud computing, load balancing algorithm, Divisible Load Scheduling Theory
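
    Since the abstract does not give the DWRR pseudocode, the sketch below is only one plausible reading of the idea: each task's load is divisible and is split across VMs in proportion to their capacity weights, as in weighted round robin. The VM weights, task sizes and the makespan estimate are illustrative assumptions.

```python
# Illustrative sketch only: one plausible reading of a divisible weighted
# round-robin dispatcher, assuming each task's load can be split among VMs
# in proportion to their (hypothetical) processing-power weights.

def divisible_weighted_round_robin(task_loads, vm_weights):
    """Split every task's load across VMs proportionally to their weights."""
    total_weight = sum(vm_weights)
    assignments = [0.0] * len(vm_weights)          # load placed on each VM
    for load in task_loads:
        for vm, weight in enumerate(vm_weights):
            assignments[vm] += load * weight / total_weight
    return assignments

if __name__ == "__main__":
    vm_weights = [4, 2, 1]                         # assumed relative VM capacities
    task_loads = [700, 350, 140]                   # arbitrary task sizes
    loads = divisible_weighted_round_robin(task_loads, vm_weights)
    # Makespan is driven by the most loaded VM relative to its capacity.
    makespan = max(l / w for l, w in zip(loads, vm_weights))
    print(loads, makespan)
```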

    An Optimization of Energy Saving in Cloud Environment

    Cloud computing is a technology in distributed computing which facilitates a pay-per-use model based on user demand and requirements. A cloud can be defined as a collection of virtual machines, providing both computational and storage facilities. The goal of cloud computing is to provide efficient access to remote and geographically distributed resources. Cloud computing is developing day by day and faces many challenges; two of them are i) load balancing and ii) task scheduling. Load balancing is the division of the amount of work that a system has to do between two or more systems, so that more work gets done in the same amount of time and all users are served faster. Load balancing can be implemented with hardware, software, or a combination of both, and is mainly used for server clustering. Task scheduling is a set of policies to control the order of work performed by a system; it is also a technique used to improve the overall execution time of a job. Task scheduling is responsible for selecting the most suitable resources for task execution, taking some parameters into consideration. A good task scheduler adapts its scheduling strategy to the changing environment and the type of task. In this paper, the Energy Saving Load Balancing (ESLB) algorithm and the Energy Saving Task Scheduling (ESTS) algorithm are proposed. Various scheduling algorithms (FCFS, RR, Priority, and SJF) are reviewed and compared. The ESLB and ESTS algorithms were tested in the CloudSim toolkit and the results show better performance
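
    The abstract reviews classic schedulers (FCFS, RR, Priority, SJF) rather than detailing ESLB/ESTS, so the minimal sketch below only illustrates why the choice of baseline matters: it contrasts average waiting time under FCFS and SJF on a single machine with assumed task lengths; it is not the paper's algorithm.

```python
# Minimal sketch, not the paper's ESLB/ESTS algorithms: it compares two of
# the baseline schedulers the abstract reviews (FCFS and SJF) by average
# waiting time on a single machine with assumed task lengths.

def average_waiting_time(burst_times):
    """Average waiting time when tasks run back to back in the given order."""
    waited, clock = 0, 0
    for burst in burst_times:
        waited += clock
        clock += burst
    return waited / len(burst_times)

if __name__ == "__main__":
    tasks = [8, 3, 12, 1, 6]                        # hypothetical task lengths (ms)
    print("FCFS :", average_waiting_time(tasks))            # arrival order
    print("SJF  :", average_waiting_time(sorted(tasks)))    # shortest job first
```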

    Cloud computing—effect of evolutionary algorithm on load balancing

    © Springer International Publishing Switzerland 2015. In cloud computing, due to the multi-tenancy of resources, there is an essential need for effective load management to ensure efficient load sharing. Depending on the structure of the tasks, different algorithms can be applied to distribute the load. Workflow scheduling, as one of those load distribution algorithms, is specifically designed to schedule dependent tasks on available resources. Considering a job as an elastic network of dependent tasks, this paper describes how an evolutionary algorithm, with its mathematical apparatus, can be applied to workflow scheduling in cloud computing. In this research, the impact of the Generalized Spring Tensor Model on workflow load balancing, in the context of mathematical patterns, has been studied. This research can establish patterns in cloud computing that can be applied in designing heuristic workflow load balancing algorithms to identify the load patterns of the cloud network. Furthermore, the outcome of this research can help end users recognize the threats of task failure when processing e-business and e-science data in the cloud environment
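
    As a rough illustration of evolutionary workflow scheduling in general (not the tensor-based method studied in the paper), the sketch below evolves a task-to-VM assignment by mutation and selection to shrink the makespan; the task lengths, VM speeds and evolution parameters are invented for the example.

```python
# A minimal sketch of how an evolutionary search could assign workflow tasks
# to VMs; it is not the paper's tensor-based method. Task lengths, VM speeds
# and the mutation scheme are illustrative assumptions.
import random

TASKS = [12, 7, 20, 4, 9, 15]        # assumed task lengths
VM_SPEEDS = [2.0, 1.0, 1.5]          # assumed relative VM speeds

def makespan(assignment):
    """Finish time of the busiest VM under a task-to-VM assignment."""
    finish = [0.0] * len(VM_SPEEDS)
    for task, vm in zip(TASKS, assignment):
        finish[vm] += task / VM_SPEEDS[vm]
    return max(finish)

def evolve(generations=200, population=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(VM_SPEEDS)) for _ in TASKS] for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: population // 2]            # keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(len(TASKS))] = rng.randrange(len(VM_SPEEDS))
            children.append(child)                    # single-point mutation
        pop = survivors + children
    return min(pop, key=makespan)

if __name__ == "__main__":
    best = evolve()
    print(best, makespan(best))
```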

    Cost-Effective Scheduling and Load Balancing Algorithms in Cloud Computing Using Learning Automata

    Cloud computing is a distributed computing model in which access is based on demand. A cloud computing environment includes a wide variety of resource suppliers and consumers; hence, efficient and effective methods for task scheduling and load balancing are required. This paper presents a new approach to task scheduling and load balancing in the cloud computing environment with an emphasis on the cost-efficiency of task execution across resources. The proposed algorithms are based on the fair distribution of jobs between machines, which prevents an unconventional increase in the price of a machine and the unemployment of other machines. Two parameters, Total Cost and Final Cost, are designed to achieve this goal; applying them creates a fair basis for job scheduling and load balancing. To implement the proposed approach, learning automata are used as an effective and efficient reinforcement learning technique. Finally, to show the effectiveness of the proposed algorithms, we conducted simulations using the CloudSim toolkit and compared the proposed algorithms with other existing algorithms such as BCO, PES, CJS, PPO and MCT. The proposed algorithms can balance the Final Cost and Total Cost of machines, and outperform the best existing algorithms in terms of efficiency and imbalance degree
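
    The Total Cost and Final Cost formulation is not spelled out in the abstract, so the sketch below only shows the underlying reinforcement-learning machinery: a linear reward-inaction learning automaton that gradually shifts its machine-selection probabilities toward cheaper machines; the cost figures and learning rate are assumptions.

```python
# Minimal sketch of a linear reward-inaction learning automaton picking a
# machine for each job; the cost model and learning rate are assumptions,
# not the Total Cost / Final Cost formulation from the paper.
import random

def lri_scheduler(machine_costs, jobs=500, a=0.05, seed=1):
    rng = random.Random(seed)
    n = len(machine_costs)
    probs = [1.0 / n] * n                     # start with a uniform policy
    for _ in range(jobs):
        machine = rng.choices(range(n), weights=probs)[0]
        # Reward the choice when it is (near) the cheapest option.
        rewarded = machine_costs[machine] <= min(machine_costs) * 1.1
        if rewarded:                          # reward-inaction update: only act on reward
            probs = [p + a * (1 - p) if i == machine else p * (1 - a)
                     for i, p in enumerate(probs)]
    return probs

if __name__ == "__main__":
    print(lri_scheduler([0.8, 1.4, 1.0]))     # assumed per-job machine costs
```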

    Improved QoS with Fog computing based on Adaptive Load Balancing Algorithm

    As the number of sensing devices rises, traffic on cloud servers is increasing day by day. When a device connected to the IoT wants access to data, cloud computing encourages the pairing of fog and cloud nodes to provide that information. One of the key needs in a fog-based cloud system is efficient job scheduling to decrease data delay and improve the Quality of Service (QoS). Researchers have used a variety of strategies to maintain the QoS criteria. However, the increased service delay caused by bursty traffic affects job scheduling, which leads to an unbalanced load on the fog environment. The proposed work uses a novel model that combines the features and working style of a genetic algorithm and an optimization algorithm with load balancing scheduling on the fog nodes. The performance of the proposed hybrid model is contrasted with other well-known algorithms on fundamental benchmark optimization test functions. The proposed work displays better results in sustaining the task scheduling process when compared to existing algorithms, including the Round Robin (RR) method, Hybrid RR, Hybrid Threshold-based and Hybrid Predictive-based models, which confirms the efficacy of the proposed load balancing model in improving the quality of service in the fog environment
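
    The hybrid genetic/optimization model itself is not described in the abstract, so the sketch below only illustrates the kind of baseline the comparison implies: greedy least-loaded placement of tasks on fog nodes together with a degree-of-imbalance metric; the task sizes and node count are assumed.

```python
# Not the paper's hybrid genetic/optimization model; just a small sketch of
# a simple fog load-balancing baseline: send each task to the currently
# least-loaded fog node and measure the resulting imbalance.

def least_loaded_assignment(task_loads, node_count):
    loads = [0.0] * node_count
    for task in task_loads:
        target = loads.index(min(loads))      # pick the least-loaded node
        loads[target] += task
    return loads

def imbalance_degree(loads):
    """(max - min) / mean load, a common load-balance quality metric."""
    mean = sum(loads) / len(loads)
    return (max(loads) - min(loads)) / mean if mean else 0.0

if __name__ == "__main__":
    tasks = [5, 9, 2, 7, 4, 11, 3, 6]         # assumed task sizes
    loads = least_loaded_assignment(tasks, node_count=3)
    print(loads, imbalance_degree(loads))
```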

    Generalized Spring Tensor Model: A New Improved Load Balancing Method in Cloud Computing

    Significant characteristics of cloud computing such as elasticity, scalability and the payment model attract businesses to replace their legacy infrastructure with the newly offered cloud technologies. As the number of cloud users grows rapidly, extensive load volume will affect the performance and operation of the cloud. Therefore, it is essential to develop smarter load management methods to ensure effective task scheduling and efficient management of resources. In order to reach these goals, a variety of algorithms have been explored and tested by many researchers. But so far, not many operational load balancing algorithms have been proposed that are capable of forecasting future load patterns in cloud-based systems. The aim of this research is to design an effective load management tool, characterized by the collective behavior of workflow tasks and jobs, that is able to predict the various dynamic load patterns occurring in cloud networks. The results show that the proposed load balancing algorithm can visualize the network load by projecting the existing relationships among submitted tasks and jobs. This visualization can be particularly useful for monitoring the robustness and stability of cloud systems. © Springer International Publishing Switzerland 2015
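
    Loosely inspired by the abstract's view of a job as an elastic network of dependent tasks, the toy below builds a graph Laplacian from an assumed dependency list and flags the most strongly coupled task; it is far simpler than the Generalized Spring Tensor Model and is only meant to make the network-of-tasks idea concrete.

```python
# Toy sketch only: treat the workflow as an undirected graph of dependent
# tasks, build its Laplacian L = D - A, and read task coupling off the
# diagonal (node degree). The dependency list is an invented example.

def laplacian(num_tasks, dependencies):
    """Graph Laplacian of an undirected task-dependency graph."""
    lap = [[0] * num_tasks for _ in range(num_tasks)]
    for a, b in dependencies:
        lap[a][a] += 1
        lap[b][b] += 1
        lap[a][b] -= 1
        lap[b][a] -= 1
    return lap

if __name__ == "__main__":
    deps = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]   # assumed workflow edges
    lap = laplacian(5, deps)
    coupling = [lap[i][i] for i in range(5)]          # degree of each task
    print("most coupled task:", coupling.index(max(coupling)), coupling)
```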

    Optimizing Cloud Computing Applications with a Data Center Load Balancing Algorithm

    Delivering scalable and on-demand computing resources to users through the cloud has become a common paradigm. The issues of effective resource utilisation and application performance optimisation, however, become more pressing as the demand for cloud services rises. In order to ensure efficient resource allocation and improve application performance, load balancing techniques are essential for dispersing incoming network traffic over several servers. Workload balancing in the context of cloud computing, particularly in the Infrastructure as a Service (IaaS) model, continues to be difficult: given the available virtual machines and their limited resources, efficient job allocation is essential. To prevent prolonged execution delays or machine breakdowns, cloud service providers must maintain excellent performance and avoid overloading or underloading hosts. The importance of task scheduling in load balancing necessitates compliance with the Service Level Agreement (SLA) standards established by cloud developers for consumers. The suggested technique takes into account Quality of Service (QoS) job parameters, VM priorities, and resource allocation in order to maximise resource utilisation and improve load balancing. The proposed load balancing method addresses these problems and the current research gap, and is in line with the results in the existing literature. According to the experimental findings, the proposed algorithm outperforms the Dynamic LBA algorithm currently in use, achieving an average resource utilisation of 78%. The suggested algorithm also exhibits excellent performance in terms of reduced makespan and decreased execution time
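
    The abstract names QoS job parameters and VM priorities as inputs without giving the algorithm, so the hedged sketch below only illustrates one way such inputs could be combined: scoring candidate VMs by a weighted mix of priority and remaining capacity; the weights and VM figures are invented.

```python
# Hedged sketch, not the paper's algorithm: places a task on the VM with the
# best score from a simple mix of VM priority and remaining headroom. The
# scoring weight and VM figures are assumptions for illustration.

def place_task(task_mips, vms, priority_weight=0.4):
    """Return the index of the best VM by a priority/headroom score."""
    best, best_score = None, float("-inf")
    for i, vm in enumerate(vms):
        headroom = vm["capacity"] - vm["used"]
        if headroom < task_mips:
            continue                                   # skip hosts without room
        score = (priority_weight * vm["priority"]
                 + (1 - priority_weight) * headroom / vm["capacity"])
        if score > best_score:
            best, best_score = i, score
    return best

if __name__ == "__main__":
    vms = [{"capacity": 1000, "used": 700, "priority": 0.9},
           {"capacity": 1500, "used": 400, "priority": 0.5},
           {"capacity": 800,  "used": 100, "priority": 0.7}]
    print("task placed on VM", place_task(task_mips=300, vms=vms))
```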

    A comparison of resource allocation process in grid and cloud technologies

    Grid Computing and Cloud Computing are two different technologies that have emerged to validate the long-held dream of computing as a utility, which led to an important revolution in the IT industry. These technologies came with several challenges in terms of middleware, programming models, resource management and business models, challenges that are seriously considered by distributed systems research. Resource allocation is a key challenge in both technologies, as poor allocation can cause resource wastage and service degradation. This paper presents a comprehensive study of the resource allocation processes in both technologies. It provides researchers with an in-depth understanding of all resource-allocation-related aspects and their associated challenges, including load balancing, performance, energy consumption, scheduling algorithms, resource consolidation and migration. The comparison also contributes an informal definition of the Cloud resource allocation process. Resources in the Cloud are shared by all users in a time- and space-sharing manner, in contrast to the dedicated resources governed by a queuing system in Grid resource management. Cloud resource allocation therefore faces extra challenges, notably achieving good load balancing and making the right consolidation decisions
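
    To make the stated contrast concrete, the toy below sets the abstract's informal distinction side by side: grid-style allocation hands whole dedicated nodes to queued jobs, while cloud-style allocation time/space-shares one node among several users; the job names, node counts and capacities are made up.

```python
# Toy contrast only, under the abstract's informal distinction between
# queue-based dedicated Grid resources and shared Cloud resources.
from collections import deque

def grid_allocate(jobs, free_nodes):
    """Dedicated nodes: each queued job takes one whole node until none remain."""
    queue, placement = deque(jobs), {}
    while queue and free_nodes:
        placement[queue.popleft()] = f"node-{free_nodes.pop()}"
    return placement, list(queue)                      # remaining jobs keep waiting

def cloud_allocate(jobs, node_capacity):
    """Shared node: every job gets a slice of the same node's capacity."""
    share = node_capacity / len(jobs)
    return {job: share for job in jobs}

if __name__ == "__main__":
    jobs = ["j1", "j2", "j3"]
    print(grid_allocate(jobs, free_nodes=[2, 1]))      # one job left queued
    print(cloud_allocate(jobs, node_capacity=16))      # 16 cores split three ways
```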

    5G with Fog Computing based Privacy System in Data Analytics for Healthcare System by AI Techniques

    Fog computing architecture is an extended version of the cloud computing architecture that reduces the load of data transmission and storage on the cloud platform. The fog architecture increases performance with improved efficiency compared with the cloud environment, and uses 5G-based Artificial Intelligence (AI) technology for performance enhancement. However, due to the vast range of available data, privacy is challenging in the fog environment. This paper proposes a Medical Fog Computing Load Scheduling (MFCLS) model for data privacy enhancement. The developed model applies optimization-based delay scheduling for task assignment in the fog architecture. The healthcare data were collected and processed with 5G technology. The MFCLS model uses entropy-based feature selection on the healthcare data, considering a total of 13 attributes for feature evaluation. By accounting for service level violations, the fog computing network architecture achieves reduced energy consumption. The developed load balancing reduces the service violation count while providing the desired data privacy in the fog model. The estimated time frame is minimal for the proposed MFCLS model compared with the existing DAG model. The performance analysis shows that the SLRVM and ECRVM values achieved by the proposed MFCLS are 28 and 43, respectively. The comparative examination of the proposed MFCLS model with the existing DAG model shows that the proposed model exhibits ~6% performance enhancement in data privacy for the healthcare data
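
    Since the abstract names entropy-based feature selection over 13 attributes without further detail, the minimal sketch below shows generic entropy-based feature ranking on a few invented categorical records; it is not the MFCLS implementation.

```python
# Minimal sketch of entropy-based feature ranking, assuming categorical
# healthcare attributes; the records and attribute names are invented.
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy of a list of categorical values."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def rank_features(records, k=2):
    """Score each attribute by entropy and keep the top-k attribute names."""
    names = records[0].keys()
    scores = {n: shannon_entropy([r[n] for r in records]) for n in names}
    return sorted(scores, key=scores.get, reverse=True)[:k]

if __name__ == "__main__":
    records = [{"age_band": "60+", "bp": "high", "smoker": "no"},
               {"age_band": "40s", "bp": "high", "smoker": "yes"},
               {"age_band": "60+", "bp": "low",  "smoker": "no"},
               {"age_band": "20s", "bp": "high", "smoker": "no"}]
    print(rank_features(records, k=2))
```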