    PRIORITIZED TASK SCHEDULING IN FOG COMPUTING

    Cloud computing is an environment in which virtual resources are shared among many users over a network. A user of Cloud services is billed according to the pay-per-use model associated with this environment, so efficient resource allocation is essential to keep the bill to a minimum. To handle the many requests that clients send to the Cloud, tasks need to be processed according to the SLAs defined by the clients. The daily increase in the usage of Cloud services has introduced delays in the transmission of requests, and these delays can cause clients to wait for task responses beyond the assigned deadline. Fog Computing helps overcome these concerns because it is physically placed closer to the clients: the Fog layer sits between the client and the Cloud layer and greatly reduces the delay in transmitting requests, processing them, and sending responses back to the client. This paper discusses an algorithm that schedules tasks by calculating the priority of each task in the Fog layer. Tasks with higher priority are processed first so that deadlines are met, which makes the algorithm practical and efficient.
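
    The abstract does not give the exact priority formula, so the following minimal Python sketch is only illustrative: it assumes priority is derived from slack (deadline minus estimated processing time) and lets a fog node pop tasks from a min-heap, tightest slack first. The function name compute_priority and the sample deadlines are assumptions, not details from the paper.

        import heapq
        from dataclasses import dataclass, field

        @dataclass(order=True)
        class Task:
            priority: float                          # only field used for heap ordering
            task_id: str = field(compare=False)
            deadline: float = field(compare=False)   # seconds from now until the deadline
            length: float = field(compare=False)     # estimated processing time in seconds

        def compute_priority(deadline, length):
            # Assumed heuristic: smaller slack (deadline - processing time) = higher urgency.
            # The paper's actual priority formula is not given in the abstract.
            return deadline - length

        def schedule(requests):
            # Build the priority queue, then process tasks in priority order on the fog node.
            heap = [Task(compute_priority(r["deadline"], r["length"]),
                         r["id"], r["deadline"], r["length"]) for r in requests]
            heapq.heapify(heap)
            clock = 0.0
            while heap:
                task = heapq.heappop(heap)
                clock += task.length
                status = "met" if clock <= task.deadline else "MISSED"
                print(f"{task.task_id}: finished at t={clock:.1f}s, deadline {status}")

        schedule([{"id": "t1", "deadline": 5.0, "length": 2.0},
                  {"id": "t2", "deadline": 3.0, "length": 1.0},
                  {"id": "t3", "deadline": 10.0, "length": 4.0}])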

    Implementation of Latency by Using Distributed Load Balancing Algorithm for Logistics

    Cloud computing provides data centers at any location in the world. A distributed data center offers many resources that logistics suppliers and users can buy, sell, and rent, such as execution time, bandwidth, cost, storage, and memory. With cloud computing, logistics suppliers and users do not need to know where the data center is located or how to operate and maintain its resources; they only need to know how to connect to these resources and use the applications needed to perform their jobs. Many companies want to provide services using their own local data centers. In a logistics information system, sharing and transferring information through a cloud service provider to different logistics partners is a big challenge: every logistics user wants information sharing to happen in real time at minimum cost. The number of logistics partners and users is growing, and they need services for balancing network traffic and load. The paper is organized as follows: Section 2 discusses related works. Section 3 discusses scheduling and load balancing algorithms. Section 4 presents the design of our proposed algorithm. Section 5 provides experimental results and performance analysis. Section 6 concludes our research and outlines future work. Keywords: Cloud Computing, Logistics information system, Load balancing algorithms, DSB.

    Experimental setup for investigating the efficient load balancing algorithms on virtual cloud

    Cloud computing has emerged as the primary choice for developers building applications that require high-performance computing, and virtualization technology has helped distribute resources to multiple users. Increased use of cloud infrastructure has led to the challenge of developing a load balancing mechanism that provides optimized use of resources and better performance. Round robin and least connections load balancing algorithms have been developed to allocate user requests across a cluster of servers in the cloud in a time-bound manner. In this paper, we apply the round robin and least connections approaches to load balancing with HAProxy, virtual machine clusters, and web servers. The experimental results are visualized and summarized using Apache JMeter, and a comparative study of round robin and least connections is also presented. The experimental setup and results show that the round robin algorithm performs better than the least connections algorithm on all load balancer metrics measured in this paper.
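
    HAProxy ships with roundrobin and leastconn balance modes corresponding to the two policies compared above; the short Python sketch below re-implements both selection rules outside HAProxy so the behavioural difference is easy to see. The server names and connection counts are invented for illustration.

        from itertools import cycle

        servers = {"web1": 0, "web2": 0, "web3": 0}   # active connection counts (illustrative)

        # Round robin: hand out servers in a fixed rotation, regardless of current load.
        rr_cycle = cycle(servers)

        def round_robin():
            return next(rr_cycle)

        # Least connections: always pick the server with the fewest active connections.
        def least_connections():
            return min(servers, key=servers.get)

        def dispatch(policy, n_requests):
            for _ in range(n_requests):
                chosen = policy()
                servers[chosen] += 1                  # request starts on the chosen server
            return dict(servers)

        print("round robin:      ", dispatch(round_robin, 6))

        # Reset counters, then mark one server as already busy to show how
        # leastconn steers new requests away from it while roundrobin would not.
        for s in servers:
            servers[s] = 0
        servers["web1"] = 5
        print("least connections:", dispatch(least_connections, 6))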

    Dynamic Task Migration for Enhanced Load Balancing in Cloud Computing using K-means Clustering and Ant Colony Optimization

    Cloud computing must allocate resources efficiently, and timely execution of user tasks is pivotal for seamless service delivery. Central to this is the dynamic orchestration of task scheduling and migration, which together contribute to load balancing across virtual machines (VMs). Load balancing is a cornerstone that enables clouds to fulfill user requirements promptly. To facilitate task migration, we propose a novel method that exploits the synergy of K-means clustering and Ant Colony Optimization (ACO). Our approach aims to improve the cloud ecosystem on several critical factors, such as the system's makespan, resource utilization efficiency, and workload imbalance. The core objective of our work is the reduction of makespan, a metric directly tied to overall system performance. By employing K-means clustering, we group tasks with similar attributes, enabling the identification of prime candidates for migration. The ACO algorithm then orchestrates the migration process with a focus on achieving global optimization. The benefits of our approach are quantitatively assessed through comprehensive comparisons with established algorithms, namely Round Robin (RR), First-Come-First-Serve (FCFS), Shortest Job First (SJF), and a genetic load balancing algorithm. For this evaluation we use the CloudSim simulation tool, which provides a platform for realistic and accurate performance analysis. By combining task migration with these optimization techniques, the proposed approach reduces makespan, raises resource utilization efficiency, and lowers the degree of workload imbalance, paving the way for a more responsive and dependable cloud infrastructure that caters to user needs with heightened efficacy. Through rigorous comparisons and analysis, we underscore the superior attributes of our approach and its potential to reshape the landscape of cloud computing optimization.
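
    The abstract names K-means for grouping tasks and ACO for driving the actual migration; the Python sketch below covers only the clustering stage, under assumed task attributes (task length and memory demand), and simply lists the heavy cluster as migration candidates where the paper would hand them to ACO.

        import numpy as np
        from sklearn.cluster import KMeans

        # Illustrative task attributes: (length in MI, required memory in MB); not from the paper.
        tasks = np.array([
            [1000,  256], [1200,  300], [9000, 2048],
            [8500, 1900], [1100,  280], [9500, 2100],
        ])

        # Stage 1 (K-means): group tasks with similar resource demands.
        kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(tasks)
        labels = kmeans.labels_

        # Stage 2 (simplified): tasks in the "heavy" cluster become migration candidates.
        # The paper hands these candidates to ACO; here we only list them.
        heavy_cluster = int(np.argmax(kmeans.cluster_centers_[:, 0]))
        candidates = [i for i, lbl in enumerate(labels) if lbl == heavy_cluster]
        print("migration candidates (task indices):", candidates)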

    Cloud service analysis using round-robin algorithm for quality-of-service aware task placement for internet of things services

    Round-robin (RR) is an approach to sharing resources in cloud computing in which each user gets a turn to use them in an agreed order. It is suited to time-sharing systems since it automatically reduces the problem of priority inversion, in which low-priority tasks are delayed. The time quantum is limited, and each process is allowed only one time quantum per turn in round-robin scheduling. The objective of this research is to improve the functionality of the current RR method for scheduling actions in the cloud by lowering the average waiting, turnaround, and response times. The CloudAnalyst tool was used to enhance the RR technique by changing parameter values to optimize for high accuracy and low cost. The results show overall minimum and maximum response times of 36.69 ms and 650.30 ms for a 300-minute RR run. The cost of the virtual machines (VMs) ranges from $0.5 to $3; the longer the time used, the higher the cost of the data transfer. This research is significant in improving communication and the quality of relationships within groups.
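
    The metrics reported above (average waiting, turnaround, and response time) follow directly from a standard round-robin simulation; the Python sketch below computes them for a single CPU with all tasks arriving at t = 0. The burst times and the 4 ms quantum are illustrative, not values from the paper.

        from collections import deque

        def round_robin_metrics(burst_times, quantum):
            """Simulate single-CPU round robin and return average waiting,
            turnaround, and response times (all arrivals assumed at t = 0)."""
            n = len(burst_times)
            remaining = list(burst_times)
            first_run = [None] * n
            finish = [0] * n
            ready = deque(range(n))
            clock = 0
            while ready:
                i = ready.popleft()
                if first_run[i] is None:
                    first_run[i] = clock              # response time = time of first dispatch
                run = min(quantum, remaining[i])
                clock += run
                remaining[i] -= run
                if remaining[i] > 0:
                    ready.append(i)                   # not finished: back to the tail of the queue
                else:
                    finish[i] = clock
            turnaround = [finish[i] for i in range(n)]                  # arrival time is 0
            waiting = [turnaround[i] - burst_times[i] for i in range(n)]
            response = first_run
            avg = lambda xs: sum(xs) / len(xs)
            return avg(waiting), avg(turnaround), avg(response)

        # Illustrative burst times (ms) and a 4 ms quantum.
        print(round_robin_metrics([10, 4, 7], quantum=4))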

    Dynamic Load Balancing Algorithms For Cloud Computing

    In cloud computing, load balancing is one of the major requirements. Load is simply the amount of work that a system performs; it can be classified as CPU load, memory size, and network load. Load balancing is the process of dividing work among the various nodes of a distributed system to improve both resource utilization and job response time, while avoiding a situation where some nodes are heavily loaded and others are idle. Load balancing ensures that every node in the network has an equal amount of work (relative to its capacity) at any instant of time. In this paper we survey the existing load balancing algorithms for a cloud-based environment. DOI: 10.17762/ijritcc2321-8169.150612

    An optimized Load Balancing Technique for Virtual Machine Migration in Cloud Computing

    Cloud computing (CC) is a service that provides storage and computing power on a subscription basis. Load balancing is one of the most critical pieces of a distributed system. CC has been a very interesting and important area of research because it stores data at reduced cost and makes it accessible over the internet at all times. Load balancing helps maintain high user retention and resource utilization by ensuring that work is correctly and properly distributed across computing resources. This paper describes cloud-based load balancing systems. CC virtualizes hardware such as storage, computing, and security through virtual machines (VMs). The live relocation of these machines provides many advantages, including high availability, hardware repair, fault tolerance, and workload balancing. Despite these various VM migration facilities, the migration process is subject to significant security risks which the industry hesitates to accept. In this paper we discuss CC, review various existing load balancing algorithms and their advantages, and describe the PSO optimization technique.
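
    The abstract only names PSO as the optimization technique without detailing its encoding; the Python sketch below is one common way to apply PSO to load balancing, encoding a task-to-VM assignment as a continuous position vector and minimizing makespan. All task lengths, VM speeds, and PSO parameters are illustrative assumptions.

        import random

        # Illustrative problem: assign tasks (lengths in MI) to VMs (speeds in MIPS)
        # so that the makespan is minimized.
        task_lengths = [400, 700, 250, 900, 300, 650]
        vm_speeds = [100, 150, 200]

        def makespan(assignment):
            load = [0.0] * len(vm_speeds)
            for task, vm in enumerate(assignment):
                load[vm] += task_lengths[task] / vm_speeds[vm]
            return max(load)

        def pso(n_particles=20, iterations=100, w=0.5, c1=1.5, c2=1.5):
            n_tasks, n_vms = len(task_lengths), len(vm_speeds)
            # Each particle holds one continuous value per task, decoded to a VM index.
            pos = [[random.uniform(0, n_vms) for _ in range(n_tasks)] for _ in range(n_particles)]
            vel = [[0.0] * n_tasks for _ in range(n_particles)]
            decode = lambda p: [min(int(x), n_vms - 1) for x in p]
            pbest = [list(p) for p in pos]
            pbest_cost = [makespan(decode(p)) for p in pos]
            g = pbest_cost.index(min(pbest_cost))
            gbest, gbest_cost = list(pbest[g]), pbest_cost[g]
            for _ in range(iterations):
                for i in range(n_particles):
                    for d in range(n_tasks):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] = min(max(pos[i][d] + vel[i][d], 0), n_vms - 1e-9)
                    cost = makespan(decode(pos[i]))
                    if cost < pbest_cost[i]:
                        pbest[i], pbest_cost[i] = list(pos[i]), cost
                        if cost < gbest_cost:
                            gbest, gbest_cost = list(pos[i]), cost
            return decode(gbest), gbest_cost

        assignment, best = pso()
        print("task -> VM:", assignment, "makespan:", round(best, 2))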

    Automated Experiments for Deriving Performance-relevant Properties of Software Execution Environments

    The execution environment can play a crucial role when analyzing the performance of a software system. However, detecting execution environment properties and integrating such properties into performance analyses is a manual, error-prone task. In this thesis, a novel approach for detecting performance-relevant properties of the software execution environment is presented. These properties are automatically detected using predefined experiments and integrated into performance prediction tools.

    On Solving Some Issues in Cloud Computing

    In the past few years, cloud computing has emerged as one of the fastest-growing segments of the IT industry. It delivers infrastructure, platform, and software as a service on an on-demand basis. The cloud provides several data centers at different geographical locations for service reliability and availability, and users can deploy applications and subscribe to services from any location at competitive cost. However, this system does not support mechanisms and policies for dynamically coordinating load distribution among different cloud-based data centers, and cloud providers are unable to predict the geographical distribution of users availing these services. There exist many challenging issues; a few of them, such as load balancing, event matching, and real-time data analysis, are addressed in this thesis. The first three contributions of this thesis are dedicated to load balancing using evolutionary techniques. In the first contribution, a genetic-algorithm-based load balancing scheme (LBGA) is proposed, using a real-value-coded GA with a new encoding mechanism. Similarly, a particle swarm optimization based load balancing scheme (LBPSO) is suggested. Both schemes are simulated in CloudAnalyst, and performance comparisons are made with competitive schemes. Consequently, the two schemes are combined to form a hybrid load balancing algorithm (HLBA). An HLBA-based central load balancer balances the load among virtual machines in a cloud data center, utilizing the benefits of both the genetic algorithm and particle swarm optimization. Measures such as average response time, data center request service time, virtual machine cost, and data transfer cost are considered to evaluate the performance of the proposed algorithm. The suggested approach achieves better load balancing in a large-scale cloud computing environment than other competitive approaches. In another contribution, an event matching algorithm has been developed for content-based event dissemination in a publish/subscribe system; the proposed modified rapid match (MRM) algorithm is compared with existing heuristics in the cloud system. Finally, a framework for a sensor-cloud environment for patient monitoring is suggested, and a prototype model has been developed to validate the framework. This integrated system helps in monitoring, analyzing, and delivering real-time information on the fly.
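
    The abstract describes LBGA as a real-value-coded GA with a new encoding mechanism but does not spell the encoding out; the Python sketch below is therefore a generic GA for mapping requests to VMs that minimizes load spread, with one VM index per request as the chromosome. Population size, operators, and the sample workload are all assumptions, not details from the thesis.

        import random

        # Illustrative workload: arbitrary work units per request, mapped onto three VMs.
        request_sizes = [5, 9, 2, 7, 4, 8, 3, 6]
        n_vms = 3

        def imbalance(chrom):
            # Fitness: difference between the most and least loaded VM (smaller is better).
            load = [0] * n_vms
            for req, vm in zip(request_sizes, chrom):
                load[vm] += req
            return max(load) - min(load)

        def ga(pop_size=30, generations=200, mutation_rate=0.1):
            pop = [[random.randrange(n_vms) for _ in request_sizes] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=imbalance)
                survivors = pop[: pop_size // 2]      # truncation selection
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, len(request_sizes))
                    child = a[:cut] + b[cut:]          # one-point crossover
                    if random.random() < mutation_rate:
                        child[random.randrange(len(child))] = random.randrange(n_vms)
                    children.append(child)
                pop = survivors + children
            best = min(pop, key=imbalance)
            return best, imbalance(best)

        mapping, spread = ga()
        print("request -> VM:", mapping, "load spread:", spread)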