3,107 research outputs found

    Load Balancing in Cloud Computing: A Survey on Popular Techniques and Comparative Analysis

    Cloud computing is widely regarded as one of the fastest-growing fields in web technologies today. With the increasing popularity of the cloud, popular websites' servers are becoming overloaded by high request volumes from users. One of the main challenges in cloud computing is load balancing across servers. Load balancing is the procedure of sharing load among multiple processors in a distributed environment to minimize the turnaround time taken by servers to serve requests and to make better use of the available resources. It is especially helpful where workload is imbalanced across servers, with some machines heavily loaded while others remain under-loaded or idle. Load-balancing methods ensure that every VM or server in the network maintains workload equilibrium and carries load according to its capacity at any instant. Static and dynamic load balancing are the two main techniques for balancing load on servers: a static policy fixes the assignment in advance, while a dynamic policy adapts to the observed load. This paper presents a brief discussion of different load-balancing schemes and a comparison of the prime techniques.
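
    To make the static/dynamic distinction concrete, the sketch below implements a minimal dynamic policy that routes each request to the least-utilized VM; a static policy such as plain round-robin would ignore the observed load entirely. The `VM` class, capacities, and request sizes are invented for illustration and do not come from the surveyed papers.

```python
# Minimal sketch of a dynamic "least-utilized" load-balancing policy.
# VM names, capacities, and request sizes are illustrative only.

class VM:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # maximum load units the VM can carry
        self.load = 0              # load units currently assigned

    def utilization(self):
        return self.load / self.capacity

def assign(vms, request_size):
    """Dynamic policy: send the request to the least-utilized VM."""
    target = min(vms, key=VM.utilization)
    target.load += request_size
    return target

vms = [VM("vm1", 10), VM("vm2", 20), VM("vm3", 15)]
for size in [2, 5, 1, 4, 3]:
    chosen = assign(vms, size)
    print(f"request({size}) -> {chosen.name} (utilization {chosen.utilization():.2f})")
```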

    A Hybrid Optimization Algorithm for Efficient Virtual Machine Migration and Task Scheduling Using a Cloud-Based Adaptive Multi-Agent Deep Deterministic Policy Gradient Technique

    To achieve optimal system performance in the rapidly developing field of cloud computing, efficient resource management, which includes accurate job scheduling and optimized virtual machine (VM) migration, is essential. This study proposes a state-of-the-art hybrid optimization algorithm for effective VM migration and task scheduling based on the Adaptive Multi-Agent System with Deep Deterministic Policy Gradient (AMS-DDPG) algorithm. A combination of the War Strategy Optimization (WSO) and Rat Swarm Optimizer (RSO) algorithms, the Iterative Concept of War and Rat Swarm (ICWRS) algorithm, is the foundation of this technique. Notably, ICWRS optimizes the system with 93% accuracy, particularly for load balancing, job scheduling, and VM migration. The flexibility and efficiency of VM migration and task scheduling are greatly improved by the AMS-DDPG technique, which uses a powerful combination of deterministic policy gradients and deep reinforcement learning. By ensuring the best possible resource allocation, the adaptive multi-agent approach further enhances decision-making. Performance in cloud-based virtualized systems is significantly enhanced by this hybrid method, which combines deep learning and multi-agent coordination. Extensive tests, including a detailed comparison with conventional techniques, verify the effectiveness of the proposed strategy. The findings show significant improvements in system efficiency, shorter job completion times, and optimal resource utilization. The integration of ICWRS within the AMS-DDPG framework reveals untapped potential for synergistic optimization in cloud-based systems. This strategic resource allocation, achieved through careful computational utilization, enables a high-performing and sustainable cloud computing infrastructure that can adapt to the changing needs of modern computing paradigms.
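
    The abstract does not reproduce the ICWRS update rules or the AMS-DDPG architecture, so the sketch below only illustrates the general shape of such a hybrid: a population-based metaheuristic searching task-to-VM assignments to minimize makespan. The task lengths, VM speeds, population size, and mutation step are all assumptions made for illustration, not the paper's method.

```python
# Greatly simplified population-based search for task-to-VM assignment,
# in the spirit of metaheuristics such as WSO/RSO; NOT the paper's ICWRS
# rules. Task lengths and VM speeds are invented.
import random

TASKS = [4, 7, 2, 9, 5, 3]      # task lengths (arbitrary units)
SPEEDS = [1.0, 1.5, 2.0]        # relative VM speeds

def makespan(assignment):
    """Finish time of the busiest VM under a task->VM assignment."""
    finish = [0.0] * len(SPEEDS)
    for task, vm in zip(TASKS, assignment):
        finish[vm] += task / SPEEDS[vm]
    return max(finish)

def mutate(assignment):
    """Move one random task to a random VM (the exploration step)."""
    child = list(assignment)
    child[random.randrange(len(child))] = random.randrange(len(SPEEDS))
    return child

# Iterative improvement: keep the best half, refill with mutated copies.
pop = [[random.randrange(len(SPEEDS)) for _ in TASKS] for _ in range(20)]
for _ in range(200):
    pop.sort(key=makespan)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

best = min(pop, key=makespan)
print("best assignment:", best, "makespan:", round(makespan(best), 2))
```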

    Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment

    Cloud computing has become very common in recent years and is growing rapidly due to its attractive benefits and features such as resource pooling, accessibility, availability, scalability, reliability, cost saving, security, flexibility, on-demand services, pay-per-use services, use from anywhere, quality of service, and resilience. With this rapid growth, many users may require services or need to execute their tasks simultaneously on resources provided by service providers. To obtain these services with the best performance and minimum cost, response time, and makespan, and with effective use of resources, an intelligent and efficient task-scheduling technique is required; it is considered one of the main and essential issues in the cloud computing environment. Task scheduling is necessary for allocating tasks to the proper cloud resources and optimizing overall system performance. To this end, researchers have put huge effort into developing several classes of scheduling algorithms suitable for various computing environments and for satisfying the needs of various types of individuals and organizations. This article provides a classification of proposed scheduling strategies and developed algorithms in the cloud computing environment, along with an evaluation of their performance. A comparison of the performance of these algorithms with existing ones is also given. Additionally, the future research work in the reviewed articles (where available) is pointed out. This work reviews 88 task-scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task-scheduling algorithms.
    Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective.
    DOI: 10.7176/IKM/12-5-03
    Publication date: September 30th, 2022
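
    As one representative of the heuristic scheduling class such surveys cover, the sketch below implements the classic Min-Min heuristic: repeatedly pick the task whose minimum completion time over all resources is smallest, bind it to that resource, and update the resource's ready time. The expected-time-to-compute (ETC) matrix is invented for illustration and is not taken from any of the 88 reviewed algorithms.

```python
# Sketch of the classic Min-Min task-scheduling heuristic.
# etc[i][j]: expected time to compute task i on resource j (invented values).

etc = [
    [4.0, 6.0, 9.0],
    [7.0, 3.0, 5.0],
    [2.0, 8.0, 6.0],
    [5.0, 4.0, 7.0],
]

ready = [0.0, 0.0, 0.0]            # earliest free time of each resource
unscheduled = set(range(len(etc)))
schedule = {}

while unscheduled:
    # Over all (task, resource) pairs, find the globally minimal completion
    # time; this is exactly Min-Min's "min over tasks of min over resources".
    task, res, finish = min(
        ((t, r, ready[r] + etc[t][r])
         for t in unscheduled for r in range(len(ready))),
        key=lambda x: x[2],
    )
    schedule[task] = res
    ready[res] = finish
    unscheduled.remove(task)

print("assignment:", schedule, "makespan:", max(ready))
```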

    Resource Management Techniques in Cloud-Fog for IoT and Mobile Crowdsensing Environments

    The unpredictable and huge volumes of data generated nowadays by smart devices in IoT and mobile crowd-sensing applications (sensors, smartphones, Wi-Fi routers) need processing power and storage. The cloud provides these capabilities to serve organizations and customers, but using the cloud exposes some limitations, the most important of which are resource allocation and task scheduling. Resource allocation is the mechanism that assigns virtual machines when multiple applications require various resources such as CPU, memory, and I/O. Scheduling, in turn, is the process of determining the sequence in which tasks arrive at and depart from the resources in order to maximize efficiency. In this paper we highlight the most relevant difficulties that cloud computing now faces and present a comprehensive review of resource allocation and scheduling techniques to overcome these limitations. Finally, the previous techniques and strategies for allocation and scheduling are compared in a table, together with their drawbacks.
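
    A minimal sketch of the resource-allocation step described above, assuming a simple first-fit placement policy; the host capacities and VM demands are invented, and first-fit is only one baseline among the many strategies such reviews compare.

```python
# First-fit VM placement: put each requested VM on the first host with
# enough spare CPU and memory. All capacities and demands are illustrative.

hosts = [
    {"name": "host1", "cpu": 8,  "mem": 16},
    {"name": "host2", "cpu": 16, "mem": 32},
]
requests = [
    {"vm": "vmA", "cpu": 4, "mem": 8},
    {"vm": "vmB", "cpu": 6, "mem": 12},
    {"vm": "vmC", "cpu": 8, "mem": 16},
]

def first_fit(host_list, vm):
    for h in host_list:
        if h["cpu"] >= vm["cpu"] and h["mem"] >= vm["mem"]:
            h["cpu"] -= vm["cpu"]   # reserve the resources on this host
            h["mem"] -= vm["mem"]
            return h["name"]
    return None                     # rejected: no host can fit the VM

for vm in requests:
    print(vm["vm"], "->", first_fit(hosts, vm))
```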

    Fog computing scheduling algorithm for smart city

    With the growing number of smart devices across the globe, the number of Internet users is increasing. The main aim of the fog computing (FC) paradigm is to connect a huge number of smart objects (billions of objects), which can make a bright future for smart cities. Owing to the large deployments of smart devices, devices are expected to generate huge amounts of data and forward that data through the Internet. FC also refers to an edge computing framework that mitigates this issue by applying knowledge discovery through data analysis at the edges. Thus, FC approaches can work together with the Internet of Things (IoT), which can build a sustainable infrastructure for smart cities. In this paper, we propose a scheduling algorithm, namely the weighted round-robin (WRR) scheduling algorithm, to execute tasks from one fog node (FN) to another fog node and on to the cloud. First, a fog simulator is used with the emergent concept of FC to design IoT infrastructure for smart cities. Then, the Spanning Tree Protocol (STP) is used for data collection and routing. Further, 5G networks are proposed to establish fast transmission and communication between users. Finally, the performance of the proposed system is evaluated in terms of response time, latency, and amount of data used.
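
    A minimal sketch of the weighted round-robin idea named above, assuming weights proportional to fog-node capability; the node names and weights are invented, and the paper's fog-simulator and 5G configuration are not reproduced here.

```python
# Weighted round-robin (WRR) dispatch across fog nodes: a node with
# weight w receives w task slots per dispatch cycle. Values illustrative.
from itertools import cycle

nodes = [("fog1", 3), ("fog2", 2), ("fog3", 1)]   # (name, weight)

def wrr_order(weighted_nodes):
    """Expand each node by its weight to form one dispatch cycle."""
    order = []
    for name, weight in weighted_nodes:
        order.extend([name] * weight)
    return order

dispatch = cycle(wrr_order(nodes))
for task_id in range(8):
    print(f"task{task_id} -> {next(dispatch)}")
```

    A production WRR typically interleaves the weighted slots smoothly rather than emitting them in runs, but the proportional share per cycle (here 3:2:1) is the same.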

    Resource Allocation in Networking and Computing Systems: A Security and Dependability Perspective

    In recent years, there has been a trend toward integrating networking and computing systems, whose management is becoming increasingly complex. Resource allocation is one of the crucial aspects of managing such systems and is affected by this increased complexity. Resource allocation strategies aim to effectively maximize performance, system utilization, and profit by considering virtualization technologies, heterogeneous resources, context awareness, and other features. In such a complex scenario, security and dependability are vital concerns that must be considered in future computing and networking systems in order to provide advanced services such as mission-critical applications. This paper provides a comprehensive survey of the existing literature that considers security and dependability for resource allocation in computing and networking systems. Current research works are categorized by the type of resources allocated for different technologies, scenarios, issues, attributes, and solutions. The paper presents research on resource allocation that includes security and dependability, both singly and jointly, and discusses future research directions for resource allocation. It shows that only a few works consider security and dependability, even singly, in resource allocation for future computing and networking systems, and it highlights the importance of considering them jointly and the need for intelligent, adaptive, and robust solutions. This paper aims to help researchers effectively consider security and dependability in future networking and computing systems.
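
    As one illustration of what jointly considering security and dependability in allocation could look like, the sketch below scores candidate placements with a weighted sum of performance, security, and dependability attributes. The attributes, weights, and values are invented for illustration; the survey itself does not prescribe this scoring.

```python
# Jointly scoring allocation candidates on performance, security, and
# dependability via a weighted sum. All names and numbers are illustrative.

candidates = [
    {"node": "edge1",  "perf": 0.9, "security": 0.5, "dependability": 0.60},
    {"node": "cloud1", "perf": 0.7, "security": 0.9, "dependability": 0.95},
]

WEIGHTS = {"perf": 0.4, "security": 0.3, "dependability": 0.3}

def score(candidate):
    """Higher is better: trade performance against security/dependability."""
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

best = max(candidates, key=score)
print("allocate to:", best["node"], "score:", round(score(best), 3))
```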

    Computing at massive scale: Scalability and dependability challenges

    Large-scale cloud systems and big-data analytics frameworks are now widely used for practical services and applications. However, with the increase in data volume, together with the heterogeneity of workloads and resources and the dynamic nature of massive user requests, the uncertainty and complexity of resource management and service provisioning increase dramatically, often resulting in poor resource utilization, vulnerable system dependability, and user-perceived performance degradation. In this paper we report our latest understanding of the current and future challenges in this area and discuss both existing and potential solutions, especially those concerning system efficiency, scalability, and dependability. We first introduce a data-driven analysis methodology for characterizing resource and workload patterns and tracing performance bottlenecks in a massive-scale distributed computing environment. We then examine and analyze several fundamental challenges and the solutions we are developing to tackle them, including incremental but decentralized resource scheduling, incremental messaging communication, rapid system failover, and request-handling parallelism. We integrate these solutions with our data-analysis methodology to establish an engineering approach that facilitates the optimization, tuning, and verification of massive-scale distributed systems. We aim to develop and offer innovative methods and mechanisms for future computing platforms that will provide strong support for new big data and IoE (Internet of Everything) applications.

    Cloud service analysis using round-robin algorithm for quality-of-service aware task placement for internet of things services

    Round-robin (RR) is an approach to sharing resources in cloud computing in which each user gets a turn to use them in an agreed order. It is well suited to time-sharing systems, since it automatically reduces the problem of priority inversion, in which low-priority tasks are delayed. The time quantum is limited, and each process may run for only one time quantum per turn in round-robin scheduling. The objective of this research is to improve the functionality of the current RR method for scheduling actions in the cloud by lowering the average waiting, turnaround, and response times. The CloudAnalyst tool was used to enhance the RR technique by changing parameter values to optimize for high accuracy and low cost. The results show overall minimum and maximum response times of 36.69 and 650.30 ms for a 300-minute RR run. The cost of the virtual machines (VMs) ranges from $0.5 to $3; the longer the time used, the higher the cost of the data transfer. This research is significant in improving communication and the quality of relationships within groups.
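
    A minimal simulation of the quantities the study optimizes, assuming all tasks arrive at time zero; the burst times and quantum below are invented and unrelated to the CloudAnalyst configuration used in the paper.

```python
# Round-robin scheduling simulation computing average waiting and
# turnaround times. Burst times and quantum are illustrative values.
from collections import deque

def round_robin(bursts, quantum):
    remaining = list(bursts)
    queue = deque(range(len(bursts)))   # all tasks ready at time 0
    clock = 0
    turnaround = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        ran = min(quantum, remaining[i])
        clock += ran                    # task i runs for one slice
        remaining[i] -= ran
        if remaining[i] > 0:
            queue.append(i)             # not finished: back of the queue
        else:
            turnaround[i] = clock       # finished at the current clock
    waiting = [t - b for t, b in zip(turnaround, bursts)]
    return sum(waiting) / len(bursts), sum(turnaround) / len(bursts)

avg_wait, avg_tat = round_robin(bursts=[5, 3, 8], quantum=2)
print(f"avg waiting = {avg_wait:.2f}, avg turnaround = {avg_tat:.2f}")
```

    Lowering the average waiting and turnaround times then amounts to tuning the quantum (and, in CloudAnalyst, the VM and data-center parameters) against a given workload.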
    • …