168 research outputs found
Cost-aware real-time divisible loads scheduling in cloud computing
Cloud computing has become an important alternative for solving large-scale, resource-intensive problems in science, engineering, and analytics. Resource management plays an important role in improving quality of service (QoS). This paper investigates scheduling strategies for divisible loads with deadline constraints on heterogeneous processors in a cloud computing environment. The workload allocation approach presented in this paper uses Divisible Load Theory (DLT), which rests on the fact that the computation can be partitioned into arbitrarily sized fractions, each of which can be processed independently of the others. Through a series of simulations against the baseline strategies, it is found that the worker selection order in the service pool and the fraction of the load assigned to each worker have significant effects on the total computation cost. Keywords: Cloud computing, Divisible Load Theory (DLT), Cost, Quality-of-service (QoS)
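The equal-finish-time partitioning at the heart of DLT can be sketched as follows. This is a minimal illustration assuming a linear cost model with no communication delay; the function name and inputs are hypothetical and not taken from the paper:

```python
def dlt_fractions(unit_times):
    """Split a divisible load so that all workers finish simultaneously.

    unit_times[i] is the time worker i needs per unit of load. The
    equal-finish-time condition alpha_i * w_i = const gives fractions
    proportional to 1/w_i (communication delays are ignored here).
    """
    inv = [1.0 / w for w in unit_times]
    total = sum(inv)
    return [x / total for x in inv]

# Three heterogeneous workers; the fastest gets the largest fraction.
fractions = dlt_fractions([2.0, 4.0, 8.0])  # -> [4/7, 2/7, 1/7]
```

Changing the order in which workers are considered, or their per-unit costs, changes the total cost even when the makespan is identical, which is the effect the paper's simulations examine.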
Employing the Powered Hybridized Darts Game with BWO Optimization for Effective Job Scheduling and Distributing Load in the Cloud-Based Environment
One of the most frequent issues in cloud computing systems is job scheduling, which aims to reduce installation time and cost while simultaneously improving resource utilisation. Constraints such as implementation cost, resource utilisation, makespan, and scheduling response time make job scheduling a Nondeterministic Polynomial (NP)-hard optimisation problem: as the number of combinations and the available processing power grow, job allocation becomes intractable. This study employs a hybrid heuristic optimisation technique that incorporates load balancing to achieve optimal job scheduling and boost service provider performance within the cloud architecture, substantially reducing problems in the scheduling process. The suggested scheduling approach resolves the load balancing issue: the proposed Hybridised Darts Game-Based Beluga Whale Optimisation Algorithm (HDG-BWOA) assigns jobs to machines according to their workload. When assigning jobs to virtual machines, factors such as reduced energy usage, minimised mean reaction time, an enhanced job assurance ratio, and higher Cloud Data Centre (CDC) resource utilisation are taken into account. By ensuring flexibility among virtual machines, this job scheduling strategy keeps them from becoming overloaded or underloaded, and more tasks are completed before their deadlines. The effectiveness of the proposed configuration is validated against traditional heuristic-based job scheduling techniques on multiple assessment metrics.
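The overload/underload avoidance goal can be illustrated with a much simpler greedy baseline. This is only a sketch: the HDG-BWOA metaheuristic itself is far more elaborate, and the function below is a hypothetical stand-in, not the paper's algorithm:

```python
def assign_least_loaded(job_costs, n_vms):
    """Greedy load balancing baseline: send each job to the currently
    least loaded VM, so no VM becomes badly over- or underloaded.
    Illustrative stand-in only, not the HDG-BWOA procedure."""
    loads = [0.0] * n_vms
    placement = []
    for cost in job_costs:
        vm = min(range(n_vms), key=loads.__getitem__)  # least loaded VM
        loads[vm] += cost
        placement.append(vm)
    return placement, loads

placement, loads = assign_least_loaded([5, 3, 2, 4], 2)
# placement == [0, 1, 1, 0], final loads == [9.0, 5.0]
```

A metaheuristic such as HDG-BWOA searches over many such placements at once, scoring each candidate on energy, reaction time, and deadline satisfaction rather than load alone.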
A smart resource management mechanism with trust access control for cloud computing environment
The core of the computer business now offers subscription-based on-demand services with the help of cloud computing. Virtualization, which creates a virtual instance of a computer system running on an abstracted hardware layer, lets multiple users share resources. Cloud computing provides vast computing capacity through massive datacenters, in contrast to early distributed computing models, and has been incredibly popular in recent years owing to its continually growing infrastructure, user base, and hosted data volume. This article suggests a conceptual framework for a workload management paradigm in cloud settings that is both safe and performance-efficient. In this paradigm, a resource management unit performs energy- and performance-efficient virtual machine allocation, ensures the safe execution of users' applications, and protects in real time against data breaches brought on by unauthorised virtual machine access. A secure virtual machine management unit controls the resource management unit and produces reports on unlawful access or intercommunication. Additionally, a workload analyzer unit runs concurrently, estimating resource consumption data to make the resource management unit more effective during virtual machine allocation. The suggested model also performs complementary functions serving the same objective, including encrypting and decrypting data prior to transfer and using a trust access mechanism to prevent unauthorised access to virtual machines, at the price of extra computational overhead.
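The trust access idea can be sketched as a simple threshold check. This is illustrative only: the article's actual trust model is not detailed in this abstract, and the function, scores, and margin below are assumptions:

```python
def grant_vm_access(user_trust, vm_sensitivity, safety_margin=0.2):
    """Grant VM access only when the requester's trust score exceeds
    the VM's sensitivity level by a safety margin. Hypothetical toy
    model; the article's real trust mechanism is not specified here."""
    return user_trust >= vm_sensitivity + safety_margin

# A trusted user reaches a moderately sensitive VM; a marginal one does not.
allowed = grant_vm_access(0.9, 0.4)
denied = grant_vm_access(0.5, 0.4)
```

Any such per-request check is the source of the extra computational overhead the abstract mentions, since it sits on the path of every VM access.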
Cloud computing resource scheduling and a survey of its evolutionary approaches
A disruptive technology fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience: access to resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, scheduling them optimally has become an essential and rewarding topic, in which a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents a taxonomy of cloud resource scheduling at two levels. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are identified and discussed, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
Joint Computation Offloading and Prioritized Scheduling in Mobile Edge Computing
With the rapid development of smartphones, enormous amounts of data are generated that usually require intensive, real-time computation. Nevertheless, quality of service (QoS) is hard to meet because of the tension between resource-limited devices (battery, CPU power) and computation-intensive applications. Mobile-edge computing (MEC) is emerging as a promising technique to cope with the stringent requirements of mobile applications. By offloading computationally intensive workloads to an edge server and applying efficient task scheduling, the energy cost of mobile devices can be significantly reduced, greatly improving QoS, e.g., latency. This paper proposes a joint computation offloading and prioritized task scheduling scheme in a multi-user mobile-edge computing system. We investigate an energy-minimizing task offloading strategy on mobile devices and develop an effective priority-based task scheduling algorithm on the edge server. Execution time, energy consumption, execution cost, and bonus score against both task data size and latency requirement are adopted as the performance metrics. Performance evaluation results show that the proposed algorithm significantly reduces task completion time and edge server VM usage cost, and improves QoS in terms of bonus score. Moreover, dynamic prioritized task scheduling is also discussed; results show that dynamic threshold setting realizes optimal task scheduling. We believe this work is significant to the emerging mobile-edge computing paradigm and can be applied to other Internet of Things (IoT)-Edge applications.
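The priority-based ordering described above can be sketched with a heap that ranks tasks by deadline and data size. This is a generic heuristic for illustration; the paper's exact scoring and bonus model are not reproduced, and the task fields below are assumptions:

```python
import heapq

def schedule_by_priority(tasks):
    """Serve tasks with earlier deadlines first, breaking ties in
    favour of larger data sizes (a generic priority rule, not the
    paper's exact scoring function)."""
    # Python tuples compare lexicographically, so (deadline, -size)
    # yields earliest-deadline-first with largest-size tie-breaking.
    heap = [(t["deadline"], -t["size"], t["name"]) for t in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        _deadline, _neg_size, name = heapq.heappop(heap)
        order.append(name)
    return order

tasks = [
    {"name": "video", "deadline": 5, "size": 10},
    {"name": "sensor", "deadline": 2, "size": 1},
    {"name": "photo", "deadline": 2, "size": 8},
]
order = schedule_by_priority(tasks)  # -> ["photo", "sensor", "video"]
```

The dynamic variant the paper discusses would adjust the thresholds that map a task's deadline and size to its priority at runtime, rather than fixing the rule up front.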
A Systematic Literature Review on Task Allocation and Performance Management Techniques in Cloud Data Center
As cloud computing usage grows, cloud data centers play an increasingly
important role. To maximize resource utilization, ensure service quality, and
enhance system performance, it is crucial to allocate tasks and manage
performance effectively. The purpose of this study is to provide an extensive
analysis of task allocation and performance management techniques employed in
cloud data centers. The aim is to systematically categorize and organize
previous research by identifying the cloud computing methodologies, categories,
and gaps. A literature review was conducted, which included the analysis of 463
task allocation and 480 performance management papers. The review revealed
three task allocation research topics and seven performance management methods.
Task allocation research areas are resource allocation, load balancing, and
scheduling. Performance management includes monitoring and control, power and
energy management, resource utilization optimization, quality of service
management, fault management, virtual machine management, and network
management. The study proposes new techniques to enhance cloud computing work
allocation and performance management. Shortcomings in each approach can guide
future research. The research's findings on cloud data center task allocation
and performance management can assist academics, practitioners, and cloud
service providers in optimizing their systems for dependability,
cost-effectiveness, and scalability. Innovative methodologies can steer future
research to fill gaps in the literature