Deep Meta Q-Learning based Multi-Task Offloading in Edge-Cloud Systems
Resource-constrained edge devices cannot efficiently handle the explosive growth of mobile data and the increasing computational demand of modern-day user applications. Task offloading allows the migration of complex tasks from user devices to remote edge-cloud servers, thereby reducing their computational burden and energy consumption while also improving the efficiency of task processing. However, obtaining the optimal offloading strategy in a multi-task offloading decision-making process is an NP-hard problem. Existing deep learning techniques with slow learning rates and weak adaptability are not suitable for dynamic multi-user scenarios. In this article, we propose a novel deep meta-reinforcement learning-based approach to the multi-task offloading problem using a combination of first-order meta-learning and deep Q-learning methods. We establish the meta-generalization bounds for the proposed algorithm and demonstrate that it can reduce the time and energy consumption of IoT applications by up to 15%. Through rigorous simulations, we show that our method achieves near-optimal offloading solutions while also being able to adapt to dynamic edge-cloud environments.
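The abstract names the ingredients but not the algorithm itself. The sketch below is one plausible reading, pairing Reptile-style first-order meta-learning with a small DQN that makes per-task offloading decisions; the network sizes, transition format, and hyperparameters are all assumptions, not the paper's.

```python
# Hypothetical sketch: first-order meta-learning (Reptile-style) wrapped
# around a small DQN for offloading decisions. Everything here (state/action
# sizes, hyperparameters, transition format) is illustrative, not the paper's.
import copy
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 4   # e.g. task features -> {local, edge1, edge2, cloud}

def make_qnet():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

def inner_dqn_update(qnet, transitions, gamma=0.99, lr=1e-3, steps=20):
    """A few vanilla Q-learning steps on one task's (s, a, r, s') transitions."""
    opt = torch.optim.Adam(qnet.parameters(), lr=lr)
    for _ in range(steps):
        s, a, r, s2 = random.choice(transitions)
        q = qnet(s)[a]
        with torch.no_grad():
            target = r + gamma * qnet(s2).max()   # TD target from the same net
        loss = (q - target) ** 2
        opt.zero_grad(); loss.backward(); opt.step()
    return qnet

def reptile_meta_step(meta_net, task_batch, meta_lr=0.1):
    """First-order meta-update: pull meta-weights toward task-adapted weights."""
    for transitions in task_batch:
        adapted = inner_dqn_update(copy.deepcopy(meta_net), transitions)
        with torch.no_grad():
            for p_meta, p_task in zip(meta_net.parameters(), adapted.parameters()):
                p_meta += meta_lr * (p_task - p_meta)
```

The first-order update avoids second-order gradients entirely: adapted weights are produced by ordinary Q-learning steps, and the meta-weights are simply moved toward them, which is what makes such schemes fast to adapt in new edge-cloud conditions.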
Microservices-based IoT Applications Scheduling in Edge and Fog Computing: A Taxonomy and Future Directions
Edge and Fog computing paradigms utilise distributed, heterogeneous and
resource-constrained devices at the edge of the network for efficient
deployment of latency-critical and bandwidth-hungry IoT application services.
Moreover, MicroService Architecture (MSA) is increasingly adopted to keep up
with the rapid development and deployment needs of the fast-evolving IoT
applications. Due to the fine-grained modularity of the microservices along
with their independently deployable and scalable nature, MSA exhibits great
potential in harnessing both Fog and Cloud resources to meet diverse QoS
requirements of the IoT application services, thus giving rise to novel
paradigms like Osmotic computing. However, efficient and scalable scheduling
algorithms are required to utilise the said characteristics of the MSA while
overcoming novel challenges introduced by the architecture. To this end, we
present a comprehensive taxonomy of recent literature on microservices-based
IoT applications scheduling in Edge and Fog computing environments.
Furthermore, we organise multiple taxonomies to capture the main aspects of the
scheduling problem, analyse and classify related works, identify research gaps
within each category, and discuss future research directions.
Comment: 35 pages, 10 figures, submitted to ACM Computing Surveys
Application of Machine Learning Optimization in Cloud Computing Resource Scheduling and Management
In recent years, cloud computing has been widely adopted. Cloud computing refers to centralized computing resources that users access to perform their computations; the cloud computing center then returns the processed results to the user. Cloud computing serves not only individual users but also enterprise users. By purchasing a cloud server, users avoid buying large numbers of computers, saving computing costs. According to a report by China Economic News Network, the scale of cloud computing in China has reached 209.1 billion yuan. At present, the more mature cloud service providers in China are Ali Cloud, Baidu Cloud, Huawei Cloud, and others. This paper therefore proposes an innovative approach to solving complex problems in cloud computing resource scheduling and management using machine learning optimization techniques. Through an in-depth study of challenges such as low resource utilization and unbalanced load in the cloud environment, this study proposes a comprehensive solution, including optimization methods such as deep learning and genetic algorithms, to improve system performance and efficiency, and thus bring new breakthroughs and progress to the field of cloud computing resource management. Rational allocation of resources plays a crucial role in cloud computing. In cloud resource allocation, the cloud computing center has limited cloud resources and users arrive in sequence; each user requests a certain number of cloud resources from the cloud computing center at a specific time.
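Since the abstract names genetic algorithms among its optimization methods, a minimal sketch may make the idea concrete: encode an allocation of user requests to servers as a chromosome and evolve toward balanced load. All problem sizes, the fitness function, and the GA parameters here are invented for illustration.

```python
# Illustrative genetic-algorithm sketch for the kind of allocation problem the
# abstract describes: assign N user requests to M servers so load is balanced.
import random

REQUESTS = [random.randint(1, 10) for _ in range(30)]   # resource demand per user
N_SERVERS, POP, GENS = 5, 40, 200

def fitness(assign):
    """Negative load imbalance: higher is better (more balanced)."""
    loads = [0] * N_SERVERS
    for req, srv in zip(REQUESTS, assign):
        loads[srv] += req
    return -(max(loads) - min(loads))

def evolve():
    pop = [[random.randrange(N_SERVERS) for _ in REQUESTS] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]                         # elitist selection
        children = []
        while len(children) < POP - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(REQUESTS))          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                      # mutation
                child[random.randrange(len(REQUESTS))] = random.randrange(N_SERVERS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best imbalance:", -fitness(best))
```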
An Improved Scheduling with Advantage Actor-Critic for Storm Workloads
Various resources are the essential elements of data centers, and completion time is vital to users. Given the persistence, periodicity, and spatial-temporal dependence of stream workloads, a new Storm scheduler based on Advantage Actor-Critic is proposed to improve resource utilization and minimize completion time. A new weighted embedding with a Graph Neural Network is designed to capture the features of a job comprehensively, including the dependencies, types, and positions of tasks in the job. An improved Advantage Actor-Critic that integrates task selection and executor assignment is proposed to schedule tasks to executors for better resource utilization. The status of tasks and executors is then updated for the next scheduling round. Experimental results show that, compared to existing methods, the proposed Storm scheduler improves resource utilization. The completion time is reduced by almost 17% on the TPC-H data set and by almost 25% on the Alibaba data set.
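For readers unfamiliar with the learning core, the following is a minimal Advantage Actor-Critic update of the kind the scheduler builds on. The Storm-specific parts (the GNN job embedding, the joint task/executor action) are replaced by generic placeholders, so this is a sketch of the method family, not the paper's scheduler.

```python
# Minimal Advantage Actor-Critic update. FEAT_DIM stands in for the GNN job
# embedding; the action is a bare executor index rather than the paper's joint
# task/executor choice. Sizes and learning rate are assumptions.
import torch
import torch.nn as nn

FEAT_DIM, N_EXECUTORS = 16, 8

policy = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU(), nn.Linear(64, N_EXECUTORS))
value = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(policy.parameters()) + list(value.parameters()), lr=3e-4)

def a2c_step(state, action, reward, next_state, gamma=0.99):
    """One actor-critic update: advantage = r + gamma * V(s') - V(s)."""
    v, v_next = value(state), value(next_state).detach()
    advantage = reward + gamma * v_next - v
    log_prob = torch.log_softmax(policy(state), dim=-1)[action]
    actor_loss = -log_prob * advantage.detach()   # push chosen action by advantage
    critic_loss = advantage.pow(2)                # TD error for the value head
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()
```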
DeepSoCS: A Neural Scheduler for Heterogeneous System-on-Chip (SoC) Resource Scheduling
In this paper, we present a novel scheduling solution for a class of System-on-Chip (SoC) systems where heterogeneous chip resources (DSP, FPGA, GPU, etc.) must be efficiently scheduled for continuously arriving hierarchical jobs whose tasks are represented by a directed acyclic graph. Traditionally, heuristic algorithms have been widely used for many resource scheduling domains, and Heterogeneous Earliest Finish Time (HEFT) has been the dominant state-of-the-art technique across a broad range of heterogeneous resource scheduling domains for many years. Despite their long-standing popularity, HEFT-like algorithms are known to be vulnerable to even a small amount of noise added to the environment. Our Deep Reinforcement Learning (DRL)-based SoC Scheduler (DeepSoCS), capable of learning the "best" task ordering under dynamic environment changes, overcomes the brittleness of rule-based schedulers such as HEFT with significantly higher performance across different types of jobs. We describe the DeepSoCS design process using a real-time heterogeneous SoC scheduling emulator, discuss major challenges, and present two novel neural network design features that lead to outperforming HEFT: (i) hierarchical job- and task-graph embedding; and (ii) efficient use of real-time task information in the state space. Furthermore, we introduce effective techniques to address two fundamental challenges present in our environment: delayed consequences and joint actions. Through an extensive simulation study, we show that DeepSoCS achieves significantly better job execution time than HEFT, with a higher level of robustness under realistic noise conditions. We conclude with a discussion of potential improvements for our DeepSoCS neural scheduler.
Comment: 18 pages, Accepted by Electronics 202
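The HEFT baseline that DeepSoCS is measured against is a published heuristic whose ordering phase is easy to reproduce: tasks are sorted by "upward rank", a task's average execution cost plus the most expensive communication-inclusive path to the DAG exit. A toy example follows; the task costs and edges are invented.

```python
# HEFT's ordering phase on a toy 4-task DAG: A -> {B, C} -> D.
from functools import lru_cache

avg_cost = {"A": 3, "B": 2, "C": 4, "D": 1}                     # mean run time
comm = {("A", "B"): 1, ("A", "C"): 2, ("B", "D"): 1, ("C", "D"): 1}
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

@lru_cache(maxsize=None)
def upward_rank(task):
    """rank_u(t) = w(t) + max over successors s of (comm(t, s) + rank_u(s))."""
    if not succ[task]:
        return avg_cost[task]
    return avg_cost[task] + max(comm[(task, s)] + upward_rank(s) for s in succ[task])

# Tasks are scheduled in decreasing upward rank.
order = sorted(succ, key=upward_rank, reverse=True)
print(order)   # ['A', 'C', 'B', 'D']
```

Because the ranks are built from fixed average costs, a small perturbation of actual task durations can invalidate the precomputed order, which is the brittleness the abstract refers to.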
Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment
Cloud computing has become very common in recent years and is growing rapidly due to its attractive benefits and features such as resource pooling, accessibility, availability, scalability, reliability, cost saving, security, flexibility, on-demand services, pay-per-use services, use from anywhere, quality of service, and resilience. With this rapid growth of cloud computing, many users may require services or need to execute their tasks simultaneously on resources provided by service providers. To obtain these services with the best performance and with minimum cost, response time, and makespan, and with effective use of resources, an intelligent and efficient task scheduling technique is required; it is considered one of the main and essential issues in the cloud computing environment. It is necessary for allocating tasks to the proper cloud resources and optimizing overall system performance. To this end, researchers have put huge effort into developing several classes of scheduling algorithms suitable for various computing environments and for the needs of various types of individuals and organizations. This research article provides a classification of proposed scheduling strategies and developed algorithms in the cloud computing environment, along with an evaluation of their performance. A comparison of the performance of these algorithms with existing ones is also given. Additionally, the future research work in the reviewed articles (if available) is also pointed out. This research work reviews 88 task scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task scheduling algorithms.
Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective.
DOI: 10.7176/IKM/12-5-03
Publication date: September 30th, 2022
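The metrics the surveyed algorithms compete on (makespan, turnaround/response time) have precise definitions that a toy computation makes explicit. The task lengths and two-VM schedule below are invented for illustration and assume all tasks arrive at time zero.

```python
# Toy computation of makespan and mean turnaround time for one fixed schedule.
tasks = {"t1": 4, "t2": 2, "t3": 6, "t4": 3}           # execution time per task
schedule = {"vm1": ["t1", "t4"], "vm2": ["t2", "t3"]}  # tasks run in list order

finish = {}
for vm, queue in schedule.items():
    clock = 0
    for t in queue:
        clock += tasks[t]
        finish[t] = clock              # finish == turnaround when arrival is 0

makespan = max(finish.values())        # completion time of the last task
mean_turnaround = sum(finish.values()) / len(finish)
print(makespan, mean_turnaround)       # 8 (vm2 runs 2+6), 5.25
```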
Constructing Reliable Computing Environments on Top of Amazon EC2 Spot Instances
Cloud provider Amazon Elastic Compute Cloud (EC2) gives access to resources in the form of virtual servers, also known as instances. EC2 spot instances (SIs) offer spare computational capacity at steep discounts compared to reliable, fixed-price on-demand instances. The drawback, however, is that the delay in acquiring spots can be incredibly high. Moreover, SIs may not always be available, as they can be reclaimed by EC2 at any given time with a two-minute interruption notice. In this paper, we propose a multi-workflow scheduling algorithm, allied with a container migration-based mechanism, to dynamically construct and readjust virtual clusters on top of non-reserved EC2 pricing model instances. Our solution leverages recent findings on the performance and behavior characteristics of EC2 spots. We conducted simulations by submitting real-life workflow applications constrained by user-defined deadline and budget quality of service (QoS) parameters. The results indicate that our solution improves the rate of completed tasks by almost 20%, and the rate of completed workflows by at least 30%, compared with other state-of-the-art algorithms, in a worst-case scenario.
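The paper's actual algorithm is not reproduced in the abstract; the sketch below only illustrates the kind of per-task decision such a scheduler faces, choosing spot capacity when the deadline slack tolerates a possible reclaim-and-rerun and the budget covers the expected cost. All prices, risk estimates, and thresholds are hypothetical.

```python
# Hypothetical spot-vs-on-demand decision for one task under deadline/budget
# QoS constraints. Prices ($/hour) and the reclaim-risk estimate are invented.
def choose_instance(task_runtime_h, slack_h, budget_left,
                    spot_price=0.03, ondemand_price=0.10, reclaim_risk=0.2):
    """Return 'spot', 'on-demand', or 'reject' for one task."""
    # Worst case for spot: the instance is reclaimed near the end of the task
    # (two-minute notice) and the whole task must rerun elsewhere.
    worst_spot_time = 2 * task_runtime_h
    expected_spot_cost = spot_price * task_runtime_h * (1 + reclaim_risk)
    if worst_spot_time <= slack_h and expected_spot_cost <= budget_left:
        return "spot"
    if ondemand_price * task_runtime_h <= budget_left:
        return "on-demand"
    return "reject"    # neither pricing model fits the QoS constraints

print(choose_instance(task_runtime_h=1.0, slack_h=3.0, budget_left=0.5))  # spot
```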