Engineering a QoS Provider Mechanism for Edge Computing with Deep Reinforcement Learning
With the development of new system solutions that integrate traditional cloud
computing with the edge/fog computing paradigm, dynamic optimization of service
execution has become a challenge because edge computing resources are more
distributed and dynamic. How to optimize the execution to provide Quality of
Service (QoS) in edge computing depends on both the system architecture and the
resource allocation algorithms in place. We design and develop a QoS provider
mechanism, as an integral component of a fog-to-cloud system, to work in
dynamic scenarios by using deep reinforcement learning. We choose reinforcement
learning since it is particularly well suited for solving problems in dynamic
and adaptive environments where the decision process needs to be frequently
updated. We specifically use a Deep Q-learning algorithm that optimizes QoS by
identifying and blocking devices that potentially cause service disruption due
to dynamicity. We compare the reinforcement-learning-based solution with
state-of-the-art heuristics that use telemetry data, and analyze their pros and cons.
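The device-blocking policy above is trained with Deep Q-learning; the heart of that algorithm is the temporal-difference update, shown here in a tabular toy form. The state space, actions, reward, and coefficients below are illustrative assumptions, not the paper's (which uses a deep network in place of the table):

```python
import numpy as np

ALPHA, GAMMA = 0.1, 0.9          # learning rate, discount factor (hypothetical)
N_STATES, N_ACTIONS = 4, 2       # e.g. device-load levels; actions: allow / block

def q_update(Q, s, a, r, s_next):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (td_target - Q[s, a])
    return Q

Q = np.zeros((N_STATES, N_ACTIONS))
# Example transition: in state 2, blocking (action 1) avoids a QoS violation (+1 reward).
Q = q_update(Q, s=2, a=1, r=1.0, s_next=0)
print(Q[2, 1])  # 0.1 after one update from zero initialization
```

In the deep variant, the table is replaced by a network Q(s, a; θ) and the same TD target drives gradient descent on θ.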
Learning Scheduling Algorithms for Data Processing Clusters
Efficiently scheduling data processing jobs on distributed compute clusters
requires complex algorithms. Current systems, however, use simple generalized
heuristics and ignore workload characteristics, since developing and tuning a
scheduling policy for each workload is infeasible. In this paper, we show that
modern machine learning techniques can generate highly-efficient policies
automatically. Decima uses reinforcement learning (RL) and neural networks to
learn workload-specific scheduling algorithms without any human instruction
beyond a high-level objective such as minimizing average job completion time.
Off-the-shelf RL techniques, however, cannot handle the complexity and scale of
the scheduling problem. To build Decima, we had to develop new representations
for jobs' dependency graphs, design scalable RL models, and invent RL training
methods for dealing with continuous stochastic job arrivals. Our prototype
integration with Spark on a 25-node cluster shows that Decima improves the
average job completion time over hand-tuned scheduling heuristics by at least
21%, achieving up to a 2x improvement during periods of high cluster load.
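Decima's full machinery (graph embeddings of job DAGs, training under stochastic arrivals) is beyond the scope of an abstract, but the underlying policy-gradient idea can be sketched on a toy two-job example. The per-job features, reward, and linear-softmax policy below are illustrative assumptions, not Decima's actual model:

```python
import numpy as np

# Toy REINFORCE sketch: a softmax policy scores runnable jobs and is nudged
# toward choices that lower job completion time.
rng = np.random.default_rng(0)

def pick(theta, jobs):
    """Sample a job to schedule from a linear-softmax policy."""
    logits = jobs @ theta
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(jobs), p=probs), probs

def reinforce_step(theta, jobs, a, probs, reward, lr=0.1):
    # grad log pi(a) = x_a - sum_j p_j x_j for a softmax-linear policy
    grad = jobs[a] - probs @ jobs
    return theta + lr * reward * grad

theta = np.zeros(2)                         # weights on [remaining_work, num_children]
jobs = np.array([[5.0, 1.0], [1.0, 0.0]])   # job 1 is much shorter
a, probs = pick(theta, jobs)
reward = 1.0 if a == 1 else -1.0            # reward scheduling the short job first
theta = reinforce_step(theta, jobs, a, probs, reward)
```

After repeated updates of this kind, the policy's logits shift toward shortest-job-first behavior; Decima learns far richer policies the same way, with a graph neural network supplying the job scores.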
Enhancing Dynamic Production Scheduling and Resource Allocation through Adaptive Control Systems with Deep Reinforcement Learning
Traditional production scheduling and resource allocation methods often struggle to adapt to changing conditions in manufacturing environments. To address this challenge, this study leverages an adaptive control system that integrates a Deep Deterministic Policy Gradient (DDPG) with a particle swarm optimization algorithm to enable real-time production scheduling and allocation of resources. The system continuously learns from generated production data and adjusts production schedules and resource allocations based on evolving conditions such as demand fluctuations and resource availability. By harnessing the capabilities of deep reinforcement learning, the proposed approach applies the DDPG algorithm in a simulated environment to improve production efficiency, minimize delays, and optimize resource utilization. Through conducted experiments, the effectiveness of the DDPG-Particle Swarm Optimization technique (DRPO) was demonstrated in enhancing dynamic production scheduling and resource allocation in simulated manufacturing settings. This study presents a significant step towards intelligent, self-improving production control systems that can navigate complex and dynamic manufacturing environments.
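Particle swarm optimization, the metaheuristic paired with DDPG above, can be sketched in a few lines. The objective function, bounds, and coefficients below are illustrative stand-ins (e.g. the toy quadratic in place of a real scheduling cost), not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
W, C1, C2 = 0.7, 1.5, 1.5   # inertia, cognitive, social coefficients (typical values)

def pso_step(pos, vel, pbest, gbest):
    """Standard PSO velocity and position update for the whole swarm."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    return pos + vel, vel

def objective(x):
    # Stand-in for a scheduling cost (e.g. total tardiness); minimum at [3, 3].
    return np.sum((x - 3.0) ** 2, axis=1)

pos = rng.uniform(-10, 10, size=(20, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[pbest_val.argmin()]
for _ in range(50):
    pos, vel = pso_step(pos, vel, pbest, gbest)
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]
print(gbest)  # should approach the optimum [3, 3]
```

In a hybrid scheme like the one described, PSO can refine candidate schedules while the DDPG agent supplies the continuous control decisions.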
Deep Reinforcement Learning-based Scheduling in Edge and Fog Computing Environments
Edge/fog computing, as a distributed computing paradigm, satisfies the
low-latency requirements of an ever-increasing number of IoT applications and
has become the mainstream computing paradigm behind them. However, because a
large number of IoT applications require execution on edge/fog resources, the
servers may become overloaded, which can disrupt the edge/fog servers and
negatively affect the IoT applications' response time. Moreover,
many IoT applications are composed of dependent components incurring extra
constraints for their execution. Besides, edge/fog computing environments and
IoT applications are inherently dynamic and stochastic. Thus, efficient and
adaptive scheduling of IoT applications in heterogeneous edge/fog computing
environments is of paramount importance. However, the limited computational
resources on edge/fog servers impose an extra burden for applying optimal but
computationally demanding techniques. To overcome these challenges, we propose
a Deep Reinforcement Learning-based IoT application Scheduling algorithm,
called DRLIS, to adaptively and efficiently optimize the response time of
heterogeneous IoT applications and balance the load of the edge/fog servers. We
implemented DRLIS as a practical scheduler in the FogBus2 function-as-a-service
framework for creating an edge-fog-cloud integrated serverless computing
environment. Results obtained from extensive experiments show that DRLIS
significantly reduces the execution cost of IoT applications by up to 55%, 37%,
and 50% in terms of load balancing, response time, and weighted cost,
respectively, compared with metaheuristic algorithms and other reinforcement
learning techniques.
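DRLIS optimizes response time and server load balancing jointly under a weighted cost. A hypothetical such cost (the weights and the exact terms below are illustrative assumptions, not DRLIS's actual formulation) might look like:

```python
import numpy as np

W_TIME, W_LOAD = 0.5, 0.5   # hypothetical weights on the two objectives

def weighted_cost(response_times, server_loads):
    """Lower is better: mean response time plus load imbalance (std of server loads)."""
    return W_TIME * np.mean(response_times) + W_LOAD * np.std(server_loads)

balanced   = weighted_cost([1.0, 1.2, 0.9], [0.5, 0.5, 0.5])
imbalanced = weighted_cost([1.0, 1.2, 0.9], [0.9, 0.1, 0.5])
print(balanced < imbalanced)  # True: the balanced placement scores lower
```

An RL scheduler can use the negative of such a cost as its reward signal, so that policies which spread load evenly while keeping response times low are reinforced.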