Engineering a QoS Provider Mechanism for Edge Computing with Deep Reinforcement Learning
With the development of new system solutions that integrate traditional cloud
computing with the edge/fog computing paradigm, dynamic optimization of service
execution has become a challenge due to the edge computing resources being more
distributed and dynamic. How to optimize the execution to provide Quality of
Service (QoS) in edge computing depends on both the system architecture and the
resource allocation algorithms in place. We design and develop a QoS provider
mechanism, as an integral component of a fog-to-cloud system, to work in
dynamic scenarios by using deep reinforcement learning. We choose reinforcement
learning since it is particularly well suited for solving problems in dynamic
and adaptive environments where the decision process needs to be frequently
updated. We specifically use a Deep Q-learning algorithm that optimizes QoS by
identifying and blocking devices that potentially cause service disruption due
to dynamicity. We compare the reinforcement-learning-based solution with
state-of-the-art heuristics that use telemetry data, and analyze their pros and cons.
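The allow/block decision described above can be sketched with a small self-contained example. This is a tabular Q-learning simplification of the paper's Deep Q-learning approach, and the states, actions, and reward model below are illustrative assumptions, not taken from the paper:

```python
import random

random.seed(0)

# Tabular Q-learning sketch of the QoS-provider idea: learn whether to allow
# or block a device given its observed stability. All states, actions, and
# rewards here are assumed for illustration; the paper uses a Deep Q-network.
STATES = ["stable", "flaky", "failing"]
ALLOW, BLOCK = 0, 1
ACTIONS = [ALLOW, BLOCK]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

# Assumed immediate QoS reward: blocking a failing device protects the
# service; blocking a stable device wastes capacity.
REWARD = {
    ("stable", ALLOW): 1.0, ("stable", BLOCK): -1.0,
    ("flaky", ALLOW): -0.2, ("flaky", BLOCK): 0.2,
    ("failing", ALLOW): -1.0, ("failing", BLOCK): 1.0,
}

for _ in range(5000):
    s = random.choice(STATES)              # devices arrive in random states
    if random.random() < EPSILON:          # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: q[(s, x)])
    s_next = random.choice(STATES)         # toy transition model
    target = REWARD[(s, a)] + GAMMA * max(q[(s_next, b)] for b in ACTIONS)
    q[(s, a)] += ALPHA * (target - q[(s, a)])

# Greedy policy after training: which devices to block per stability level.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

After training, the learned policy allows stable devices and blocks those whose dynamicity would disrupt service, which is the behavior the QoS provider mechanism aims for.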
Learning Scheduling Algorithms for Data Processing Clusters
Efficiently scheduling data processing jobs on distributed compute clusters
requires complex algorithms. Current systems, however, use simple generalized
heuristics and ignore workload characteristics, since developing and tuning a
scheduling policy for each workload is infeasible. In this paper, we show that
modern machine learning techniques can generate highly efficient policies
automatically. Our system, Decima, uses reinforcement learning (RL) and neural networks to
learn workload-specific scheduling algorithms without any human instruction
beyond a high-level objective such as minimizing average job completion time.
Off-the-shelf RL techniques, however, cannot handle the complexity and scale of
the scheduling problem. To build Decima, we had to develop new representations
for jobs' dependency graphs, design scalable RL models, and invent RL training
methods for dealing with continuous stochastic job arrivals. Our prototype
integration with Spark on a 25-node cluster shows that Decima improves the
average job completion time over hand-tuned scheduling heuristics by at least
21%, achieving up to a 2x improvement during periods of high cluster load.
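The high-level objective mentioned above, minimizing average job completion time (JCT), can be made concrete with a toy single-machine trace. This is not Decima itself; it only shows why job ordering matters and the kind of workload-specific behavior (here, shortest-job-first) an RL scheduler can discover on its own. The trace is made up for illustration:

```python
def avg_jct(durations):
    """Average completion time when jobs run back to back in the given order."""
    clock, total = 0, 0
    for d in durations:
        clock += d          # the job finishes at the current clock
        total += clock      # accumulate its completion time
    return total / len(durations)

trace = [8, 1, 5, 2]                 # job durations in arrival order (assumed)
fifo_jct = avg_jct(trace)            # run in arrival order
sjf_jct = avg_jct(sorted(trace))     # run shortest job first
```

On this trace, shortest-job-first cuts the average JCT from 11.75 to 7.0; a learned policy is rewarded for exactly this kind of reduction, without being told the heuristic.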
Deep Reinforcement Learning-based Scheduling in Edge and Fog Computing Environments
Edge/fog computing, as a distributed computing paradigm, satisfies the
low-latency requirements of an ever-increasing number of IoT applications and has
become the mainstream computing paradigm behind them. However,
because a large number of IoT applications require execution on edge/fog
resources, the servers may become overloaded, which can disrupt the edge/fog
servers and negatively affect the response time of IoT applications. Moreover,
many IoT applications are composed of dependent components, which imposes extra
constraints on their execution. Besides, edge/fog computing environments and
IoT applications are inherently dynamic and stochastic. Thus, efficient and
adaptive scheduling of IoT applications in heterogeneous edge/fog computing
environments is of paramount importance. However, the limited computational
resources on edge/fog servers impose an extra burden when applying optimal but
computationally demanding techniques. To overcome these challenges, we propose
a Deep Reinforcement Learning-based IoT application Scheduling algorithm,
called DRLIS, to adaptively and efficiently optimize the response time of
heterogeneous IoT applications and balance the load of the edge/fog servers. We
implemented DRLIS as a practical scheduler in the FogBus2 function-as-a-service
framework for creating an edge-fog-cloud integrated serverless computing
environment. Results obtained from extensive experiments show that DRLIS
significantly reduces the execution cost of IoT applications by up to 55%, 37%,
and 50% in terms of load balancing, response time, and weighted cost,
respectively, compared with metaheuristic algorithms and other reinforcement
learning techniques.
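The two objectives DRLIS balances, server load and response time, can be combined into a single weighted cost. The sketch below is not the DRLIS algorithm (which learns its policy via deep RL); it is a hypothetical greedy baseline over an assumed cost model, with made-up weights and a queued-work proxy for response time:

```python
# Hypothetical weighted-cost task placement: cost combines server load
# imbalance and estimated response time, mirroring the objectives DRLIS
# optimizes. Weights and the cost model are illustrative assumptions.
W_LOAD, W_TIME = 0.5, 0.5

def imbalance(loads):
    """Spread between the most and least loaded servers."""
    return max(loads) - min(loads)

def assign(tasks, n_servers):
    """Greedily place each task on the server minimizing the weighted cost."""
    loads = [0.0] * n_servers
    placement = []
    for t in tasks:
        best, best_cost = None, None
        for s in range(n_servers):
            trial = loads[:]
            trial[s] += t
            # Response time is approximated by the chosen server's queued work.
            cost = W_LOAD * imbalance(trial) + W_TIME * trial[s]
            if best_cost is None or cost < best_cost:
                best, best_cost = s, cost
        loads[best] += t
        placement.append(best)
    return placement, loads

placement, loads = assign([4, 3, 2, 1], 2)
```

On this toy input the greedy rule ends with perfectly balanced servers; DRLIS replaces such a hand-tuned rule with a policy learned online in a dynamic environment.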
A Review on Energy Consumption Optimization Techniques in IoT Based Smart Building Environments
In recent years, owing to the unnecessary waste of electrical energy in
residential buildings, energy optimization and user comfort have gained vital
importance. In the literature, various techniques have been proposed to address
the energy optimization problem. The goal of each technique is to maintain a
balance between user comfort and energy requirements, such that the user
achieves the desired comfort level with the minimum amount of energy
consumption. Researchers have addressed the issue using different optimization
algorithms and variations in their parameters to reduce energy consumption. To
the best of our knowledge, this problem has not yet been fully solved due to
its challenging nature. The gap in the literature stems from ongoing
advancements in technology, the drawbacks of existing optimization algorithms,
and the introduction of new algorithms. Further, many newly proposed
optimization algorithms have produced better accuracy on benchmark instances
but have not yet been applied to the optimization of energy consumption in
smart homes. In this paper, we carry out a detailed literature review of the
techniques used for the optimization of energy consumption and scheduling in
smart homes. We discuss in detail the different factors contributing to thermal
comfort, visual comfort, and air-quality comfort. We also review the fog and
edge computing techniques used in smart homes.
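The comfort-versus-energy balance the surveyed techniques optimize can be stated as a small weighted objective. The quadratic discomfort model, temperatures, and weights below are illustrative assumptions, not drawn from any specific surveyed technique:

```python
# Illustrative comfort-energy trade-off: pick an indoor setpoint minimizing a
# weighted sum of heating energy and thermal discomfort. All constants and
# models here are assumed for illustration.
OUTDOOR, PREFERRED = 5.0, 22.0   # outdoor and user-preferred temperature (Celsius)
W_ENERGY, W_COMFORT = 1.0, 3.0   # assumed objective weights

def cost(setpoint):
    energy = abs(setpoint - OUTDOOR)          # heating effort grows with the gap
    discomfort = (setpoint - PREFERRED) ** 2  # deviation from preferred temperature
    return W_ENERGY * energy + W_COMFORT * discomfort

# Grid search over candidate setpoints from 15.0 to 26.0 C in 0.5 C steps.
candidates = [t / 2 for t in range(30, 53)]
best = min(candidates, key=cost)
```

With these weights the comfort term dominates and the optimizer stays at the preferred temperature; shifting weight toward energy pulls the setpoint down, which is exactly the trade-off the surveyed algorithms tune.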