3,921 research outputs found
A WOA-based optimization approach for task scheduling in cloud computing systems
Task scheduling in cloud computing can directly
affect the resource usage and operational cost of a system. To
improve the efficiency of task execution in a cloud, various
metaheuristic algorithms, as well as their variations, have been
proposed to optimize the scheduling. In this work, for the first time, we apply a recent metaheuristic, WOA (the whale optimization algorithm), to cloud task scheduling with a multi-objective optimization model, aiming to improve the performance of a cloud system with given computing resources. On that
basis, we propose an advanced approach called IWC (Improved
WOA for Cloud task scheduling) to further improve the optimal
solution search capability of the WOA-based method. We present
the detailed implementation of IWC and our simulation-based
experiments show that the proposed IWC has better convergence
speed and accuracy in searching for the optimal task scheduling
plans, compared with current metaheuristic algorithms. Moreover, it also achieves better system resource utilization in the presence of both small- and large-scale tasks.
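For readers unfamiliar with WOA's mechanics, the sketch below applies the standard whale updates (encircling the best solution, the bubble-net spiral, and random search) to a simple makespan objective; the task/VM sizes, the round-to-VM encoding, and all parameters are illustrative assumptions, not the paper's IWC.

```python
import numpy as np

# Minimal WOA sketch for cloud task scheduling (illustrative only).
# A candidate solution is a continuous vector whose rounded entries
# assign each task to a VM; fitness is the makespan.
rng = np.random.default_rng(0)
n_tasks, n_vms, pop, iters = 50, 8, 30, 200
lengths = rng.uniform(100, 1000, n_tasks)      # assumed task lengths (MI)
speeds = rng.uniform(500, 1500, n_vms)         # assumed VM speeds (MIPS)

def makespan(x):
    assign = np.clip(np.rint(x), 0, n_vms - 1).astype(int)
    load = np.bincount(assign, weights=lengths, minlength=n_vms)
    return (load / speeds).max()               # finish time of busiest VM

X = rng.uniform(0, n_vms - 1, (pop, n_tasks))  # whale positions
best = min(X, key=makespan).copy()

for t in range(iters):
    a = 2 - 2 * t / iters                      # a decreases linearly 2 -> 0
    for i in range(pop):
        A = 2 * a * rng.random(n_tasks) - a
        C = 2 * rng.random(n_tasks)
        if rng.random() < 0.5:
            if np.abs(A).mean() < 1:           # exploit: encircle the best whale
                X[i] = best - A * np.abs(C * best - X[i])
            else:                              # explore: move toward a random whale
                rand = X[rng.integers(pop)]
                X[i] = rand - A * np.abs(C * rand - X[i])
        else:                                  # bubble-net spiral around the best
            l = rng.uniform(-1, 1, n_tasks)
            D = np.abs(best - X[i])
            X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
        X[i] = np.clip(X[i], 0, n_vms - 1)
        if makespan(X[i]) < makespan(best):
            best = X[i].copy()

print("best makespan found:", makespan(best))
```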
Deep Reinforcement Learning-based Scheduling in Edge and Fog Computing Environments
Edge/fog computing, as a distributed computing paradigm, satisfies the
low-latency requirements of an ever-increasing number of IoT applications and has
become the mainstream computing paradigm behind IoT applications. However,
because a large number of IoT applications require execution on edge/fog resources, the servers may become overloaded, which can disrupt the edge/fog servers and negatively affect IoT applications' response times. Moreover,
many IoT applications are composed of dependent components, which incurs extra constraints on their execution. Besides, edge/fog computing environments and
IoT applications are inherently dynamic and stochastic. Thus, efficient and
adaptive scheduling of IoT applications in heterogeneous edge/fog computing
environments is of paramount importance. However, limited computational
resources on edge/fog servers impose an extra burden when applying optimal but
computationally demanding techniques. To overcome these challenges, we propose
a Deep Reinforcement Learning-based IoT application Scheduling algorithm,
called DRLIS, to adaptively and efficiently optimize the response time of
heterogeneous IoT applications and balance the load of the edge/fog servers. We
implemented DRLIS as a practical scheduler in the FogBus2 function-as-a-service
framework for creating an edge-fog-cloud integrated serverless computing
environment. Results obtained from extensive experiments show that DRLIS
significantly reduces the execution cost of IoT applications by up to 55%, 37%,
and 50% in terms of load balancing, response time, and weighted cost,
respectively, compared with metaheuristic algorithms and other reinforcement
learning techniques.
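As a toy illustration of RL-based scheduling of this kind, the sketch below trains a tabular Q-learning policy that places arriving tasks so as to trade off the chosen server's queue (a response-time proxy) against load imbalance. DRLIS itself uses deep RL inside FogBus2; the server count, reward shape, and hyperparameters here are assumptions for the sketch.

```python
import numpy as np

# Toy tabular Q-learning scheduler (not DRLIS): state = discretized per-server
# loads, action = server to receive the next task.
rng = np.random.default_rng(1)
n_servers, episodes = 4, 3000
alpha, gamma, eps = 0.1, 0.9, 0.1              # assumed learning hyperparameters
levels = 5                                     # load levels per server

Q = np.zeros((levels,) * n_servers + (n_servers,))

def discretize(load, cap=10.0):
    return tuple(np.minimum((load / cap * levels).astype(int), levels - 1))

for _ in range(episodes):
    load = np.zeros(n_servers)
    for _ in range(30):                        # 30 task arrivals per episode
        s = discretize(load)
        a = rng.integers(n_servers) if rng.random() < eps else int(np.argmax(Q[s]))
        load[a] += rng.uniform(0.5, 2.0)       # assign task of random size
        # Reward penalizes the chosen server's queue (response-time proxy)
        # and the load imbalance across all servers.
        r = -load[a] - load.std()
        load = np.maximum(load - 1.0, 0.0)     # each server drains one unit per step
        s2 = discretize(load)
        Q[s][a] += alpha * (r + gamma * Q[s2].max() - Q[s][a])

print("greedy first placement:", int(np.argmax(Q[discretize(np.zeros(n_servers))])))
```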
High-Performance Cloud Computing: A View of Scientific Applications
Scientific computing often requires the availability of a massive number of
computers for performing large scale experiments. Traditionally, these needs
have been addressed by using high-performance computing solutions and installed
facilities such as clusters and supercomputers, which are difficult to set up,
maintain, and operate. Cloud computing provides scientists with a completely
new model of utilizing the computing infrastructure. Compute resources, storage
resources, as well as applications, can be dynamically provisioned (and
integrated within the existing infrastructure) on a pay-per-use basis. These
resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the
desired Quality of Service (QoS). Aneka, an enterprise Cloud computing
solution, harnesses the power of compute resources by relying on private and
public Clouds and delivers to users the desired QoS. Its flexible and service
based infrastructure supports multiple programming paradigms, allowing Aneka to address a variety of scenarios, from finance applications to
computational science. As examples of scientific computing in the Cloud, we
present a preliminary case study on using Aneka for the classification of gene
expression data and the execution of an fMRI brain-imaging workflow.
Normalization: A Preprocessing Stage
Normalization is a preprocessing stage for many kinds of problems. It plays a particularly important role in fields such as soft computing and cloud computing, where the range of the data is scaled down or scaled up before the data is used in a further stage. Several normalization techniques exist, namely Min-Max normalization, Z-score normalization, and decimal scaling normalization. Building on these techniques, we propose a new normalization technique, Integer Scaling Normalization, and demonstrate it on various data sets.
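For context, a minimal sketch of the three standard techniques the abstract names is given below; the sample values are illustrative, and the proposed Integer Scaling Normalization is not reproduced since the abstract does not specify it.

```python
import numpy as np

# The three standard normalization techniques, in minimal form.
x = np.array([12.0, 45.0, 7.0, 88.0, 33.0])    # illustrative sample data

# Min-Max normalization: rescale values into [0, 1].
min_max = (x - x.min()) / (x.max() - x.min())

# Z-score normalization: zero mean, unit standard deviation.
z_score = (x - x.mean()) / x.std()

# Decimal scaling: divide by 10^j, where j is the smallest integer
# such that max(|x|) / 10^j < 1.
j = int(np.ceil(np.log10(np.abs(x).max() + 1)))
decimal = x / 10 ** j

print(min_max, z_score, decimal, sep="\n")
```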
SLO-aware Colocation of Data Center Tasks Based on Instantaneous Processor Requirements
In a cloud data center, a single physical machine simultaneously executes
dozens of highly heterogeneous tasks. Such colocation results in more efficient
utilization of machines, but, when tasks' requirements exceed available
resources, some of the tasks might be throttled down or preempted. We analyze
version 2.1 of the Google cluster trace that shows short-term (1 second) task
CPU usage. Contrary to the assumptions taken by many theoretical studies, we
demonstrate that the empirical distributions do not follow any single
distribution. However, high percentiles of the total processor usage (summed
over at least 10 tasks) can be reasonably estimated by the Gaussian
distribution. We use this result for a probabilistic fit test, called the
Gaussian Percentile Approximation (GPA), for standard bin-packing algorithms.
To check whether a new task will fit into a machine, GPA checks whether the
resulting distribution's percentile corresponding to the requested service level objective (SLO) is still below the machine's capacity. In our simulation
experiments, GPA resulted in colocations exceeding the machines' capacity with
a frequency similar to the requested SLO.
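To make the fit test concrete, here is a minimal sketch of a GPA-style check under the stated Gaussian approximation: the machine "fits" a new task if the SLO percentile of the summed usage stays below capacity. The task statistics, the independence assumption behind summing variances, and the 99% SLO and 16-core capacity are illustrative assumptions, not values from the trace study.

```python
import numpy as np
from scipy.stats import norm

def gpa_fits(task_means, task_vars, new_mean, new_var, capacity, slo=0.99):
    """True if adding the new task keeps the SLO percentile under capacity."""
    mu = sum(task_means) + new_mean
    sigma = np.sqrt(sum(task_vars) + new_var)   # assumes independent task usage
    # Percentile of the (approximately Gaussian) total usage at the SLO level.
    return norm.ppf(slo, loc=mu, scale=sigma) <= capacity

# Example: 12 tasks already colocated on an assumed 16-core machine.
rng = np.random.default_rng(2)
means = rng.uniform(0.5, 1.0, 12)               # per-task mean CPU usage (cores)
vars_ = rng.uniform(0.01, 0.1, 12)              # per-task usage variance
print(gpa_fits(means, vars_, new_mean=0.8, new_var=0.05, capacity=16.0))
```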
Machine learning based Model for Cloud Load Prediction and Resource Allocation
Elasticity and the lack of upfront capital investment offered by cloud computing are appealing to many businesses. There is a lot of discussion on the benefits and costs of the cloud model and on how to move legacy applications onto the cloud platform. Here we study a different problem: how can a cloud service provider best multiplex its virtual resources onto the physical hardware? This is important because much of the touted gains in the cloud model come from such multiplexing. Studies have found that servers in many existing data centers are often severely under-utilized due to over-provisioning for peak demand. The cloud model is expected to make such practice unnecessary by offering automatic scale-up and scale-down in response to load variation. Besides reducing hardware costs, it also saves on electricity, which contributes a significant portion of the operational expenses of large data centers.
Proper allocation of the various virtualized resources must be based on accurate predictions of cloud load. The presence of heterogeneous applications, such as content delivery networks, web applications, and MapReduce tasks, complicates this process. Cloud workloads with conflicting resource allocation needs for communication and information processing further exacerbate the difficulty.
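As a minimal illustration of the prediction step, the sketch below fits a linear autoregressive model to synthetic CPU-load history and provisions headroom above the forecast; the workload shape, lag count, and 20% headroom are assumptions, not the paper's model.

```python
import numpy as np

# Fit a linear autoregressive load predictor on lagged CPU-load samples,
# then size the allocation from the forecast.
rng = np.random.default_rng(3)
t = np.arange(500)
load = 50 + 20 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 3, t.size)  # synthetic cycle

lags = 8
X = np.column_stack([load[i:i - lags] for i in range(lags)])  # lagged feature rows
y = load[lags:]                                               # next-step targets
w, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)

recent = load[-lags:]
forecast = recent @ w[:-1] + w[-1]
allocation = 1.2 * forecast        # provision 20% headroom above the forecast
print(f"forecast={forecast:.1f}%, allocate={allocation:.1f}% of machine CPU")
```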
Task scheduling model for fog paradigm
Task scheduling in the fog paradigm is highly complex, and there are still few studies of it in the literature. In the cloud architecture it is widely studied, and much of that research approaches it from the perspective of service providers. Seeking to bring innovative contributions to these areas, in this paper we propose a model for the context-aware task-scheduling problem in the fog paradigm. In our proposal, different context parameters are normalized through Min-Max normalization; requisition priorities are defined through the Multiple Linear Regression (MLR) technique; and scheduling is performed using the Multi-Objective Non-Linear Programming Optimization (MONLIP) technique. The authors are grateful to the Calouste Gulbenkian Foundation for funding this research through a Ph.D. scholarship (reference No. 234242, 2019, Postgraduate Scholarships for students from PALOP and Timor-Leste).
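A minimal sketch of the normalization and priority steps described above follows; the context parameters (deadline, data size, battery level) and the regression coefficients are invented for illustration, and the MONLIP scheduling step is not reproduced.

```python
import numpy as np

# Min-Max normalize assumed context parameters, then score each request
# with a multiple-linear-regression priority model (coefficients assumed,
# as if fitted offline; tighter deadline -> higher priority).
rng = np.random.default_rng(4)
raw = rng.uniform([1, 10, 5], [60, 500, 100], size=(20, 3))  # deadline(s), size(KB), battery(%)

# Min-Max normalization, applied per context parameter (column).
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))

# MLR priority: b0 + b1*deadline + b2*size + b3*battery (assumed weights).
b = np.array([0.1, -0.6, 0.3, -0.2])
priority = b[0] + norm @ b[1:]
print("service order (highest priority first):", np.argsort(-priority))
```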