475 research outputs found
Virtual Machine Deployment Strategy Based on Improved PSO in Cloud Computing
Energy consumption is a major cost driven by the growth of computing power, so energy conservation has become one of the major problems faced by cloud systems. Maximizing the utilization of physical machines, reducing the number of virtual machine migrations, and maintaining load balance under the constraints of physical machine resource thresholds is an effective way to achieve energy savings in a data center. In this paper, we propose a multi-objective physical model for virtual machine deployment and then apply an improved multi-objective particle swarm optimization algorithm (TPSO) to the deployment problem. Compared with other algorithms, TPSO has better ergodicity in the initial stage and improves the optimization precision and efficiency of the particle swarm. Experimental results on the CloudSim simulation platform show that the algorithm is effective at improving physical machine resource utilization, reducing resource waste, and improving system load balance.
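The deployment objective described above (balance host load while keeping the number of active hosts low) can be sketched as a toy discrete PSO. This is an illustrative sketch, not the paper's TPSO: the fitness weights, the pbest/gbest copy probabilities, and all function names are assumptions of this example.

```python
import random

def fitness(assign, vm_load, n_hosts):
    """Lower is better: penalize load imbalance (variance) plus hosts in use."""
    loads = [0.0] * n_hosts
    for vm, host in enumerate(assign):
        loads[host] += vm_load[vm]
    mean = sum(loads) / n_hosts
    variance = sum((l - mean) ** 2 for l in loads) / n_hosts
    used = sum(1 for l in loads if l > 0)
    return variance + used  # assumed weighted sum of the two objectives

def pso_place(vm_load, n_hosts, n_particles=20, iters=100, seed=0):
    """Discrete PSO: a particle is a VM-to-host assignment vector; each
    dimension is stochastically copied from pbest, gbest, or re-randomized."""
    rng = random.Random(seed)
    n_vm = len(vm_load)
    swarm = [[rng.randrange(n_hosts) for _ in range(n_vm)]
             for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    pbest_f = [fitness(p, vm_load, n_hosts) for p in swarm]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(n_vm):  # discrete "velocity" as copy probabilities
                r = rng.random()
                if r < 0.4:
                    p[d] = pbest[i][d]
                elif r < 0.8:
                    p[d] = gbest[d]
                elif r < 0.9:
                    p[d] = rng.randrange(n_hosts)
            f = fitness(p, vm_load, n_hosts)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = p[:], f
                if f < gbest_f:
                    gbest, gbest_f = p[:], f
    return gbest, gbest_f
```

A real deployment model would add hard resource-threshold constraints per host and a migration-count term; here those are omitted to keep the particle update visible.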
Hybridized Darts Game with Beluga Whale Optimization Strategy for Efficient Task Scheduling with Optimal Load Balancing in Cloud Computing
Cloud computing technology permits clients to use hardware and software virtually on a subscription basis. Task scheduling aims to minimize implementation time and cost while increasing resource utilization, and it is one of the most common problems in cloud computing systems. Scheduling is an NP-hard (Nondeterministic Polynomial-time hard) optimization problem, constrained by make-span, resource utilization, implementation cost, and response-time requirements; task allocation is NP-hard because the number of combinations grows with the number of tasks and computing resources. In this work, a hybrid heuristic optimization technique with load balancing is implemented for optimal task scheduling to increase the performance of service providers in the cloud infrastructure. The issues that occur in the scheduling process are thus greatly reduced, and the load balancing problem is effectively solved by the proposed task scheduling scheme. Tasks are allocated to machines according to workload using the proposed Hybridized Darts Game-Based Beluga Whale Optimization Algorithm (HDG-BWOA). Objectives such as higher Cloud Data Center (CDC) resource utilization, an increased task assurance ratio, minimized mean reaction time, and reduced energy consumption are considered while allocating tasks to the virtual machines. This approach preserves flexibility among virtual machines, preventing them from becoming overloaded or underloaded, and allows more tasks to be completed within their deadlines. The efficacy of the proposed arrangement is validated against conventional heuristic-based task scheduling approaches on various evaluation measures.
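The load-balancing goal the abstract describes (assigning tasks to VMs by workload so none is overloaded or underloaded) is easiest to see against a classical baseline. The sketch below is not HDG-BWOA, which is not reproduced here; it is the standard longest-processing-time greedy rule that such metaheuristics are typically benchmarked against, with hypothetical function and variable names.

```python
import heapq

def assign_tasks(task_costs, n_vms):
    """Greedy longest-processing-time (LPT) assignment: sort tasks by
    decreasing cost and always give the next task to the currently
    least-loaded VM (tracked in a min-heap)."""
    loads = [(0.0, vm) for vm in range(n_vms)]
    heapq.heapify(loads)
    schedule = {vm: [] for vm in range(n_vms)}
    for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, vm = heapq.heappop(loads)   # least-loaded VM so far
        schedule[vm].append(task)
        heapq.heappush(loads, (load + cost, vm))
    makespan = max(load for load, _ in loads)
    return schedule, makespan
```

For six tasks of cost 4, 3, 3, 2, 2, 2 on two VMs this yields a makespan of 8, which is optimal here (total work 16 split evenly); a metaheuristic scheduler earns its keep on instances where greedy rules leave a larger gap.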
A review on job scheduling technique in cloud computing and priority rule based intelligent framework
In recent years, the concept of cloud computing has been gaining traction as a way to provide dynamically scalable access to shared computing resources (software and hardware) via the internet. It is no secret that cloud computing's ability to supply mission-critical services has made job scheduling a hot topic in the industry. Poor scheduling wastes cloud resources through under-utilization or degrades in-service performance through over-utilization. This research examines various strategies from the literature in order to characterize the planning and performance of job scheduling techniques (JST) in cloud computing. We first review and tabulate the existing JST linked to cloud and grid computing. Present achievements are then thoroughly reviewed, difficulties and flaws are identified, and intelligent solutions are devised that take advantage of the proposed taxonomy. To bridge the gaps in present investigations, the paper also provides readers with a conceptual framework in which we propose an effective job scheduling technique for cloud computing. These findings are intended to inform academics and policymakers about the advantages of a more efficient cloud computing setup. Because fair job scheduling is paramount in cloud computing, we propose a priority-based scheduling technique to ensure fairness. Finally, the open research questions raised in this article chart a path toward the implementation of an effective job scheduling strategy.
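A minimal sketch of priority-based fair scheduling of the kind proposed above: base priorities decide service order, but an aging term improves a job's effective priority the longer it waits, so low-priority jobs cannot starve. The aging model, the one-time-unit job length, and all names are assumptions of this illustration, not the paper's technique.

```python
def dispatch_order(jobs, aging=1):
    """jobs: list of (name, base_priority, arrival_time); lower value = more
    urgent. A job's effective priority is base_priority - aging * waiting_time,
    so a job that waits long enough eventually overtakes newly arrived
    higher-priority jobs (no starvation). Each job takes one time unit."""
    pending = sorted(jobs, key=lambda j: j[2])  # by arrival time
    waiting, order, t, i = [], [], 0, 0
    while i < len(pending) or waiting:
        while i < len(pending) and pending[i][2] <= t:
            waiting.append(pending[i])
            i += 1
        if not waiting:            # idle until the next arrival
            t = pending[i][2]
            continue
        # choose the job with the best (lowest) aged priority
        best = min(waiting, key=lambda j: j[1] - aging * (t - j[2]))
        waiting.remove(best)
        order.append(best[0])
        t += 1                     # serving the job advances the clock
    return order
```

With aging disabled, a long-waiting low-priority job is served last no matter how long it waits; with aging enabled, it is promoted ahead of later high-priority arrivals, which is the fairness property the abstract targets.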
Deep Reinforcement Learning-based Scheduling in Edge and Fog Computing Environments
Edge/fog computing, as a distributed computing paradigm, satisfies the low-latency requirements of an ever-increasing number of IoT applications and has become the mainstream computing paradigm behind them. However, because a large number of IoT applications require execution on edge/fog resources, the servers may become overloaded, which can disrupt the edge/fog servers and negatively affect the applications' response time. Moreover, many IoT applications are composed of dependent components, incurring extra execution constraints, and edge/fog computing environments and IoT applications are inherently dynamic and stochastic. Efficient and adaptive scheduling of IoT applications in heterogeneous edge/fog computing environments is therefore of paramount importance, yet the limited computational resources of edge/fog servers impose an extra burden on applying optimal but computationally demanding techniques. To overcome these challenges, we propose a Deep Reinforcement Learning-based IoT application Scheduling algorithm, called DRLIS, to adaptively and efficiently optimize the response time of heterogeneous IoT applications and balance the load of the edge/fog servers. We implemented DRLIS as a practical scheduler in the FogBus2 function-as-a-service framework, creating an integrated edge-fog-cloud serverless computing environment. Results from extensive experiments show that, compared with metaheuristic algorithms and other reinforcement learning techniques, DRLIS significantly reduces the execution cost of IoT applications: by up to 55% in load balancing, 37% in response time, and 50% in weighted cost.
Distributed evolutionary algorithms and their models: A survey of the state-of-the-art
The increasing complexity of real-world optimization problems raises new challenges for evolutionary computation. Responding to these challenges, distributed evolutionary computation has received considerable attention over the past decade. This article provides a comprehensive survey of state-of-the-art distributed evolutionary algorithms and models, classified into two groups according to their task division mechanism. Population-distributed models are presented with master-slave, island, cellular, hierarchical, and pool architectures, which parallelize an evolution task at the population, individual, or operation level. Dimension-distributed models include coevolution and multi-agent models, which focus on dimension reduction. Insights into the models, such as synchronization, homogeneity, communication, topology, speedup, and advantages and disadvantages, are also presented and discussed. The study of these models helps guide the future development of different and/or improved algorithms. Also highlighted are recent hotspots in this area, including cloud and MapReduce-based implementations, GPU and CUDA-based implementations, distributed evolutionary multiobjective optimization, and real-world applications. Further, a number of future research directions are discussed, with the conclusion that the development of distributed evolutionary computation will continue to flourish.
Differential evolution with an evolution path: a DEEP evolutionary algorithm
Utilizing the cumulative correlation information already present in an evolutionary process, this paper proposes a predictive approach to the reproduction of new individuals in differential evolution (DE) algorithms. DE uses a distributed model (DM) to generate new individuals, which is relatively explorative, whereas evolution strategies (ES) use a centralized model (CM) to generate offspring, which through adaptation retains convergence momentum. This paper adopts a key feature of the CM in covariance matrix adaptation ES, the cumulatively learned evolution path (EP), to formulate a new evolutionary algorithm (EA) framework termed DEEP, standing for DE with an EP. Without mechanistically combining a CM-based and a DM-based algorithm, the DEEP framework offers the advantages of both models and hence substantially enhances performance. Under this architecture, a self-adaptation mechanism can be built into a DEEP algorithm, easing the task of predetermining algorithm control parameters. Two DEEP variants are developed and illustrated in the paper. Experiments on the CEC'13 test suites and two practical problems demonstrate that the DEEP algorithms offer promising results compared with the original DE variants and other relevant state-of-the-art EAs.
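The core DEEP idea (injecting a cumulatively learned evolution path into DE's reproduction) can be illustrated with a toy variant of DE/rand/1/bin. This is a sketch under assumed parameter values, not the authors' exact formulation: here the path simply accumulates the movement of the population mean, CMA-ES style, and is added to the mutant vector.

```python
import random

def deep_de(f, dim, bounds, pop_size=30, iters=200, F=0.5, CR=0.9,
            c=0.3, Fp=0.5, seed=1):
    """DE/rand/1/bin augmented with an evolution path `path` that tracks
    the drift of the population mean and biases every mutant along it.
    Illustrative only; parameter values F, CR, c, Fp are assumptions."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    mean_old = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
    path = [0.0] * dim
    for _ in range(iters):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = pop[i][:]
            for d in range(dim):
                if rng.random() < CR or d == j_rand:
                    # standard DE mutant plus the evolution-path bias
                    v = pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) + Fp * path[d]
                    trial[d] = min(hi, max(lo, v))
            ft = f(trial)
            if ft <= fit[i]:       # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
        # cumulative update of the evolution path from the mean's movement
        mean_new = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        path = [(1 - c) * p + c * (mn - mo)
                for p, mn, mo in zip(path, mean_new, mean_old)]
        mean_old = mean_new
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

The path term gives the explorative DM reproduction a convergence momentum: once the population drifts consistently in one direction, new mutants are nudged further along it.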
Effects of Particle Swarm Optimisation on a Hybrid Load Balancing Approach for Resource Optimisation in Internet of Things
This article belongs to the Special Issue "Emerging Machine Learning Techniques in Industrial Internet of Things". Copyright © 2023 by the authors. The Internet of Things (IoT), a collection of diverse distributed nodes, supports activities ranging from sleep monitoring and activity tracking to more complex tasks such as data analytics and management. With increasing scale come even greater complexities, leading to significant challenges such as excess energy dissipation, which can shorten IoT devices' lifespan. The IoT's varied activities and heavy data-management load greatly influence device lifespan, making resource optimisation a necessity. Existing resource management and optimisation methods pay limited attention to devices' energy dissipation. This paper therefore proposes a decentralised approach combining efficient clustering techniques, edge computing paradigms, and a hybrid algorithm, targeted at the resource optimisation problems and lifespan issues of IoT devices. The decentralised topology places equal importance on resource allocation and resource scheduling, in contrast to existing methods, by incorporating aspects of static (round robin), dynamic (resource-based), and clustering (particle swarm optimisation) algorithms to provide a solid foundation for an optimised and secure IoT. The simulation constructs five test-case scenarios and uses performance indicators to evaluate the model's effects on resource optimisation in IoT. The results indicate the superiority of the proposed PSOR2B over ant colony optimisation, the current centralised optimisation approach, LEACH, and C-LBCA. This research received no external funding.
A hybrid multi-objective evolutionary algorithm-based semantic foundation for sustainable distributed manufacturing systems
Rising energy prices, increasing maintenance costs, and strict environmental regimes have added to the pressure on the contemporary manufacturing environment. Although the decentralization of the supply chain has led to rapid advances in manufacturing systems, efficiently selecting a supplier from the pool of available ones according to customer requirements, while enhancing the process planning and scheduling functions, remains to be addressed. This paper addresses the issue using a set of gear manufacturing industries located across India as a case study. An integrated classifier-assisted multi-objective evolutionary approach is proposed to optimize makespan, energy consumption, service utilization rate, interoperability, and reliability. To execute the approach, text-mining-based supervised machine-learning models, namely Decision Tree, Naïve Bayes, Random Forest, and Support Vector Machines (SVM), were first adopted to classify suppliers into task-specific suppliers. With the identified suppliers as input, the problem was then formulated as a multi-objective Mixed-Integer Linear Programming (MILP) model, and a Hybrid Multi-Objective Moth Flame Optimization algorithm (HMFO) was proposed to optimize the process planning and scheduling functions. Numerical experiments were carried out on 10 instances of the formulated problem, with results compared against the Non-Dominated Sorting Genetic Algorithm (NSGA-II) to illustrate the feasibility of the approach. The project is funded by the Department of Science and Technology, Science and Engineering Research Board (DST-SERB), a statutory body established through an Act of Parliament (SERB Act 2008, Government of India), with Sanction Order No. ECR/2016/001808, and by the FCT (Portuguese Foundation for Science and Technology) within the R&D Units Projects Scopes UIDB/00319/2020, UIDP/04077/2020, and UIDB/04077/2020.
Research Trends and Outlooks in Assembly Line Balancing Problems
This paper presents findings from a survey of articles published on assembly line balancing problems (ALBPs) during 2014-2018. Before proceeding to a comprehensive literature review, the ineffectiveness of previous ALBP classification structures is discussed and a new classification scheme based on the layout configurations of assembly lines is proposed. The research trend in each assembly line layout is highlighted through graphical presentations, and the open challenges in ALBPs are pinpointed as a technical guideline for future research.
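For readers new to ALBPs, the simplest variant (SALBP-1: minimize the number of stations for a given cycle time) admits a classic greedy heuristic, sketched below. The "largest candidate rule" used here is a textbook method, not one proposed in the survey, and the instance data in the usage example are invented for illustration.

```python
def balance_line(task_times, precedence, cycle_time):
    """SALBP-1 greedy 'largest candidate rule': open stations one at a time
    and fill each with the largest-duration available task (all predecessors
    already assigned) that still fits in the remaining cycle time.
    Assumes every task time is <= cycle_time."""
    remaining, done, stations = set(task_times), set(), []
    while remaining:
        slack, station = cycle_time, []
        while True:
            candidates = [t for t in remaining
                          if precedence.get(t, set()) <= done
                          and task_times[t] <= slack]
            if not candidates:     # nothing else fits: close the station
                break
            t = max(candidates, key=task_times.get)
            station.append(t)
            slack -= task_times[t]
            remaining.remove(t)
            done.add(t)
        stations.append(station)
    return stations
```

For task times A=4, B=3, C=2, D=5, E=1 with precedence A→{B,C}→D→E and cycle time 6, this packs the line into three stations: [A, C], [B], [D, E].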
Reinforcement Learning-assisted Evolutionary Algorithm: A Survey and Research Opportunities
Evolutionary algorithms (EAs), a class of stochastic search methods based on the principles of natural evolution, have received widespread acclaim for their exceptional performance on various real-world optimization problems. While researchers worldwide have proposed a wide variety of EAs, certain limitations remain, such as slow convergence and poor generalization, so numerous scholars actively explore improvements to algorithmic structures, operators, search patterns, and more. Reinforcement learning (RL) integrated as a component in the EA framework has demonstrated superior performance in recent years. This paper presents a comprehensive survey of integrating reinforcement learning into evolutionary algorithms, referred to as reinforcement learning-assisted evolutionary algorithms (RL-EA). We begin with conceptual outlines of reinforcement learning and the evolutionary algorithm, then provide a taxonomy of RL-EA. Subsequently, we discuss the RL-EA integration methods, the RL-assisted strategies adopted by RL-EA, and their applications according to the existing literature. The RL-assisted procedures are divided according to the functions they implement: solution generation, learnable objective functions, algorithm/operator/sub-population selection, parameter adaptation, and other strategies. Finally, we analyze potential directions for future research. This survey serves as a rich resource for researchers interested in RL-EA, overviewing the current state of the art and highlighting the associated challenges; by leveraging it, readers can swiftly gain insight into RL-EA and develop efficient algorithms, fostering further advancement in this emerging field.
Comment: 26 pages, 16 figures
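A minimal instance of the surveyed "operator selection" pattern: an epsilon-greedy bandit learns online which of two mutation operators pays off inside a (1+1)-EA. Everything here (the two operators, the reward definition, and all parameter values) is an assumption of this sketch, not a method taken from the survey.

```python
import random

def bandit_ea(f, dim, iters=500, eps=0.2, seed=3):
    """(1+1)-EA whose mutation operator is picked each step by an
    epsilon-greedy bandit. Reward is 1 when the offspring improves the
    parent, 0 otherwise; operator values are incremental means."""
    rng = random.Random(seed)

    def small_step(x):             # local: perturb one coordinate slightly
        y = x[:]
        y[rng.randrange(dim)] += rng.gauss(0, 0.1)
        return y

    def big_step(x):               # global: jump in every coordinate
        return [v + rng.gauss(0, 1.0) for v in x]

    ops, value, count = [small_step, big_step], [0.0, 0.0], [0, 0]
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(iters):
        if rng.random() < eps:     # explore: random operator
            k = rng.randrange(len(ops))
        else:                      # exploit: best-valued operator so far
            k = max(range(len(ops)), key=lambda i: value[i])
        y = ops[k](x)
        fy = f(y)
        count[k] += 1
        value[k] += ((1.0 if fy < fx else 0.0) - value[k]) / count[k]
        if fy <= fx:               # elitist acceptance
            x, fx = y, fy
    return x, fx
```

Early in the run the big, explorative steps tend to earn reward; as the search closes in on an optimum the bandit shifts credit to the local operator, which is exactly the adaptive operator-selection behavior the survey's taxonomy describes.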