
    Virtual machine placement in cloud using artificial bee colony and imperialist competitive algorithm

    Increasing resource efficiency and reducing energy consumption are significant challenges in cloud environments, and placing virtual machines well is essential to improving cloud systems’ performance. This paper presents a hybrid method combining the artificial bee colony and imperialist competitive algorithms to reduce provider costs and decrease client expenditure. Implementation of the proposed approach in the CloudSim simulation environment indicates that it performs better than the monarch butterfly optimization and salp swarm algorithms in terms of energy consumption and resource usage. Moreover, average central processing unit (CPU) and random-access memory (RAM) usage and the number of host shutdowns also show better results for the proposed model.
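
    As a rough illustration of how such a hybrid placement loop can be structured, the sketch below combines an ABC-style neighbourhood move with an ICA-style assimilation step toward the best placement. The cost function, capacities, and parameters are hypothetical placeholders, not the paper's model or its CloudSim implementation.

```python
import random

# Minimal sketch of a hybrid ABC/ICA VM placement loop (illustrative only).
# A "placement" assigns each VM index to a host index.

NUM_VMS, NUM_HOSTS = 20, 5
VM_CPU = [random.randint(1, 8) for _ in range(NUM_VMS)]   # assumed demands
HOST_CAP = [16] * NUM_HOSTS                               # assumed capacities

def cost(placement):
    """Hypothetical cost: penalise overloaded hosts and count active hosts
    as a proxy for energy consumption."""
    load = [0] * NUM_HOSTS
    for vm, host in enumerate(placement):
        load[host] += VM_CPU[vm]
    overload = sum(max(0, l - c) for l, c in zip(load, HOST_CAP))
    active = sum(1 for l in load if l > 0)
    return active + 10 * overload

def neighbour(placement):
    """ABC-style local search: move one random VM to another host."""
    p = placement[:]
    p[random.randrange(NUM_VMS)] = random.randrange(NUM_HOSTS)
    return p

# Initial candidate placements ("colonies")
population = [[random.randrange(NUM_HOSTS) for _ in range(NUM_VMS)]
              for _ in range(30)]

for _ in range(200):
    # Employed-bee phase: keep the better of each placement and its neighbour
    population = [min(p, neighbour(p), key=cost) for p in population]
    # ICA-style competition: colonies assimilate assignments from the best
    best = min(population, key=cost)
    for p in population:
        if p is not best and random.random() < 0.3:
            i = random.randrange(NUM_VMS)
            p[i] = best[i]

print("best cost:", cost(min(population, key=cost)))
```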

    Multi-objective Optimization of Orbit Transfer Trajectory Using Imperialist Competitive Algorithm

    This paper proposes a systematic direct approach to effective multi-objective optimization of space orbit transfer with high-level thrust acceleration. The objective is to provide a transfer trajectory with acceptable accuracy in all orbital parameters while minimizing spacecraft fuel consumption. With direct control parameterization, in which the steering angles of the thrust vector are interpolated through a finite number of nodes, the optimal control problem is converted into a parameter optimization problem to be solved by nonlinear programming. Besides the thrust vector direction angles, the thrust magnitude is also treated as an unknown variable, along with the initial conditions. Since the deviation of the thrust vector of a spacecraft is limited in reality, the thrust vector direction is modeled mathematically so as to satisfy constraints on the maximum deviation of the direction angles. In this modeling, the polynomial function of each steering angle is defined by interpolating a curve through a finite number of points distributed uniformly within a specific range around a nominal center point. This definition introduces additional parameters into the optimization problem, which enhances the ability of the search method to satisfy the constraint on the variation of the thrust direction angles. The thrust profile is also modeled by polynomial functions of time for both solid- and liquid-propellant rockets. The imperialist competitive algorithm is used to find the optimal polynomial coefficients for the thrust vector and the optimal initial states of the transfer. Results are mainly affected by the degree of the polynomials used in the mathematical modeling of the steering angles and thrust profile, which leads to different optimal initial states at which the transfer begins. It is shown that the proposed method is beneficial from the viewpoints of optimality and convergence. Optimality is demonstrated by comparing the finite-thrust optimization with an impulsive analysis; the comparison shows that the accuracy is acceptable, with fair precision in the orbital elements and minimum fuel mass. Convergence of the optimization algorithm is investigated by comparing the solution with other optimization techniques such as the genetic algorithm. The results confirm the practicality of the imperialist competitive algorithm in finding the optimal variation of the thrust vector, yielding the best transfer accuracy while minimizing fuel consumption.
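
    The following sketch illustrates the direct control parameterization idea described above: steering-angle nodes are constrained to a bounded deviation around a nominal centre and interpolated by a polynomial whose coefficients an optimizer such as the imperialist competitive algorithm would search over. The node count, nominal angle, and deviation bound are assumed values for illustration only.

```python
import numpy as np

# Sketch of direct control parameterization for one steering angle:
# node values are bounded around a nominal centre, then interpolated by a
# polynomial. All numeric values below are illustrative placeholders.

t_nodes = np.linspace(0.0, 1.0, 6)     # normalised times of the nodes
nominal = np.deg2rad(10.0)             # nominal centre of the steering angle
max_dev = np.deg2rad(5.0)              # allowed deviation around the centre

# Decision variables the optimizer would produce (here: uniform random draws)
rng = np.random.default_rng(0)
angle_nodes = nominal + rng.uniform(-max_dev, max_dev, size=t_nodes.size)

# Fit a polynomial through the nodes; its coefficients are the quantities
# an ICA-style search would actually optimize.
coeffs = np.polyfit(t_nodes, angle_nodes, deg=t_nodes.size - 1)
theta = np.poly1d(coeffs)

# The node values satisfy the deviation constraint by construction.
assert np.all(np.abs(theta(t_nodes) - nominal) <= max_dev + 1e-6)

t = np.linspace(0.0, 1.0, 200)
print("steering angle at mid-transfer (deg):", np.rad2deg(theta(0.5)))
print("max deviation over the transfer (deg):",
      np.rad2deg(np.max(np.abs(theta(t) - nominal))))
```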

    Collaborative gold mining algorithm: an optimization algorithm based on the natural gold mining process

    Optimization algorithms face several challenges, including failure to find the optimal solution, slow convergence, lack of scalability, partial coverage of the search space, and high computational demand. Inspired by the process of gold exploration and exploitation, we propose a new meta-heuristic and stochastic optimization algorithm called collaborative gold mining (CGM). The proposed algorithm runs for several iterations; in each one, the center of mass of the points with the highest amount of gold is calculated for each miner (agent), and this process continues until the point with the highest amount of gold, i.e., the optimal solution, is found. In an n-dimensional geographic space, the CGM algorithm can locate the best position with the highest amount of gold in the entire search space through the collaboration of several gold miners. The proposed CGM algorithm was applied to several continuous mathematical functions and several practical problems, namely, the optimal placement of resources, the traveling salesman problem, and bag-of-tasks scheduling. In order to evaluate its efficiency, the CGM results were compared with the outputs of well-known optimization algorithms such as the genetic algorithm, simulated annealing, particle swarm optimization, and invasive weed optimization. In addition to determining the optimal solutions for all the evaluated problems, the experimental results show that the CGM mechanism has acceptable performance in terms of optimal solution, convergence, scalability, search space, and computational demand for solving continuous and discrete problems.
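
    A minimal sketch of the centre-of-mass step described above is given below: each miner samples candidate points, takes the gold-weighted centre of mass of the best ones, and moves toward it. The objective function, sample counts, and step size are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def gold(x):
    """Hypothetical 'amount of gold' at point x (higher is better);
    the maximum lies at the origin."""
    return -np.sum(x ** 2)

rng = np.random.default_rng(1)
dim, n_miners, n_samples = 2, 5, 8
miners = rng.uniform(-10, 10, size=(n_miners, dim))   # initial miner positions

for _ in range(100):
    for i, pos in enumerate(miners):
        # Each miner probes points around its current position
        samples = pos + rng.normal(scale=1.0, size=(n_samples, dim))
        values = np.array([gold(s) for s in samples])
        idx = np.argsort(values)[-3:]                  # points with most gold
        top, v = samples[idx], values[idx]
        w = v - values.min() + 1e-9                    # gold-proportional masses
        centre = np.average(top, axis=0, weights=w)    # centre of mass
        miners[i] = pos + 0.5 * (centre - pos)         # move toward it

best = max(miners, key=gold)
print("best position:", best, "gold value:", gold(best))
```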

    Distributed and Lightweight Meta-heuristic Optimization method for Complex Problems

    The world is becoming larger and more complex every day. Resources are limited, and using them efficiently is one of the most important requirements, so finding efficient and optimal solutions to complex problems requires practical methods. During the last decades, several optimization approaches have been presented that can be applied to different optimization problems and that achieve different levels of performance on them. Parameters such as the type of search space can have a significant effect on the results. Of the two main categories of optimization methods (deterministic and stochastic), stochastic optimization methods work more efficiently on large, complex problems than deterministic methods. However, on highly complex problems, stochastic optimization methods also have issues, such as long execution times, convergence to local optima, incompatibility with distributed systems, and dependence on the type of search space. This thesis therefore presents a distributed and lightweight metaheuristic optimization method (MICGA) for complex problems, focusing on four main tracks: 1) the primary goal is to reduce execution time; 2) the proposed method increases the stability and reliability of the results by using a multi-population strategy; 3) MICGA is compatible with distributed systems; and 4) MICGA is applied to different types of optimization problems with different kinds of search spaces (continuous, discrete, and order-based optimization problems). MICGA has been compared with other efficient optimization approaches, and the results show that the proposed method achieves clear improvements on the main issues of stochastic methods mentioned above.
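
    As a hedged illustration of the multi-population strategy mentioned in track 2, the sketch below runs several sub-populations of a simple genetic algorithm and periodically migrates their best individuals in a ring topology; the operators and parameters are generic choices and are not taken from the MICGA design.

```python
import random

# Illustrative island-model (multi-population) GA sketch: independent
# sub-populations evolve separately and periodically exchange their best
# individuals. Generic operators only; not the MICGA algorithm itself.

def fitness(x):
    return -sum(v * v for v in x)                 # maximise (optimum at 0)

def evolve(pop):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: len(pop) // 2]
    children = []
    while len(survivors) + len(children) < len(pop):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]                          # one-point crossover
        child = [v + random.gauss(0, 0.1) for v in child]  # Gaussian mutation
        children.append(child)
    return survivors + children

dim, islands, size = 5, 4, 20
populations = [[[random.uniform(-5, 5) for _ in range(dim)] for _ in range(size)]
               for _ in range(islands)]

for generation in range(200):
    populations = [evolve(p) for p in populations]
    if generation % 25 == 0:                       # periodic migration step
        bests = [max(p, key=fitness) for p in populations]
        for i, p in enumerate(populations):
            p[-1] = bests[(i + 1) % islands]       # ring-topology migration

best = max((max(p, key=fitness) for p in populations), key=fitness)
print("best fitness:", fitness(best))
```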

    Multi-objective ACO resource consolidation in cloud computing environment

    Cloud computing systems provide services to users based on a pay-as-you-go model. The high level of interest and the volume of user requests in cloud computing have resulted in the creation of data centers with large numbers of physical machines. These data centers consume huge amounts of electrical energy and produce substantial air emissions. In order to improve data center efficiency, resource consolidation using virtualization technology is becoming important for reducing the environmental impact of data centers (e.g. electricity usage and carbon dioxide emissions). Using virtualization technology, multiple virtual machine (VM) instances, logical slices of a server, can be initialised on a single physical machine. As a result, the amount of active hardware is reduced and the utilisation of physical resources is increased. The present thesis focuses on the problems of virtual machine placement and virtual machine consolidation in a cloud computing environment. VM placement is the process of mapping virtual machines (VMs) to physical machines (PMs) (Beloglazov and Buyya); VM consolidation reallocates and optimizes the mapping of VMs to PMs using migration techniques. The goal is to minimize energy consumption, resource wastage and the energy communication cost between network elements within a data center under QoS constraints through VM placement and VM consolidation algorithms. Multi-objective algorithms are proposed to control the trade-off between energy, performance and quality of service. The algorithms have been compared against other approaches using the CloudSim toolkit, and the results demonstrate that the proposed algorithms can find solutions that exhibit a balance between the different objectives. Our main contributions are a multi-objective optimization placement approach that minimizes the total energy consumption of a data center, resource wastage and energy communication cost, and a multi-objective consolidation approach that minimizes the total energy consumption of the data center, the number of migrations and the number of PMs while reconfiguring resources to satisfy the SLA. The results have also been compared with other single-objective and multi-objective algorithms.
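
    The sketch below illustrates, under assumed models, how a pheromone-guided construction step can trade off an energy proxy (number of active PMs) against resource wastage when assigning VMs to PMs; it is not the thesis' actual algorithm, objective formulation, or CloudSim implementation.

```python
import random

# Compact ACO-flavoured placement sketch with a weighted multi-objective score.
# Demands, weights, and the pheromone update are illustrative assumptions.

NUM_VMS, NUM_PMS = 12, 4
VM_CPU = [random.uniform(0.1, 0.4) for _ in range(NUM_VMS)]   # assumed demands
pheromone = [[1.0] * NUM_PMS for _ in range(NUM_VMS)]

def score(assignment):
    """Weighted sum of: active PMs (energy proxy), unused capacity on active
    PMs (resource wastage), and a penalty for overloaded PMs."""
    load = [0.0] * NUM_PMS
    for vm, pm in enumerate(assignment):
        load[pm] += VM_CPU[vm]
    active = [l for l in load if l > 0]
    energy = len(active)
    wastage = sum(max(0.0, 1.0 - l) for l in active)
    overload = sum(max(0.0, l - 1.0) for l in load)
    return energy + 0.5 * wastage + 10.0 * overload

def construct():
    """One ant builds a complete VM-to-PM assignment guided by pheromone."""
    return [random.choices(range(NUM_PMS), weights=pheromone[vm])[0]
            for vm in range(NUM_VMS)]

best, best_score = None, float("inf")
for _ in range(100):
    for ant in (construct() for _ in range(10)):
        s = score(ant)
        if s < best_score:
            best, best_score = ant, s
    for vm in range(NUM_VMS):
        for pm in range(NUM_PMS):
            pheromone[vm][pm] *= 0.9            # evaporation
        pheromone[vm][best[vm]] += 1.0          # reinforce best-so-far assignment

print("best multi-objective score:", round(best_score, 3))
```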

    An improved dynamic load balancing for virtual machines in cloud computing using hybrid bat and bee colony algorithms

    Cloud technology is a utility in which different hardware and software resources are accessed on a pay-per-use basis. Most of these resources are available in virtualized form, and the virtual machine (VM) is one of the main elements of virtualization. In virtualization, a physical server is turned into virtual machines that each act as a physical server. Due to the large number of users, the tasks sent to the cloud sometimes cause VMs to become underloaded or overloaded. This state arises from a poor task allocation process and causes system failures or delays user tasks. To improve task allocation, several load balancing techniques have been introduced for the cloud, but system failures still occur. Therefore, to overcome these problems, this study proposes an improved dynamic load balancing technique, the HBAC algorithm, which dynamically allocates tasks by hybridizing the artificial bee colony (ABC) algorithm with the bat algorithm. The proposed HBAC algorithm was tested and compared with other state-of-the-art algorithms on 200 to 2000 evenly incremented tasks using CloudSim with standard workload format (SWF) data sets (file sizes of 200 KB and 400 KB). The proposed HBAC showed improved accuracy in task distribution and reduced the makespan of VMs in a cloud data center. Based on the ANOVA comparison test results, the proposed HBAC algorithm achieves a 1.25 percent improvement in accuracy and a 0.98 percent reduction in makespan for the VM task allocation system in cloud computing.
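
    A toy sketch of the underlying load-balancing problem is shown below: tasks are assigned to VMs to minimise the makespan, using a bat-style global move toward the best solution and a bee-colony-style local move from the busiest to the least loaded VM. These operators are generic stand-ins, not the exact HBAC update rules.

```python
import random

# Toy task-to-VM load balancing sketch with generic bat-style and bee-style
# moves. Task lengths, population size, and move rules are assumptions.

NUM_TASKS, NUM_VMS = 50, 6
TASK_LEN = [random.randint(1, 20) for _ in range(NUM_TASKS)]

def makespan(assign):
    load = [0] * NUM_VMS
    for task, vm in enumerate(assign):
        load[vm] += TASK_LEN[task]
    return max(load)

def bat_move(assign, best):
    """Global move: pull a few task assignments toward the best solution."""
    a = assign[:]
    for task in random.sample(range(NUM_TASKS), 5):
        a[task] = best[task]
    return a

def bee_move(assign):
    """Local move: shift one task from the most loaded VM to the least loaded."""
    load = [0] * NUM_VMS
    for task, vm in enumerate(assign):
        load[vm] += TASK_LEN[task]
    busiest, idlest = load.index(max(load)), load.index(min(load))
    candidates = [t for t, vm in enumerate(assign) if vm == busiest]
    a = assign[:]
    a[random.choice(candidates)] = idlest
    return a

population = [[random.randrange(NUM_VMS) for _ in range(NUM_TASKS)]
              for _ in range(15)]
best = min(population, key=makespan)
for _ in range(300):
    population = [min(p, bat_move(p, best), bee_move(p), key=makespan)
                  for p in population]
    best = min(population + [best], key=makespan)

print("makespan:", makespan(best), "ideal balance:", sum(TASK_LEN) / NUM_VMS)
```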

    Symbiotic Organisms Search Algorithm: theory, recent advances and applications

    The symbiotic organisms search algorithm is a very promising recent metaheuristic algorithm. It has received a great deal of attention from all areas of numerical optimization research, as well as engineering design practice, and it has since undergone several modifications, either in the form of hybridization or as other improved variants of the original algorithm. However, despite the remarkable achievements and the rapidly expanding body of literature on the symbiotic organisms search algorithm in the short time since its appearance in the field of swarm intelligence optimization techniques, there has been no collective and comprehensive study of the success of its various implementations. As a way forward, this paper provides an overview of the research conducted on the symbiotic organisms search algorithm from its inception to the time of writing, detailing various application scenarios with variants and hybrid implementations, and offering suggestions for future research directions.

    Reliable and energy efficient resource provisioning in cloud computing systems

    Cloud computing has revolutionized the information technology sector by giving computing a service perspective. Cloud services can be accessed through easy-to-use portals by users with no knowledge of the underlying system. To provide such an abstract view, cloud computing systems have to perform many complex operations besides managing a large underlying infrastructure. These complex operations confront service providers with many challenges, such as security, sustainability, reliability, energy consumption and resource management. Among these, reliability and energy consumption are the two key challenges addressed in this thesis because of their conflicting nature. Current solutions focus either on reliability techniques or on energy efficiency methods, but it has been observed that mechanisms providing reliability in cloud computing systems can worsen energy consumption. Adding backup resources and running replicated systems provide strong fault tolerance but also increase energy consumption. Reducing energy consumption by running resources at low power scaling levels, or by reducing the number of active but idle resources such as backups, reduces system reliability. This creates a critical trade-off between the two metrics, which is investigated in this thesis. To address this problem, this thesis presents novel resource management policies which target the provisioning of the best resources in terms of reliability and energy efficiency and allocate them to suitable virtual machines. A mathematical framework showing the interplay between reliability and energy consumption is also proposed, along with a formal method to calculate the finishing time of tasks running in a cloud computing environment affected by independent and correlated failures. The proposed policies adopt various fault tolerance mechanisms while satisfying constraints such as task deadlines and utility values. The thesis also provides a novel failure-aware VM consolidation method, which takes the failure characteristics of resources into consideration before performing VM consolidation. All the proposed resource management methods are evaluated using real failure traces collected from various distributed computing sites. In order to perform the evaluation, a cloud computing framework, 'ReliableCloudSim', capable of simulating failure-prone cloud computing systems was developed. The key research findings and contributions of this thesis are: 1. If emphasis is given only to energy optimization, without considering reliability, in a failure-prone cloud computing environment, the results can be contrary to intuitive expectations: rather than reducing energy consumption, the system ends up consuming more energy due to the losses incurred by failure overheads. 2. When performing VM consolidation in a failure-prone cloud computing environment, a significant improvement in both energy efficiency and reliability can be achieved by considering the failure characteristics of physical resources. 3. By considering the correlated occurrence of failures during resource provisioning and VM allocation, service downtime or interruption is reduced significantly, by 34%, in comparison with environments that assume failures occur independently. Moreover, as measured by our mathematical model, the ratio of reliability to energy consumption is improved by 14%.
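
    Finding 1 can be illustrated with a small, assumed model: with exponential failures and restart from scratch, lowering the power scaling level lengthens a task, increases its failure exposure, and can raise rather than lower the expected energy. The failure rate, power model, and task length below are illustrative assumptions, not the thesis' mathematical framework.

```python
import math

# Sketch of the reliability/energy interplay: a lower frequency reduces power
# but lengthens the task, increasing re-execution overhead under failures.
# Exponential failures with restart-from-scratch and a cubic power-frequency
# relation are standard textbook assumptions, not the thesis' exact model.

FAILURE_RATE = 1 / 3600.0        # one failure per hour on average (assumed)
BASE_TIME = 1800.0               # task length in seconds at full speed (assumed)

def expected_time(work, rate):
    """Expected completion time with restart-on-failure and no checkpoints."""
    return (math.exp(rate * work) - 1.0) / rate

for freq in (1.0, 0.8, 0.6):
    runtime = BASE_TIME / freq               # slower clock -> longer task
    power = 100.0 * freq ** 3 + 50.0         # dynamic + static power (watts)
    energy = power * expected_time(runtime, FAILURE_RATE)
    print(f"freq={freq:.1f}  expected energy = {energy / 1000:.1f} kJ")
```

    Running the sketch shows that the lowest frequency is not the most energy-efficient once failure overheads are accounted for, which is the qualitative point of finding 1.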