
    A multi-cycled sequential memetic computing approach for constrained optimisation

    In this paper, we propose a multi-cycled sequential memetic computing structure for constrained optimisation. The structure is composed of multiple evolutionary cycles. At each cycle, an evolutionary algorithm is treated as an operator and is coupled with a local optimiser. This structure enables useful knowledge to be learned from previous cycles and transferred to facilitate the search in later cycles. Specifically, we propose to apply an estimation of distribution algorithm (EDA) to explore the search space until convergence at each cycle. A local optimiser, DONLP2, is then applied to improve the best solution found by the EDA. A new cycle starts after the local improvement if the computation budget has not been exceeded. In the developed EDA, an adaptive fully-factorized multivariate probability model is proposed. A learning mechanism, implemented as the guided mutation operator, is adopted to learn useful knowledge from previous cycles. The developed algorithm was studied experimentally on the benchmark problems from the CEC 2006 and CEC 2010 competitions. The experimental studies show that the developed probability model exhibits excellent exploration capability and that the learning mechanism can significantly improve search efficiency under certain conditions. A comparison against several well-known algorithms shows the superiority of the developed algorithm in terms of both the number of fitness evaluations consumed and the solution quality.
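
    As a rough illustration of the multi-cycled structure described in this abstract, the sketch below alternates an EDA phase (a fully-factorized Gaussian model sampled until its variances collapse) with a local-refinement phase, and reuses the incumbent best solution through a guided-mutation-style copy operator. DONLP2 is replaced by scipy's SLSQP as a generic stand-in local optimiser; constraint handling beyond simple bounds is omitted, and all parameter values and the convergence test are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch of a multi-cycled EDA + local-search loop (not the paper's
# exact algorithm).  Each cycle: sample a fully-factorized Gaussian model,
# mix in coordinates of the incumbent best (guided-mutation-style), update
# the model from the elite, then polish the best point with SLSQP.
import numpy as np
from scipy.optimize import minimize

def multicycle_eda(f, bounds, pop=50, elite=10, beta=0.3, budget=20000):
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    mean, std = (lo + hi) / 2, (hi - lo) / 2
    best_x, best_f, evals = None, np.inf, 0
    while evals < budget:
        # One evolutionary cycle: sample, select, update the factorized model.
        while std.max() > 1e-6 and evals < budget:
            X = np.clip(np.random.normal(mean, std, (pop, dim)), lo, hi)
            if best_x is not None:                       # guided mutation: copy some
                mask = np.random.rand(pop, dim) < beta   # coordinates from the incumbent
                X[mask] = np.broadcast_to(best_x, X.shape)[mask]
            fit = np.apply_along_axis(f, 1, X)
            evals += pop
            top = X[np.argsort(fit)[:elite]]
            mean, std = top.mean(axis=0), top.std(axis=0) + 1e-12
            if fit.min() < best_f:
                best_f, best_x = fit.min(), X[np.argmin(fit)]
        # Local improvement of the cycle's best solution (stand-in for DONLP2).
        res = minimize(f, best_x, method="SLSQP", bounds=bounds)
        evals += res.nfev
        if res.fun < best_f:
            best_f, best_x = res.fun, res.x
        std = (hi - lo) / 2                              # restart exploration next cycle
    return best_x, best_f
```

    Called, for example, as multicycle_eda(lambda x: float(np.sum(x**2)), [(-5.0, 5.0)] * 10), the sketch returns the best point found and its objective value within the hypothetical evaluation budget.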

    An island based hybrid evolutionary algorithm for optimization

    This is a post-print version of the article - Copyright @ 2008 Springer-Verlag. Evolutionary computation has become an important problem-solving methodology among the set of search and optimization techniques. Recently, more and more different evolutionary techniques have been developed, especially hybrid evolutionary algorithms. This paper proposes an island based hybrid evolutionary algorithm (IHEA) for optimization, which is based on Particle Swarm Optimization (PSO), Fast Evolutionary Programming (FEP), and the Estimation of Distribution Algorithm (EDA). Within IHEA, an island model is designed to cooperatively search for the global optima in the search space. By combining the strengths of the three component algorithms, IHEA greatly improves their optimization performance. Experimental results demonstrate that IHEA outperforms all three component algorithms on the test problems. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/1.
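
    The following is a minimal island-model skeleton in the spirit of IHEA: three islands evolve separate sub-populations with different operators and periodically exchange their best individuals over a ring topology. The per-island operators are deliberately simplified stand-ins for PSO, FEP, and EDA, and the migration policy is an assumption; the paper's exact operators and cooperation scheme are not reproduced here.

```python
# Simplified island-model sketch (stand-in operators, assumed migration policy).
import numpy as np

def pso_like(P, fit, gbest):
    # Simplified PSO-style move: drift each individual toward the global best.
    return P + 0.7 * np.random.rand(*P.shape) * (gbest - P)

def fep_like(P, fit, gbest):
    # FEP-style Cauchy mutation (long-tailed jumps).
    return P + 0.1 * np.random.standard_cauchy(P.shape)

def eda_like(P, fit, gbest):
    # Univariate Gaussian EDA: resample around the better half of the island.
    top = P[np.argsort(fit)[: len(P) // 2]]
    return np.random.normal(top.mean(0), top.std(0) + 1e-12, P.shape)

def ihea_sketch(f, dim=10, n=20, gens=200, migrate_every=10, lb=-5.0, ub=5.0):
    islands = [np.random.uniform(lb, ub, (n, dim)) for _ in range(3)]
    ops = [pso_like, fep_like, eda_like]
    gbest, gbest_f = None, np.inf
    for g in range(gens):
        for P, op in zip(islands, ops):
            fit = np.apply_along_axis(f, 1, P)
            if fit.min() < gbest_f:
                gbest_f, gbest = fit.min(), P[np.argmin(fit)].copy()
            cand = np.clip(op(P, fit, gbest), lb, ub)
            cand_fit = np.apply_along_axis(f, 1, cand)
            better = cand_fit < fit            # greedy survivor selection per island
            P[better] = cand[better]
        if g % migrate_every == 0:             # ring migration of each island's best
            bests = [P[np.argmin(np.apply_along_axis(f, 1, P))].copy() for P in islands]
            for i, P in enumerate(islands):
                P[np.random.randint(n)] = bests[(i - 1) % len(islands)]
    return gbest, gbest_f
```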

    Bat Algorithm: Literature Review and Applications

    The bat algorithm (BA) is a bio-inspired algorithm developed by Yang in 2010 and has been found to be very efficient; as a result, the literature on it has expanded significantly in the last three years. This paper provides a timely review of the bat algorithm and its new variants. A wide range of diverse applications and case studies are also reviewed and briefly summarized here, and further research topics are discussed.
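
    For reference, the core update rules of the standard bat algorithm (frequency, velocity, and position updates, plus the loudness- and pulse-rate-driven local random walk around the current best) can be sketched as follows; the parameter values are common defaults rather than tuned settings.

```python
# Sketch of the standard bat algorithm (minimisation, box-constrained).
import numpy as np

def bat_algorithm(f, dim=10, n=30, iters=1000, fmin=0.0, fmax=2.0,
                  alpha=0.9, gamma=0.9, lb=-5.0, ub=5.0):
    X = np.random.uniform(lb, ub, (n, dim))
    V = np.zeros((n, dim))
    A = np.ones(n)                       # loudness
    r0 = np.full(n, 0.5)                 # initial pulse emission rate
    r = r0.copy()
    fit = np.apply_along_axis(f, 1, X)
    best, best_f = X[np.argmin(fit)].copy(), fit.min()
    for t in range(1, iters + 1):
        freq = fmin + (fmax - fmin) * np.random.rand(n)
        V += (X - best) * freq[:, None]                 # velocity update toward the best
        cand = np.clip(X + V, lb, ub)
        walk = np.random.rand(n) > r                    # local random walk around the best
        cand[walk] = np.clip(best + 0.01 * A.mean() * np.random.randn(walk.sum(), dim), lb, ub)
        cand_fit = np.apply_along_axis(f, 1, cand)
        accept = (cand_fit <= fit) & (np.random.rand(n) < A)
        X[accept], fit[accept] = cand[accept], cand_fit[accept]
        A[accept] *= alpha                              # loudness decreases
        r[accept] = r0[accept] * (1 - np.exp(-gamma * t))  # pulse rate increases
        if fit.min() < best_f:
            best_f, best = fit.min(), X[np.argmin(fit)].copy()
    return best, best_f
```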

    Comparative study on the application of evolutionary optimization techniques to orbit transfer maneuvers

    Orbit transfer maneuvers are considered here as benchmark cases for comparing the performance of different optimization techniques within the framework of direct methods. Two different classes of evolutionary algorithms, a conventional genetic algorithm and an estimation of distribution method, are compared in terms of performance indices statistically evaluated over a prescribed number of runs. At the same time, two different types of problem representation are considered: one based on orbit propagation and a second based on the solution of Lambert's problem for direct transfers. In this way it is possible to highlight how the problem representation affects the capabilities of the considered numerical approaches.

    Task Runtime Prediction in Scientific Workflows Using an Online Incremental Learning Approach

    Many algorithms in workflow scheduling and resource provisioning rely on the performance estimation of tasks to produce a scheduling plan. A profiler that is capable of modeling the execution of tasks and predicting their runtime accurately therefore becomes an essential part of any Workflow Management System (WMS). With the emergence of multi-tenant Workflow as a Service (WaaS) platforms that use clouds for deploying scientific workflows, task runtime prediction becomes more challenging because it requires the processing of a significant amount of data in a near real-time scenario while dealing with the performance variability of cloud resources. Hence, relying on methods such as profiling tasks' execution data using basic statistical descriptions (e.g., mean, standard deviation) or batch offline regression techniques to estimate the runtime may not be suitable for such environments. In this paper, we propose an online incremental learning approach to predict the runtime of tasks in scientific workflows in clouds. To improve the performance of the predictions, we harness fine-grained resource monitoring data in the form of time-series records of CPU utilization, memory usage, and I/O activities that reflect the unique characteristics of a task's execution. We compare our solution to a state-of-the-art approach that exploits the resource monitoring data based on a regression machine learning technique. In our experiments, the proposed strategy improves prediction performance, in terms of error, by up to 29.89% compared to the state-of-the-art solution.
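
    A minimal sketch of the online, incremental idea is given below: each completed task yields a feature vector built from its monitoring time series, and the regressor is updated with partial_fit instead of being retrained in batch. The feature set, the use of scikit-learn's SGDRegressor, and the class interface are assumptions for illustration, not the paper's actual pipeline.

```python
# Hedged sketch of online/incremental runtime prediction from monitoring data.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

class OnlineRuntimePredictor:
    def __init__(self):
        self.scaler = StandardScaler()
        self.model = SGDRegressor(loss="huber")  # robust to runtime outliers
        self.seen = 0

    def _features(self, record):
        # record: dict with (illustrative) time-series monitoring data for one task run.
        cpu, mem, io = (np.asarray(record[k]) for k in ("cpu", "mem", "io"))
        return np.array([[record["input_mb"],
                          cpu.mean(), cpu.std(),
                          mem.max(), io.sum()]])

    def update(self, record, runtime_seconds):
        # Incremental update with one finished task; no batch retraining.
        x = self._features(record)
        self.scaler.partial_fit(x)
        self.model.partial_fit(self.scaler.transform(x), [runtime_seconds])
        self.seen += 1

    def predict(self, record):
        if self.seen == 0:
            return None  # no data yet; a WMS would fall back to a default estimate
        return float(self.model.predict(self.scaler.transform(self._features(record)))[0])
```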

    Dual population-based incremental learning for problem optimization in dynamic environments

    Copyright @ 2003 Asia Pacific Symposium on Intelligent and Evolutionary Systems. In recent years there has been growing interest in research on evolutionary algorithms for dynamic optimization problems, since real-world problems are usually dynamic, which presents serious challenges to traditional evolutionary algorithms. In this paper, we investigate the application of Population-Based Incremental Learning (PBIL) algorithms, a class of evolutionary algorithms, to problem optimization under dynamic environments. Inspired by the complementarity mechanism in nature, we propose a Dual PBIL that operates on two probability vectors that are dual to each other with respect to the central point in the search space. Using a dynamic-problem generating technique, we generate a series of dynamic knapsack problems from a randomly generated stationary knapsack problem and carry out an experimental study comparing the performance of the investigated PBILs and one traditional genetic algorithm. Experimental results show that the introduction of dualism into PBIL improves its adaptability under dynamic environments, especially when the environment is subject to significant changes in the genotype space.
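
    A minimal sketch of the dual-vector idea, assuming a binary encoding and a maximisation problem such as the knapsack: one half of the population is sampled from the primary probability vector and the other half from its dual (mirrored about the central point 0.5), and the primary vector is then learned toward the best sampled individual as in standard PBIL. The sampling split and learning rate are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of a dual-vector PBIL variant (binary, maximisation).
import numpy as np

def dual_pbil(f, dim, pop=60, lr=0.1, gens=200):
    p = np.full(dim, 0.5)                       # primary probability vector
    best_x, best_f = None, -np.inf
    for _ in range(gens):
        dual = 1.0 - p                          # dual vector w.r.t. the centre 0.5
        half = pop // 2
        X = np.vstack([
            (np.random.rand(half, dim) < p).astype(int),        # samples from primary
            (np.random.rand(pop - half, dim) < dual).astype(int) # samples from dual
        ])
        fit = np.apply_along_axis(f, 1, X)
        elite = X[np.argmax(fit)]
        p = (1 - lr) * p + lr * elite           # learn toward the best individual
        if fit.max() > best_f:
            best_f, best_x = fit.max(), elite
    return best_x, best_f
```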