
    Enhanced Multi-Strategy Particle Swarm Optimization for Constrained Problems with an Evolutionary-Strategies-Based Unfeasible Local Search Operator

    Nowadays, optimization problems are commonly solved with meta-heuristic algorithms based on stochastic search strategies inspired by natural phenomena. Despite their capability to handle complex problems, the No Free Lunch Theorem of Wolpert and Macready (1997) states that no single algorithm is ideal for every kind of problem. This limitation stems from the heuristic nature of these algorithms, which are not rigorously mathematics-based and for which convergence is not guaranteed. In the present study, a variant of the well-known swarm-based algorithm, Particle Swarm Optimization (PSO), is developed to solve constrained problems with an approach different from the classical penalty function technique. State-of-the-art improvements and suggestions (inertia weight, neighbourhood) are also adopted in the current implementation. Furthermore, a new local search operator is introduced to help locate the feasible region in challenging optimization problems. This operator is based on hybridization with another milestone meta-heuristic algorithm, the Evolution Strategy (ES). Its self-adaptive variant is adopted because it requires no additional arbitrary parameters to be tuned: the parameters governing the ES are determined automatically during the optimization process. The enhanced multi-strategy PSO is finally tested on benchmark constrained numerical problems from the literature, and the results are compared, in terms of the optimal solutions found, with two other PSO implementations that rely on a classic penalty-function approach for constraint handling.
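    For orientation, the sketch below shows the kind of classic penalty-function PSO baseline that the paper compares against: a global-best velocity update with inertia weight, plus a static penalty on inequality-constraint violations. This is an illustrative reconstruction, not the authors' code; the function names, parameter values and the small example problem are assumptions.

```python
import numpy as np

def penalty_pso(objective, constraints, bounds, n_particles=30, n_iters=200,
                w=0.7, c1=1.5, c2=1.5, penalty=1e6, seed=0):
    """Global-best PSO with inertia weight and a static penalty for
    inequality constraints g(x) <= 0 (illustrative baseline only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size

    def penalized(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) + penalty * violation

    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([penalized(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([penalized(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, penalized(gbest)

# Toy example: minimize the sphere function subject to x0 + x1 >= 1,
# written in the g(x) <= 0 form as 1 - x0 - x1 <= 0.
if __name__ == "__main__":
    obj = lambda x: float(np.sum(x ** 2))
    cons = [lambda x: 1.0 - x[0] - x[1]]
    print(penalty_pso(obj, cons, (np.full(2, -5.0), np.full(2, 5.0))))
```

    The paper's contribution replaces the penalty term with a different constraint-handling scheme and adds a self-adaptive ES-based local search operator, neither of which is shown in this sketch.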

    EDA++: Estimation of Distribution Algorithms with Feasibility Conserving Mechanisms for Constrained Continuous Optimization

    Handling non-linear constraints in continuous optimization is challenging, and finding a feasible solution is usually a difficult task. In the past few decades, various techniques have been developed to deal with linear and non-linear constraints; however, reaching feasible solutions has remained difficult for most of these methods. In this paper, we adopt the framework of Estimation of Distribution Algorithms (EDAs) and propose a new algorithm (EDA++) equipped with mechanisms to deal with non-linear constraints. These mechanisms are associated with different stages of the EDA, including seeding, learning and mapping. It is shown that, besides increasing the quality of the solutions in terms of objective values, the feasibility of the final solutions is guaranteed if an initial population of feasible solutions is seeded to the algorithm. The EDA with the proposed mechanisms is applied to two suites of benchmark problems for constrained continuous optimization, and its performance is compared with state-of-the-art algorithms and constraint handling methods. The conducted experiments confirm the speed, robustness and efficiency of the proposed algorithm in tackling various problems with linear and non-linear constraints.
    Funding: La Caixa Foundation
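    The paper's specific seeding, learning and mapping mechanisms are not reproduced here, but the sketch below illustrates the general idea of a feasibility-conserving EDA under simplifying assumptions: the population is seeded with feasible solutions, a Gaussian model is learned from feasible elites, and infeasible samples are simply discarded (a rejection-based stand-in for the mapping stage). All names and parameters are illustrative.

```python
import numpy as np

def feasible_gaussian_eda(objective, is_feasible, init_pop, n_iters=100,
                          elite_frac=0.3, seed=0):
    """Illustrative Gaussian EDA whose model only ever sees feasible points.
    `init_pop` is assumed to contain feasible solutions (the 'seeding' stage)."""
    rng = np.random.default_rng(seed)
    pop = np.asarray(init_pop, dtype=float)
    n, dim = pop.shape
    n_elite = max(2, int(elite_frac * n))

    for _ in range(n_iters):
        fitness = np.apply_along_axis(objective, 1, pop)
        elite = pop[np.argsort(fitness)[:n_elite]]              # best feasible points
        mu = elite.mean(axis=0)                                  # 'learning' stage:
        cov = np.cov(elite, rowvar=False) + 1e-9 * np.eye(dim)  # fit a Gaussian model
        cand = rng.multivariate_normal(mu, cov, size=5 * n)
        keep = np.fromiter((is_feasible(x) for x in cand), bool)
        feas = cand[keep]                                        # discard infeasible samples
        while len(feas) < n:                                     # pad with known-feasible elites
            feas = np.vstack([feas, elite])
        pop = feas[:n]

    fitness = np.apply_along_axis(objective, 1, pop)
    return pop[np.argmin(fitness)]
```

    Because the model is only ever fitted to feasible points and infeasible samples never survive, the final population stays feasible whenever the seed population is, which mirrors the guarantee stated in the abstract.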

    Evolutionary framework with reinforcement learning-based mutation adaptation

    Although several multi-operator and multi-method approaches for solving optimization problems have been proposed, their performance is not consistent across a wide range of problems. Moreover, ensuring the appropriate selection of algorithms and operators can be inefficient, since their design is undertaken mainly through trial and error. This research proposes an improved optimization framework that combines the benefits of multiple algorithms, namely a multi-operator differential evolution algorithm and a covariance matrix adaptation evolution strategy (CMA-ES). In the former, reinforcement learning is used to automatically choose the best differential evolution operator. To judge the performance of the proposed framework, three benchmark sets of bound-constrained optimization problems (73 problems) with 10, 30 and 50 dimensions are solved. The proposed algorithm is further tested on 100-dimensional problems taken from the CEC2014 and CEC2017 benchmarks, as well as on a real-world application data set. Several experiments are designed to analyze the effects of the different components of the framework, and the best variant is compared with a number of state-of-the-art algorithms. The experimental results show that the proposed algorithm outperforms all the others considered.
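    As a rough illustration of reinforcement-learning-based operator selection in differential evolution, the sketch below uses an epsilon-greedy bandit to choose between two standard DE mutation strategies ('rand/1' and 'best/1'), rewarding an operator by the fraction of offspring that improve on their parents. This is a simplified stand-in for the proposed framework, which also includes a CMA-ES component; the operator set, reward definition and parameters are assumptions.

```python
import numpy as np

def bandit_de(objective, bounds, n_pop=40, n_gens=300, F=0.5, CR=0.9,
              eps=0.1, seed=0):
    """DE with epsilon-greedy selection between 'rand/1' and 'best/1' mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(n_pop, dim))
    fit = np.apply_along_axis(objective, 1, pop)
    q = np.zeros(2)        # estimated reward of each mutation operator
    counts = np.zeros(2)

    for _ in range(n_gens):
        op = rng.integers(2) if rng.random() < eps else int(np.argmax(q))
        best = pop[np.argmin(fit)]
        improved = 0
        for i in range(n_pop):
            a, b, c = pop[rng.choice(n_pop, 3, replace=False)]
            base = a if op == 0 else best                    # rand/1 vs best/1 base vector
            mutant = np.clip(base + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                  # force at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial < fit[i]:                             # greedy DE selection
                pop[i], fit[i] = trial, f_trial
                improved += 1
        counts[op] += 1
        q[op] += (improved / n_pop - q[op]) / counts[op]     # incremental mean reward
    return pop[np.argmin(fit)], float(fit.min())
```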

    Derivative-Free Optimization

    In many engineering applications it is common to find optimization problems where the cost function and/or constraints require complex simulations. Though it is often, but not always, theoretically possible in these cases to extract derivative information efficiently, the associated implementation procedures are typically non-trivial and time-consuming (e.g., adjoint-based methodologies). Derivative-free (non-invasive, black-box) optimization has lately received considerable attention within the optimization community, including the establishment of solid mathematical foundations for many of the methods used in practice. In this chapter we describe some of the most prominent derivative-free optimization techniques. Our presentation concentrates first on local optimization, covering pattern search techniques and other methods based on interpolation/approximation; we then survey a number of global search methodologies and finally give guidelines on constraint handling approaches.
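    As a concrete instance of the pattern search methods mentioned above, the following compass-search sketch polls a step in both directions along each coordinate, moves to any improving point, and shrinks the step when no poll point improves; no derivatives are ever evaluated. It is a textbook-style illustration, not code from the chapter.

```python
import numpy as np

def compass_search(objective, x0, step=1.0, shrink=0.5, tol=1e-6, max_evals=10_000):
    """Minimal derivative-free compass (coordinate) search."""
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = objective(trial)
                evals += 1
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
                    break
            if improved:
                break
        if not improved:
            step *= shrink   # no improving poll point: refine the step size
    return x, fx

# Example on a quadratic treated as a black box
if __name__ == "__main__":
    f = lambda x: float((x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2)
    print(compass_search(f, np.zeros(2)))
```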

    Investigating hybrids of evolution and learning for real-parameter optimization

    In recent years, increasingly advanced techniques have been developed for hybridizing evolution and learning, which means that more applications can benefit from this progress. One example of these advanced techniques is the Learnable Evolution Model (LEM), which uses learning to guide the general evolutionary search. Despite this trend and the progress in LEM, many ideas and approaches still deserve further investigation and testing. To this end, this thesis develops a number of new algorithms that combine learning algorithms with evolution in different ways. With these developments, we aim to understand the effects of, and relations between, evolution and learning, and to achieve better performance in solving complex problems. The machine learning algorithms combined with the standard Genetic Algorithm (GA) are the supervised learning method k-nearest-neighbors (KNN), the Entropy-Based Discretization (ED) method, and the decision tree learning algorithm ID3. We test these algorithms on various real-parameter function optimization problems, in particular the functions of the CEC 2005 special session on real-parameter function optimization. Additionally, a medical cancer chemotherapy treatment problem is solved by some of our hybrid algorithms. Their performance is compared with standard genetic algorithms and other well-known contemporary evolution-and-learning hybrids, including the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and variants of Estimation of Distribution Algorithms (EDAs). Several important results emerge from these experiments. Among them, we find that even very simple learning methods, when properly hybridized with the evolutionary procedure, can provide significant performance improvements; and when more complex learning algorithms are incorporated, the resulting algorithms are very promising and compete well against state-of-the-art hybrid algorithms, both on well-defined real-parameter function optimization problems and on a practical, evaluation-expensive problem.
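    To make the idea of learning-guided evolution concrete, the toy sketch below pre-screens candidate offspring with a k-nearest-neighbors model fit to already-evaluated individuals labelled as 'good' (better than the median) or 'bad', so that only the candidate predicted most promising is actually evaluated. This is a loose illustration in the LEM spirit; the thesis's hybrids with KNN, entropy-based discretization and ID3 differ in detail, and every name and parameter here is an assumption.

```python
import numpy as np

def knn_guided_ga(objective, bounds, n_pop=40, n_gens=100, n_cand=3,
                  k=5, mut_sigma=0.1, seed=0):
    """Toy GA where a KNN model pre-screens offspring before evaluation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(n_pop, dim))
    fit = np.apply_along_axis(objective, 1, pop)
    archive_x, archive_f = [pop.copy()], [fit.copy()]

    def knn_score(x, X, labels):
        d = np.linalg.norm(X - x, axis=1)
        return labels[np.argsort(d)[:k]].mean()          # fraction of 'good' neighbours

    for _ in range(n_gens):
        X = np.vstack(archive_x)
        F = np.concatenate(archive_f)
        labels = (F <= np.median(F)).astype(float)        # 'good' = better than median
        new_pop = np.empty_like(pop)
        for i in range(n_pop):
            parents = pop[rng.choice(n_pop, 2, replace=False)]
            cands = []
            for _ in range(n_cand):                        # a few blend-and-mutate children
                alpha = rng.random(dim)
                child = alpha * parents[0] + (1 - alpha) * parents[1]
                child += rng.normal(0, mut_sigma, dim) * (hi - lo)
                cands.append(np.clip(child, lo, hi))
            # evaluate only the candidate the learned model predicts to be best
            new_pop[i] = max(cands, key=lambda c: knn_score(c, X, labels))
        new_fit = np.apply_along_axis(objective, 1, new_pop)
        keep = new_fit < fit                               # elitist slot-wise replacement
        pop[keep], fit[keep] = new_pop[keep], new_fit[keep]
        archive_x.append(new_pop.copy())
        archive_f.append(new_fit.copy())
    return pop[np.argmin(fit)], float(fit.min())
```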

    Evaluating MAP-Elites on Constrained Optimization Problems

    Constrained optimization problems are often characterized by multiple constraints that, in practice, must be satisfied with different tolerance levels. While some constraints are hard and as such must be satisfied with zero tolerance, others may be soft, such that non-zero violations are acceptable. Here, we evaluate the applicability of MAP-Elites to "illuminate" constrained search spaces by mapping them into feature spaces where each feature corresponds to a different constraint. On the one hand, MAP-Elites implicitly preserves diversity, thus allowing a good exploration of the search space. On the other hand, it provides an effective visualization that facilitates a better understanding of how constraint violations correlate with the objective function. We demonstrate the feasibility of this approach on a large set of benchmark problems, in various dimensionalities, and with different algorithmic configurations. As expected, the numerical results show that a basic version of MAP-Elites cannot compete on all problems (especially those with equality constraints) with state-of-the-art algorithms that use gradient information or advanced constraint handling techniques. Nevertheless, it shows greater potential for finding trade-offs between constraint violations and objective values and for providing new problem information. As such, it could be used in the future as an effective building block for designing new constrained optimization algorithms.
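    A minimal sketch of the mapping described above, with each feature dimension given by the binned violation of one inequality constraint g_i(x) <= 0, might look as follows. The binning scheme, mutation operator and parameters are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def constraint_map_elites(objective, constraints, bounds, n_bins=10,
                          n_evals=20_000, mut_sigma=0.1, seed=0):
    """MAP-Elites whose archive cells are indexed by binned constraint violations."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    archive = {}                                     # cell index -> (solution, objective)

    def cell(x):
        viol = np.array([max(0.0, g(x)) for g in constraints])
        # squash each violation into [0, 1) and bin it; bin 0 means the constraint holds
        return tuple(np.minimum((viol / (1.0 + viol) * n_bins).astype(int), n_bins - 1))

    def add(x):
        c, f = cell(x), objective(x)
        if c not in archive or f < archive[c][1]:    # keep the best objective per cell
            archive[c] = (x, f)

    for _ in range(100):                             # random initialisation
        add(rng.uniform(lo, hi))
    for _ in range(n_evals):                         # mutate a random elite, try to place it
        parent = archive[list(archive)[rng.integers(len(archive))]][0]
        child = np.clip(parent + rng.normal(0, mut_sigma, dim) * (hi - lo), lo, hi)
        add(child)
    return archive
```

    In this sketch, fully feasible elites accumulate in the all-zeros cell, while the remaining cells record the best objective value attainable at each combination of violation levels, which is the kind of violation-versus-objective trade-off information the abstract refers to.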