
    Adaptive Ranking Based Constraint Handling for Explicitly Constrained Black-Box Optimization

    A novel explicit constraint handling technique for the covariance matrix adaptation evolution strategy (CMA-ES) is proposed. The proposed constraint handling exhibits two invariance properties: invariance to arbitrary element-wise increasing transformations of the objective and constraint functions, and invariance to arbitrary affine transformations of the search space. The technique virtually transforms a constrained optimization problem into an unconstrained one by considering an adaptive weighted sum of the ranking of the objective function values and the ranking of the constraint violations, where each violation is measured by the Mahalanobis distance between a candidate solution and its projection onto the boundary of the constraints. Simulation results show that the CMA-ES with the proposed constraint handling exhibits affine invariance and performs similarly to the CMA-ES on the unconstrained counterparts.
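    To make the mechanism concrete, below is a minimal Python sketch of ranking-based constraint handling for linear constraints a_j^T x <= b_j. The fixed `weight`, the function names, and the linear-constraint restriction are illustrative assumptions; in the paper the weight is adapted online and the constraints need only be explicit.

```python
# Minimal sketch, not the paper's full adaptive scheme: `weight` is held
# fixed here, whereas the paper adapts it online.
import numpy as np

def mahalanobis_violation(x, a, b, C):
    """Mahalanobis distance (metric C^{-1}) from x to its projection onto
    the boundary a^T x = b; zero when the constraint a^T x <= b holds."""
    slack = float(a @ x) - b
    if slack <= 0.0:
        return 0.0
    return slack / np.sqrt(float(a @ C @ a))

def ranked_scores(candidates, f, constraints, C, weight=1.0):
    """Weighted sum of objective ranks and violation ranks (lower is
    better); only ranks enter, so any increasing transformation of f or
    of the violations leaves the ordering unchanged."""
    fvals = np.array([f(x) for x in candidates])
    viols = np.array([sum(mahalanobis_violation(x, a, b, C)
                          for a, b in constraints) for x in candidates])
    rank_f = np.argsort(np.argsort(fvals))   # 0 = best objective
    rank_v = np.argsort(np.argsort(viols))   # 0 = least violated
    return rank_f + weight * rank_v
```

    Because only ranks enter the combined score, the ordering is unchanged by any increasing transformation of the objective or of the violation measure, which is the first invariance the abstract claims; measuring violations in the Mahalanobis metric of the current covariance supplies the affine invariance.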

    Optimization. An attempt at describing the State of the Art

    This paper is an attempt at describing the state of the art of the vast field of continuous optimization. We survey deterministic and stochastic methods, as well as hybrid approaches, in their application to single-objective and multiobjective optimization. We study the parameters of optimization algorithms and possibilities for tuning them. Finally, we discuss several methods for using approximate models for computationally expensive problems.

    Revisiting Implicit and Explicit Averaging for Noisy Optimization

    Explicit and implicit averaging are two well-known strategies for noisy optimization. Both strategies can counteract the disruptive effect of noise; however, a critical question remains: which one is more efficient? This question has been raised in many studies, with conflicting preferences and, in some cases, conflicting findings. Nevertheless, theoretical findings on the noisy sphere problem with additive Gaussian noise support the superiority of implicit averaging, which may have had a strong impact on the preference for implicit averaging in more recent evolutionary methods for noisy optimization. This study speculates that the analytically supported superiority of implicit averaging relies on specific features of the noisy sphere problem with additive noise, which cannot be generalized to other problems. It enumerates these features and designs controlled numerical experiments to investigate this potential reliance. Each experiment gradually suppresses one specific feature, and the progress rate is numerically calculated for different values of the sample size given a fixed evaluation budget. Our empirical results indicate that, for a wide range of noise strengths and evaluation budgets per iteration, the more these specific features are suppressed, the more the optimal averaging strategy deviates from implicit toward explicit averaging, which confirms our speculation. Consequently, the optimal sample size, which is regarded as the tradeoff between implicit and explicit averaging, depends on the problem characteristics and should be learned during optimization for maximum efficiency.
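    As a concrete illustration of the tradeoff studied here, the sketch below spends a fixed per-iteration budget either on many candidates evaluated once (implicit averaging) or on fewer candidates re-evaluated and averaged (explicit averaging). The noisy sphere, the sampler, and all names are illustrative, not the paper's experimental protocol.

```python
# Illustrative sketch of the implicit/explicit averaging tradeoff under a
# fixed evaluation budget; not the paper's experimental setup.
import numpy as np

rng = np.random.default_rng(0)

def noisy_sphere(x, noise_std=1.0):
    """Sphere function with additive Gaussian noise."""
    return float(x @ x) + rng.normal(0.0, noise_std)

def sample_and_evaluate(mean, step, budget, k):
    """Spend `budget` evaluations on lambda = budget // k candidates,
    each re-evaluated k times and averaged:
    k = 1      -> pure implicit averaging (largest population),
    k = budget -> pure explicit averaging (one heavily averaged point)."""
    lam = budget // k
    candidates = [mean + step * rng.standard_normal(mean.size)
                  for _ in range(lam)]
    fitness = [float(np.mean([noisy_sphere(x) for _ in range(k)]))
               for x in candidates]
    return candidates, fitness
```

    The sample size k is exactly the knob the abstract refers to: explicit averaging cuts the noise standard deviation by a factor of sqrt(k), while implicit averaging invests the same budget in a larger population.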

    Preventing premature convergence and proving the optimality in evolutionary algorithms

    Evolutionary Algorithms (EA) usually carry out an efficient exploration of the search space, but often get trapped in local minima and do not prove the optimality of the solution. Interval-based techniques, on the other hand, yield a numerical proof of optimality of the solution. However, they may fail to converge within a reasonable time due to their inability to quickly compute a good approximation of the global minimum and their exponential complexity. The contribution of this paper is a hybrid algorithm called Charibde in which a particular EA, Differential Evolution, cooperates with a Branch and Bound algorithm endowed with interval propagation techniques. It prevents premature convergence toward local optima and outperforms both deterministic and stochastic existing approaches. We demonstrate its efficiency on a benchmark of highly multimodal problems, for which we provide previously unknown global minima and certification of optimality.
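    The cooperation principle can be sketched in a few lines: the evolutionary search supplies an incumbent upper bound, and an interval branch-and-bound discards every box whose certified lower bound cannot beat it. The 1-D objective, the naive interval extension, and all names below are illustrative assumptions; Charibde itself couples Differential Evolution with interval propagation rather than this toy bound.

```python
# Illustrative interval branch-and-bound pruned by an EA incumbent; the
# hand-written interval extension is for f(x) = x**4 - 3*x**2 only.
import heapq

def f(x):
    return x**4 - 3*x**2

def f_lower_bound(lo, hi):
    """Certified lower bound of f over [lo, hi] via naive interval
    arithmetic: bound x**4 and -3*x**2 independently and add."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo*lo, hi*hi)
    sq_hi = max(lo*lo, hi*hi)
    return sq_lo**2 - 3*sq_hi

def branch_and_bound(lo, hi, incumbent, tol=1e-6):
    """Certify the global minimum on [lo, hi]; `incumbent` is the best
    value found so far by the evolutionary search."""
    best = incumbent
    heap = [(f_lower_bound(lo, hi), lo, hi)]
    while heap:
        bound, a, b = heapq.heappop(heap)
        if bound > best - tol:
            continue  # box cannot contain a better minimum: pruned
        mid = 0.5 * (a + b)
        best = min(best, f(mid))  # midpoint sample improves the incumbent
        if b - a > tol:
            heapq.heappush(heap, (f_lower_bound(a, mid), a, mid))
            heapq.heappush(heap, (f_lower_bound(mid, b), mid, b))
    return best

# e.g. seed with a DE-quality incumbent; true minimum is f(±√1.5) = -2.25
print(branch_and_bound(-2.0, 2.0, incumbent=f(1.2)))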

    Regularization-free multicriteria optimization of polymer viscoelasticity model

    This paper introduces a multiobjective optimization (MOP) method for nonlinear regression analysis which is capable of simultaneously minimizing the model order and estimating parameter values without the need for exogenous regularization constraints. The method is introduced through a case study in polymer rheology modeling. Prevailing approaches in this field tackle conflicting optimization goals as a monobjective problem by aggregating individual regression errors on each dependent variable into a single weighted scalarization function. In addition, their supporting deterministic numerical methods often rely on assumptions which are extrinsic to the problem, such as regularization constants and restrictions on parameter distributions, thereby introducing methodology-inherent biases into the model. Our proposed non-deterministic MOP strategy, on the other hand, aims at finding the Pareto front of all optimal solutions with respect not only to individual regression errors, but also to the number of parameters needed to fit the data, automatically reducing the model order. The evolutionary computation approach does not require arbitrary constraints on objective weights, regularization parameters or other exogenous assumptions to handle the ill-posed inverse problem. The article discusses the method's rationale, implementation, simulation experiments, and comparisons with other methods, with experimental evidence that it can outperform state-of-the-art techniques. While the discussion focuses on the case study, the introduced method is general and immediately applicable to other problem domains.

    This work is funded by National Funds through FCT - Portuguese Foundation for Science and Technology, References UIDB/05256/2020 and UIDP/05256/2020, and the European project MSCA-RISE-2015, NEWEX, Reference 734205.
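    A minimal sketch of the bi-objective formulation is shown below: each candidate parameter vector is scored by its regression error and by its model order, counted as the number of parameters whose magnitude exceeds a pruning threshold, so any Pareto-based evolutionary optimizer can trade the two off without a regularization term. The names, the threshold, and the least-squares error are illustrative assumptions, not the paper's exact objectives.

```python
# Illustrative bi-objective score for Pareto-based model fitting; the
# pruning threshold `eps` and all names are assumptions.
import numpy as np

def objectives(params, model, x_data, y_data, eps=1e-8):
    """Return (fit_error, model_order) for one candidate.
    Minimizing model_order shrinks the model without a regularizer:
    parameters with |value| <= eps are treated as pruned."""
    residuals = y_data - model(params, x_data)
    fit_error = float(residuals @ residuals)
    model_order = int(np.count_nonzero(np.abs(params) > eps))
    return fit_error, model_order
```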

    Digital Filter Design Using Improved Artificial Bee Colony Algorithms

    Digital filters are often used in digital signal processing applications. The design objective of a digital filter is to find the optimal set of filter coefficients which satisfies the desired specifications of magnitude and group delay responses. Evolutionary algorithms are population-based meta-heuristic algorithms inspired by the biological behaviors of species. Compared to gradient-based optimization algorithms such as steepest descent and Newton-like methods, these bio-inspired algorithms have the advantages of not getting stuck at local optima and being independent of the starting point in the solution space. The limitations of evolutionary algorithms include the presence of control parameters, problem-specific tuning procedures, premature convergence and a slower convergence rate. The artificial bee colony (ABC) algorithm is a swarm-based meta-heuristic search algorithm inspired by the foraging behavior of honey bee colonies, with the benefit of relatively few control parameters. In its original form, the ABC algorithm has certain limitations such as a low convergence rate and an insufficient balance between exploration and exploitation in the search equations. In this dissertation, an ABC-AMR algorithm is proposed by incorporating an adaptive modification rate (AMR) into the original ABC algorithm to increase the convergence rate by adjusting the balance between exploration and exploitation through an adaptive determination of the number of parameters to be updated in every iteration. A constrained ABC-AMR algorithm is also developed for solving constrained optimization problems.

    There are many real-world problems requiring simultaneous optimization of more than one conflicting objective. Multiobjective (MO) optimization produces a set of feasible solutions called the Pareto front instead of a single optimum solution. For multiobjective optimization, if a decision maker's preferences can be incorporated during the optimization process, the search can be confined to the region of interest instead of covering the entire region. In this dissertation, two algorithms are developed for such incorporation. The first is a reference-point-based MOABC algorithm in which a decision maker's preferences are included in the optimization process as the reference point. The second is a physical-programming-based MOABC algorithm in which physical programming is used to set the region of interest of a decision maker.

    In this dissertation, the four developed algorithms are applied to digital filter design problems. The ABC-AMR algorithm is used to design Types 3 and 4 linear-phase FIR differentiators, and the results are compared to those obtained by the original ABC algorithm, three improved ABC algorithms, and the Parks-McClellan algorithm. The constrained ABC-AMR algorithm is applied to the design of sparse Type 1 linear-phase FIR filters of filter orders 60, 70 and 80, and the results are compared to three state-of-the-art design methods. The reference-point-based multiobjective ABC algorithm is used to design asymmetric lowpass, highpass, bandpass and bandstop FIR filters, and the results are compared to those obtained by the preference-based multiobjective differential evolution algorithm. The physical-programming-based multiobjective ABC algorithm is used to design IIR lowpass, highpass and bandpass filters, and the results are compared to three state-of-the-art design methods. Based on the obtained design results, the four algorithms are shown to be competitive with the state-of-the-art design methods.
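    For reference, the sketch below shows the ABC neighbor-search step with a modification rate, the quantity that ABC-AMR adapts: instead of updating exactly one coordinate as in the original ABC, each coordinate is updated with probability `mr`, shifting the balance between exploration and exploitation. The adaptation rule for `mr` itself is not shown, and all names are illustrative.

```python
# Illustrative ABC search step with a modification rate (MR); the AMR
# adaptation of `mr` over iterations is omitted.
import numpy as np

rng = np.random.default_rng(0)

def generate_candidate(foods, i, mr):
    """Perturb food source i relative to a random neighbor k: each
    coordinate is updated with probability `mr` (classic ABC updates
    exactly one coordinate, i.e. behaves like a very small mr)."""
    x = foods[i].copy()
    k = rng.choice([j for j in range(len(foods)) if j != i])
    mask = rng.random(x.size) < mr
    if not mask.any():
        mask[rng.integers(x.size)] = True   # update at least one coordinate
    phi = rng.uniform(-1.0, 1.0, size=x.size)
    x[mask] += phi[mask] * (x[mask] - foods[k][mask])
    return x
```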

    CMA-ES with Learning Rate Adaptation: Can CMA-ES with Default Population Size Solve Multimodal and Noisy Problems?

    The covariance matrix adaptation evolution strategy (CMA-ES) is one of the most successful methods for solving black-box continuous optimization problems. One practically useful aspect of the CMA-ES is that it can be used without hyperparameter tuning. However, the hyperparameter settings still have a considerable impact, especially for difficult tasks such as solving multimodal or noisy problems. In this study, we investigate whether the CMA-ES with default population size can solve multimodal and noisy problems. To perform this investigation, we develop a novel learning rate adaptation mechanism for the CMA-ES, such that the learning rate is adapted so as to maintain a constant signal-to-noise ratio. We investigate the behavior of the CMA-ES with the proposed learning rate adaptation mechanism through numerical experiments, and compare the results with those obtained for the CMA-ES with a fixed learning rate. The results demonstrate that, when the proposed learning rate adaptation is used, the CMA-ES with default population size works well on multimodal and/or noisy problems, without the need for extremely expensive learning rate tuning.
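    The abstract does not give the adaptation rule, so the sketch below only illustrates the underlying idea: estimate a signal-to-noise ratio of the parameter updates from exponential moving averages and nudge the learning rate multiplicatively toward a target SNR. All estimators, constants, and names are assumptions for illustration, not the paper's mechanism.

```python
# Idea-only sketch: keep an estimated update SNR near a target by scaling
# the learning rate; the estimators and constants are illustrative.
import numpy as np

class SNRLearningRate:
    def __init__(self, eta=0.1, target_snr=1.0, beta=0.1, damping=0.1):
        self.eta, self.target = eta, target_snr
        self.beta, self.damping = beta, damping
        self.mean = None   # EMA of updates: the "signal" estimate
        self.sq = 0.0      # EMA of squared update norms

    def step(self, delta):
        """Feed one raw update vector `delta`; return the adapted eta."""
        if self.mean is None:
            self.mean = np.zeros_like(delta)
        self.mean = (1 - self.beta) * self.mean + self.beta * delta
        self.sq = (1 - self.beta) * self.sq + self.beta * float(delta @ delta)
        signal = float(self.mean @ self.mean)
        noise = max(self.sq - signal, 1e-12)   # residual variance estimate
        snr = signal / noise
        # raise eta when updates agree (high SNR), lower it when noisy
        self.eta *= float(np.exp(self.damping *
                                 np.clip(snr / self.target - 1.0, -1.0, 1.0)))
        return self.eta
```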