
    Constraint handling strategies in Genetic Algorithms application to optimal batch plant design

    Optimal batch plant design is a recurrent issue in Process Engineering, which can be formulated as a Mixed Integer Non-Linear Programming (MINLP) optimisation problem involving specific constraints, typically compliance with a time horizon for the synthesis of the various products. Genetic Algorithms are a common option for solving these problems, but their basic operating mode is not always well-suited to every kind of constraint treatment: when constraints cannot be integrated in the variable encoding or accounted for through adapted genetic operators, their handling turns out to be a thorny issue. The aim of this study is therefore to test several constraint handling techniques on a mid-size example in order to determine which one is best suited, in the framework of one particular problem formulation. The investigated methods are the elimination of infeasible individuals, the use of a penalty term added to the minimised criterion, the relaxation of the discrete variables' upper bounds, dominance-based tournaments and, finally, a multiobjective strategy. The numerical computations, analysed in terms of result quality and computational time, show the superiority of the elimination technique in terms of the former criterion, but only when the latter does not become a bottleneck. Moreover, when the problem complexity makes the random location of the feasible space too difficult, a single tournament technique proves to be the most efficient one.
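    As a rough illustration of the dominance-based tournament mentioned in this abstract, the sketch below applies Deb-style feasibility rules in a binary tournament: a feasible individual beats an infeasible one, two feasible individuals are compared on the objective, and two infeasible ones on their total constraint violation. The encoding, objective and constraints are hypothetical placeholders, not the batch plant design formulation of the paper.

```python
import random

def total_violation(constraints, x):
    """Sum of positive parts of inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def tournament(pop, objective, constraints):
    """Binary tournament with feasibility rules:
    feasible beats infeasible; feasible vs feasible -> lower objective;
    infeasible vs infeasible -> lower total violation."""
    a, b = random.sample(pop, 2)
    va, vb = total_violation(constraints, a), total_violation(constraints, b)
    if va == 0.0 and vb == 0.0:
        return a if objective(a) <= objective(b) else b
    if va == 0.0:
        return a
    if vb == 0.0:
        return b
    return a if va <= vb else b

# Toy usage: minimise sum(x) subject to x[0] + x[1] >= 1, i.e. 1 - x0 - x1 <= 0.
pop = [[random.random() for _ in range(3)] for _ in range(20)]
parent = tournament(pop, objective=sum, constraints=[lambda x: 1 - x[0] - x[1]])
```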

    Porcellio scaber algorithm (PSA) for solving constrained optimization problems

    In this paper, we extend a bio-inspired algorithm called the porcellio scaber algorithm (PSA) to solve constrained optimization problems, including a constrained mixed discrete-continuous nonlinear optimization problem. Our extensive experimental results on benchmark optimization problems show that the PSA performs better than many existing methods and algorithms. The results indicate that the PSA is a promising algorithm for constrained optimization.
    Comment: 6 pages, 1 figure
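    The abstract does not spell out how PSA is adapted; a common way to hand a constrained mixed discrete-continuous problem to a black-box metaheuristic of this kind is to wrap the objective so that discrete components are rounded and violations enter as a static penalty. The sketch below shows such a generic wrapper; it is an illustration only, not the PSA update rule, and all names are hypothetical.

```python
def penalized(objective, ineq_constraints, discrete_idx, weight=1e3):
    """Wrap a mixed discrete-continuous constrained problem as an
    unconstrained black-box objective: round discrete variables and
    add a static penalty for violated constraints g_i(x) <= 0."""
    def wrapped(x):
        z = [round(v) if i in discrete_idx else v for i, v in enumerate(x)]
        violation = sum(max(0.0, g(z)) for g in ineq_constraints)
        return objective(z) + weight * violation
    return wrapped

# Toy usage: minimise (x0 - 2.3)^2 + x1^2 with x0 integer and x0 + x1 >= 2.
f = penalized(lambda z: (z[0] - 2.3) ** 2 + z[1] ** 2,
              [lambda z: 2 - z[0] - z[1]],
              discrete_idx={0})
print(f([2.0, 0.1]))   # penalised value at one candidate point
```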

    Adaptive Ranking Based Constraint Handling for Explicitly Constrained Black-Box Optimization

    A novel explicit constraint handling technique for the covariance matrix adaptation evolution strategy (CMA-ES) is proposed. The proposed constraint handling exhibits two invariance properties: invariance to any element-wise increasing transformation of the objective and constraint functions, and invariance to any affine transformation of the search space. The technique virtually transforms a constrained optimization problem into an unconstrained one by considering an adaptively weighted sum of the ranking of the objective function values and the ranking of the constraint violations, where each violation is measured by the Mahalanobis distance between a candidate solution and its projection onto the boundary of the constraint. Simulation results show that the CMA-ES with the proposed constraint handling exhibits the affine invariance and performs similarly to the CMA-ES on the unconstrained counterparts.
    Comment: 9 pages
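    The core ranking-sum idea can be sketched as below. This is a simplification: it uses raw constraint violations in place of the Mahalanobis-distance measure and a fixed weight in place of the paper's adaptation mechanism, and the function names are illustrative.

```python
import numpy as np

def ranks(values):
    """Rank of each entry (0 = best/smallest), ties broken by order."""
    order = np.argsort(values)
    r = np.empty(len(values), dtype=float)
    r[order] = np.arange(len(values))
    return r

def ranking_fitness(obj_values, violations, alpha=1.0):
    """Combined fitness used to sort candidates: weighted sum of the
    objective ranking and the constraint-violation ranking.
    violations[i] is the total violation of candidate i (0 if feasible)."""
    return ranks(np.asarray(obj_values)) + alpha * ranks(np.asarray(violations))

# Toy usage inside one generation of an evolution strategy:
obj = [3.2, 1.1, 2.5, 0.9]
vio = [0.0, 4.0, 0.0, 1.5]
order = np.argsort(ranking_fitness(obj, vio))   # best candidates first
print(order)
```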

    Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control

    Constrained optimization of high-dimensional numerical problems plays an important role in many scientific and industrial applications. In many industrial applications, function evaluations are severely limited and no analytical information about the objective and constraint functions is available. For such expensive black-box optimization tasks, the constrained optimization algorithm COBRA was proposed, which uses RBF surrogate models for both the objective and the constraint functions. COBRA has shown remarkable success in reliably solving complex benchmark problems in less than 500 function evaluations. Unfortunately, COBRA requires careful parameter adjustment in order to do so. In this work we present a new self-adjusting algorithm, SACOBRA, which is based on COBRA and is capable of achieving high-quality results with very few function evaluations and no parameter tuning. Performance profiles on a set of benchmark problems (G-problems, MOPTA08) show that SACOBRA consistently outperforms any COBRA algorithm with a fixed parameter setting. We analyze the importance of the several new elements in SACOBRA and find that each of them contributes to the overall optimization performance. We discuss the reasons behind this and thereby gain a better understanding of high-quality RBF surrogate modeling.
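    A minimal sketch of one surrogate-assisted iteration of the kind used by COBRA/SACOBRA is given below, assuming SciPy is available: fit RBF surrogates to the objective and to each constraint on the points evaluated so far, then minimise the objective surrogate subject to the constraint surrogates with a local solver to propose the next expensive evaluation. The real algorithms add distance requirements, repair steps and the adaptive components analysed in the paper, none of which are shown here.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def surrogate_step(X, y, G, bounds):
    """Fit RBF models of the objective (y) and each constraint column of G
    (g_i(x) <= 0) on the evaluated points X, then minimise the objective
    surrogate subject to the constraint surrogates, starting from the best
    feasible point found so far."""
    f_hat = RBFInterpolator(X, y)
    g_hats = [RBFInterpolator(X, G[:, j]) for j in range(G.shape[1])]
    cons = [{'type': 'ineq', 'fun': lambda x, g=g: -g(x.reshape(1, -1))[0]}
            for g in g_hats]                     # SLSQP expects fun(x) >= 0
    x0 = X[np.argmin(np.where((G <= 0).all(axis=1), y, np.inf))]
    res = minimize(lambda x: f_hat(x.reshape(1, -1))[0], x0,
                   bounds=bounds, constraints=cons, method='SLSQP')
    return res.x                                 # next point to evaluate exactly

# Toy usage on a 2-D problem with one constraint x0 + x1 >= 1:
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(12, 2))
y = ((X - 0.7) ** 2).sum(axis=1)
G = (1 - X.sum(axis=1)).reshape(-1, 1)
print(surrogate_step(X, y, G, bounds=[(0, 1), (0, 1)]))
```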