
    SQG-Differential Evolution for difficult optimization problems under a tight function evaluation budget

    In the context of industrial engineering, it is important to integrate efficient computational optimization methods into the product development process. Some of the most challenging simulation-based engineering design optimization problems are characterized by a large number of design variables, the absence of analytical gradients, highly non-linear objectives, and a limited function evaluation budget. Although a huge variety of optimization algorithms is available, developing and selecting efficient algorithms for problems with these industrially relevant characteristics remains a challenge. In this communication, a hybrid variant of Differential Evolution (DE) is introduced which combines aspects of Stochastic Quasi-Gradient (SQG) methods within the framework of DE, in order to improve optimization efficiency on problems with the aforementioned characteristics. The performance of the resulting derivative-free algorithm is compared with other state-of-the-art DE variants on 25 commonly used benchmark functions, under a tight budget of 1,000 function evaluations. The experimental results indicate that the new algorithm performs excellently on the 'difficult' (high-dimensional, multi-modal, inseparable) test functions. The operations used in the proposed mutation scheme are computationally inexpensive and can easily be implemented in existing differential evolution variants, or other population-based optimization algorithms, in a few lines of program code as a non-invasive optional setting. Besides the applicability of the presented algorithm by itself, the described concepts can serve as a useful and interesting addition to the algorithmic operators in the frameworks of heuristics and evolutionary optimization and computing.
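    As a rough illustration of the idea (not the authors' exact operator), the sketch below blends a stochastic quasi-gradient-style step, estimated from objective values the population has already paid for, into a standard DE/rand/1 mutation. The weight w, the scaling, and the normalization are illustrative assumptions.

    import numpy as np

    def sqg_de_mutation(pop, f_vals, F=0.8, w=0.5, rng=np.random.default_rng()):
        """DE/rand/1 mutation augmented with a stochastic quasi-gradient term.

        pop    : (NP, d) array of current candidate solutions
        f_vals : (NP,)   array of their (already evaluated) objective values
        Returns a (NP, d) array of mutant vectors.
        """
        NP, d = pop.shape
        mutants = np.empty_like(pop)
        for i in range(NP):
            # classic DE/rand/1: three mutually distinct random individuals
            r1, r2, r3 = rng.choice(NP, size=3, replace=False)
            diff = pop[r2] - pop[r3]
            de_step = F * diff
            # SQG-style direction: the same difference vector, rescaled by the
            # observed objective change, acts as a crude derivative-free
            # descent direction (no extra function evaluations needed)
            df = f_vals[r2] - f_vals[r3]
            sqg_step = -F * df * diff / (diff @ diff + 1e-12)
            mutants[i] = pop[r1] + (1.0 - w) * de_step + w * sqg_step
        return mutants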

    Are random coefficients needed in particle swarm optimization for simulation-based ship design?

    Simulation-based design optimization (SBDO) methods integrate computer simulations, design modification tools, and optimization algorithms. In hydrodynamic applications, objective functions are often computationally expensive and likely noisy, their derivatives are not directly provided, and the existence of local minima cannot be excluded a priori, which motivates the use of derivative-free global optimization algorithms. Such algorithms (e.g., Particle Swarm Optimization, PSO) usually follow a stochastic formulation, requiring computationally expensive numerical experiments in order to provide statistically significant results. The objective of the present work is to investigate the effects of using (versus suppressing) random coefficients in PSO for ship hydrodynamics SBDO. A comparison of 1,000 random PSO runs against deterministic PSO (DPSO) is presented on 12 well-known scalable test problems, with dimensionality ranging from two to fifty. A total of 588 test functions is considered and more than 500,000 optimization runs are performed and evaluated. The results are discussed based on the probability of success of random PSO versus DPSO. Finally, a comparison of random PSO to DPSO is shown for the hull-form optimization of the DTMB 5415 model. In summary, the test functions show the robustness of DPSO, which outperforms random PSO with odds of 30/1 for low-dimensional problems (indicatively N ≤ 30) and 5/1 for high-dimensional problems (N > 30). The hull-form SBDO (N = 11) shows DPSO outperforming PSO with odds of 20/1. The use of DPSO in the SBDO context is therefore advised, especially when computationally expensive analyses are involved in the optimization.
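    A minimal sketch of the distinction under study: the standard PSO velocity update with its two uniform random coefficients, and the deterministic variant obtained by suppressing them. The parameter values are common defaults, not necessarily those used in the paper.

    import numpy as np

    def pso_velocity(v, x, p_best, g_best, w=0.72, c1=1.49, c2=1.49,
                     deterministic=True, rng=np.random.default_rng()):
        """One velocity update for a single particle.

        deterministic=True suppresses the random coefficients (DPSO);
        deterministic=False recovers the standard stochastic PSO update.
        """
        if deterministic:
            r1 = r2 = 1.0                                       # DPSO: fixed
        else:
            r1 = rng.random(x.shape)                            # U(0, 1) draws
            r2 = rng.random(x.shape)
        return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)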

    Utilizing machine learning algorithms in the ensemble-based optimization (EnOpt) ‎method for enhancing gradient estimation‎

    High or even prohibitive computational cost is one of the key limitations of robust optimization using the Ensemble-based Optimization (EnOpt) approach, especially when a computationally demanding forward model is involved (e.g., a reservoir simulation model). This is because, in EnOpt, many realizations of the forward model are considered to represent uncertainty, and many forward-modeling runs must be performed to estimate gradients for optimization. This work aims to develop, investigate, and discuss an approach, named EnOpt-ML in the thesis, that utilizes machine learning (ML) methods to speed up EnOpt, particularly the gradient estimation step.

    The significance of any deviations is investigated on three optimization test functions chosen for their different characteristics: Himmelblau, Bukin N.6, and Rosenbrock. A thousand simulations are performed for each configuration setting in order to carry out the analyses and compare the means and standard deviations of the ensembles. Selected cases are shown as examples of the differences in gradient learning curves between EnOpt and EnOpt-ML, and of the spread of their samples over the test function.

    Objective 1: build a code with a main function that allows easy configuration and tweaking of the parameters of EnOpt, the ML algorithms, and the test functions (or objective functions in general, with two variables). The code for the test functions, ML algorithms, plotting, and saving of simulation data is defined outside of that main function; the code is attached in the Appendix.

    Objective 2: test and analyze the results to detect any notable improvement of EnOpt-ML over EnOpt. Himmelblau was used as the primary test function, modifying specific parameters one at a time starting from a base configuration case to allow comparisons. After gathering the effects of those configurations, an example where the improvement appeared interesting was presented and then applied to the other two test functions and analyzed.

    The main objective has been to reduce the number of objective function evaluations while not considerably reducing the optimization quality. EnOpt-ML yielded slightly better results than EnOpt under the same conditions when the maximum number of objective function evaluations was fixed through the number of samples and the iteration at which this number is reduced.
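    A minimal sketch of the idea, assuming a Gaussian-process regressor as the ML method (the thesis compares several) and an isotropic Gaussian perturbation of the controls: a few expensive forward runs train a surrogate, which then supplies cheap samples for the usual ensemble cross-covariance gradient estimate. All names and the sigma**2 scaling (an isotropic-covariance simplification) are illustrative.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def enopt_ml_gradient(x, objective, n_true=10, n_cheap=200, sigma=0.1,
                          rng=np.random.default_rng()):
        """EnOpt-style gradient estimate where an ML surrogate supplies
        extra samples: n_true expensive evaluations train the surrogate,
        n_cheap surrogate evaluations feed the cross-covariance estimate."""
        X_true = x + sigma * rng.standard_normal((n_true, x.size))
        y_true = np.array([objective(xi) for xi in X_true])  # expensive runs
        model = GaussianProcessRegressor().fit(X_true, y_true)
        X = x + sigma * rng.standard_normal((n_cheap, x.size))
        y = model.predict(X)                                  # cheap ML runs
        dX, dy = X - X.mean(axis=0), y - y.mean()
        # ensemble cross-covariance between controls and objective values,
        # rescaled by the (isotropic) perturbation variance
        return dX.T @ dy / (sigma**2 * (n_cheap - 1))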

    Impact analysis of crossovers in a multi-objective evolutionary algorithm

    Multi-objective optimization has become mainstream because many real-world problems in all fields of engineering and science are naturally posed as multi-objective optimization problems (MOPs). MOPs usually consist of two or more conflicting objective functions that demand trade-off solutions. Multi-objective evolutionary algorithms (MOEAs) are extremely useful and well-suited for solving MOPs due to their population-based nature. Unlike traditional methods, MOEAs evolve their population of solutions in a natural way and search for compromise solutions in a single simulation run. These algorithms make use of various intrinsic search operators in an efficient manner. In this paper, we experimentally study the impact of multiple different crossovers within the framework of the multi-objective evolutionary algorithm based on decomposition (MOEA/D) and evaluate their performance on the test instances of the 2009 IEEE Congress on Evolutionary Computation (CEC'09) developed for the MOEA competition. Based on the experiments carried out, we observe that the variation operators used are a main source of improvement in the algorithmic performance of MOEA/D on the complicated CEC'09 test problems.
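    For concreteness, a sketch of two of the building blocks involved: a DE-style crossover as one illustrative example of the operators compared, and the Tchebycheff scalarization that MOEA/D typically uses to decide whether an offspring replaces a subproblem's incumbent. Names and defaults are assumptions, not the paper's exact settings.

    import numpy as np

    def de_crossover(x, a, b, F=0.5, CR=0.9, rng=np.random.default_rng()):
        """DE-style variation: differential mutation plus binomial crossover."""
        mutant = x + F * (a - b)
        mask = rng.random(x.size) < CR
        mask[rng.integers(x.size)] = True   # guarantee at least one gene changes
        return np.where(mask, mutant, x)

    def tchebycheff(f, weight, z_star):
        """Scalarizing function comparing solutions on one MOEA/D subproblem.

        f      : objective vector of a solution
        weight : subproblem weight vector
        z_star : current ideal (best-so-far) point, componentwise
        """
        return np.max(weight * np.abs(f - z_star))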

    Enhancement of Voltage Deviation in a Power System by Rectifying OPF Troubles

    This paper presents an evolutionary approach to solving the optimal power flow (OPF) problem. For the optimal settings of the OPF control variables, the proposed approach utilizes a Particle Swarm Optimization (PSO) algorithm. The approach is tested on the standard IEEE 30-bus test system with various objective functions, such as voltage deviation enhancement and voltage profile improvement. The IPSO-TVAC method exhibits high-quality convergence behavior. Furthermore, the study shows the potential of the proposed approach and illustrates its usefulness and robustness in solving the OPF problem for the systems considered. The simulation results of the proposed approach are lower (better) than those of other optimization algorithms reported in the literature.
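    The TVAC ingredient of IPSO-TVAC can be sketched as a linear schedule on the two acceleration coefficients, favoring exploration (large cognitive weight) early and convergence (large social weight) late; the endpoint values below are common defaults from the TVAC literature, not necessarily those of the paper.

    def tvac_coefficients(t, t_max, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
        """Time-varying acceleration coefficients at iteration t of t_max:
        c1 decays (cognitive/exploration), c2 grows (social/convergence)."""
        frac = t / t_max
        c1 = c1_i + (c1_f - c1_i) * frac
        c2 = c2_i + (c2_f - c2_i) * frac
        return c1, c2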

    Quantile Optimization via Multiple Timescale Local Search for Black-box Functions

    We consider quantile optimization of black-box functions that are estimated with noise. We propose two new iterative three-timescale local search algorithms. The first algorithm uses an appropriately modified finite-difference-based gradient estimator that requires 2d + 1 samples of the black-box function per iteration of the algorithm, where d is the number of decision variables (dimension of the input vector). For higher-dimensional problems, this algorithm may not be practical if the black-box function estimates are expensive. The second algorithm employs a simultaneous-perturbation-based gradient estimator that uses only three samples for each iteration regardless of problem dimension. Under appropriate conditions, we show the almost sure convergence of both algorithms. In addition, for the class of strongly convex functions, we further establish their (finite-time) convergence rate through a novel fixed-point argument. Simulation experiments indicate that the algorithms work well on a variety of test problems and compare well with recently proposed alternative methods.
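    A heavily simplified two-timescale sketch of the simultaneous-perturbation variant (the paper's algorithms use three timescales, an extra sample, and carefully tuned step sizes; alpha, beta, and the helper names here are illustrative): a fast stochastic-approximation recursion tracks the tau-quantile at the two perturbed points, while a slower step descends along the resulting quantile-difference gradient estimate.

    import numpy as np

    def sp_quantile_step(x, q_est, sample_f, alpha, beta, c, tau=0.9,
                         rng=np.random.default_rng()):
        """One iteration of a simultaneous-perturbation quantile search.

        sample_f(x) returns one noisy sample of the black-box function at x;
        q_est is a dict of running tau-quantile estimates at the two
        perturbed points, carried across iterations.
        """
        delta = rng.choice([-1.0, 1.0], size=x.size)   # Rademacher perturbation
        y_plus = sample_f(x + c * delta)
        y_minus = sample_f(x - c * delta)
        # fast timescale: stochastic-approximation quantile tracking
        q_est['+'] += beta * (tau - (y_plus <= q_est['+']))
        q_est['-'] += beta * (tau - (y_minus <= q_est['-']))
        # slow timescale: SP gradient step on the quantile difference
        g_hat = (q_est['+'] - q_est['-']) / (2.0 * c) / delta
        return x - alpha * g_hat, q_est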

    A hybrid of Bayesian-based global search with Hooke–Jeeves local refinement for multi-objective optimization problems

    The proposed multi-objective optimization algorithm hybridizes random global search with a local refinement algorithm. The global search algorithm mimics the Bayesian multi-objective optimization algorithm: the site at which the objective functions are next computed is selected by a randomized simulation of the bi-objective selection performed by the Bayesian-based algorithm. The advantage of the new algorithm is that it avoids the inner complexity of Bayesian algorithms. A version of the Hooke–Jeeves algorithm is adapted for the local refinement of the approximation of the Pareto front. The developed hybrid algorithm is tested under conditions previously applied to test other Bayesian algorithms, so that performance can be compared. Further experiments assess the efficiency of the proposed algorithm under conditions where previous versions of Bayesian algorithms were not appropriate because of the number of objectives and/or the dimensionality of the decision space.
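    For reference, a bare-bones single-objective version of the exploratory-move core of Hooke–Jeeves (the pattern-move acceleration is omitted for brevity); the paper adapts a variant of this scheme to refine points of the Pareto-front approximation.

    import numpy as np

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=1000):
        """Derivative-free pattern search: probe each coordinate in both
        directions, keep improvements, and shrink the step when stuck."""
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(max_iter):
            improved = False
            for i in range(x.size):            # exploratory coordinate moves
                for s in (step, -step):
                    trial = x.copy()
                    trial[i] += s
                    f_trial = f(trial)
                    if f_trial < fx:
                        x, fx = trial, f_trial
                        improved = True
                        break
            if not improved:
                step *= shrink                 # no move helped: refine the mesh
                if step < tol:
                    break
        return x, fx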

    An Efficient Global Optimization Scheme for Building Energy Simulation Based on Linear Radial Basis Function

    Motivation: Interest in building performance optimization is increasing considerably, since design goals are moving from the sole target of energy saving to the optimization of overall performance, cost, and sustainability objectives. Evolutionary algorithms coupled with building simulation codes are often used in academic research; however, their application in actual building design is limited. Indeed, the high number of expensive simulation runs required by evolutionary algorithms strongly limits their suitability for professional practice. For this reason, an efficient optimization scheme is essential for the diffusion of building performance optimization tools outside the academic world.

    What was done: The research focuses on the development of an efficient global optimization (EGO) scheme based on a radial basis function network (RBFN) meta-model that emulates the expensive function evaluations performed by the building energy simulation. In this surrogate model, each cost function is approximated by a linear combination of unknown coefficients multiplying a set of linear radial basis functions. In the proposed method, the surrogate model is first used within the evolutionary algorithm to find the optimal solutions; the actual fitness functions are then evaluated at those optimal points by means of building simulation, and the surrogate model is updated. These steps are repeated until the convergence criterion is met. The optimization scheme has been implemented in Matlab and verified on several test cases. The test bed of the method is the optimal refurbishment of three simplified existing buildings, for which the optimal solutions have also been calculated using a brute-force approach. Finally, the EGO performance was compared with that of the popular Non-dominated Sorting Genetic Algorithm (NSGA-II).

    Expected benefits of what was done: The results of this research show how the EGO algorithm is able to find a large number of optimal solutions with a reduced number of expensive simulation runs. This makes it possible to apply the algorithm to the optimization of building projects that use expensive simulation codes such as lighting models, CFD codes, or coupled dynamic simulation of building and HVAC systems.
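    A minimal sketch of the surrogate's core, assuming plain linear-RBF interpolation: each cost function is a linear combination of unknown coefficients multiplying basis functions phi(r) = r centered at the sampled designs. The system below omits the polynomial tail sometimes added to linear RBFs for guaranteed well-posedness.

    import numpy as np

    def fit_linear_rbf(X, y):
        """Fit y(x) ~ sum_j w_j * ||x - X_j|| to training designs X (n, d)
        with simulated responses y (n,); returns a predictor function."""
        # pairwise distances between training points form the interpolation matrix
        Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        w = np.linalg.solve(Phi, y)          # the unknown coefficients
        def predict(x_new):
            phi = np.linalg.norm(x_new - X, axis=-1)
            return phi @ w
        return predict

    In the optimization loop, predict stands in for the building simulation when the evolutionary algorithm evaluates candidates; only the candidates it selects are then re-evaluated with the real simulator before refitting.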

    A Multi-points Criterion for Deterministic Parallel Global Optimization based on Gaussian Processes

    The optimization of expensive-to-evaluate functions generally relies on metamodel-based exploration strategies. Many deterministic global optimization algorithms used in the field of computer experiments are based on Kriging (Gaussian process regression). Starting with a spatial predictor that includes a measure of uncertainty, they proceed by iteratively choosing the point maximizing a criterion that is a compromise between predicted performance and uncertainty. Distributing the evaluation of such numerically expensive objective functions over many processors is an appealing idea. Here we investigate a multi-points optimization criterion, the multi-points expected improvement (q-EI), aimed at choosing several points at the same time. An analytical expression of the q-EI is given for q = 2, and a consistent statistical estimate is given for the general case. We then propose two classes of heuristic strategies meant to approximately optimize the q-EI, and apply them to Gaussian processes and to the classical Branin-Hoo test function. It is finally demonstrated on the covered example that the latter strategies perform as well as the best Latin hypercube and uniform designs found by simulation (2,000 designs drawn at random for every q in [1, 10]).
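    The consistent estimate mentioned for general q can be sketched as a plain Monte Carlo average over joint samples of the GP posterior at the q candidate points (function and variable names are illustrative):

    import numpy as np

    def q_ei_mc(mu, cov, f_min, n_mc=10_000, rng=np.random.default_rng()):
        """Monte Carlo estimate of the multi-points expected improvement,
        q-EI = E[max(f_min - min_i Y_i, 0)].

        mu, cov : GP posterior mean (q,) and covariance (q, q) at the candidates
        f_min   : best objective value observed so far
        """
        Y = rng.multivariate_normal(mu, cov, size=n_mc)   # joint posterior draws
        improvement = np.maximum(f_min - Y.min(axis=1), 0.0)
        return improvement.mean()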