
    A Multi-Layer Line Search Method to Improve the Initialization of Optimization Algorithms

    Get PDF
    We introduce a novel metaheuristic methodology to improve the initialization of a given deterministic or stochastic optimization algorithm. Our objective is to improve the performance of the considered algorithm, called the core optimization algorithm, by reducing its number of cost function evaluations, increasing its success rate and boosting the precision of its results. In our approach, the core optimization is considered as a sub-optimization problem for a multi-layer line search method. The approach is presented and implemented for various particular core optimization algorithms: Steepest Descent, Heavy-Ball, Genetic Algorithm, Differential Evolution and Controlled Random Search. We validate our methodology on a set of low- and high-dimensional benchmark problems (i.e., problems of dimension between 2 and 1000). The results are compared to those obtained with the core optimization algorithms alone and with two additional global optimization methods (Direct Tabu Search and Continuous Greedy Randomized Adaptive Search), which also aim at improving the initial condition for the core algorithms. The numerical results seem to indicate that our approach improves the performance of the core optimization algorithms and yields algorithms more efficient than the other optimization methods studied here. A Matlab optimization package called "Global Optimization Platform" (GOP), implementing the algorithms presented here, has been developed and can be downloaded at: http://www.mat.ucm.es/momat/software.ht
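    The abstract describes treating the choice of the core optimizer's starting point as a sub-problem solved by a line-search layer. The GOP package itself is Matlab; below is only a minimal Python sketch of that division of labour under stated assumptions: a coarse line-search initializer over random directions (the sampling rule, parameters and the Rastrigin test function are illustrative, not the paper's multi-layer scheme) feeds its best point to a core optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # Many-minima benchmark of the kind used in the paper's test set
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def line_search_initializer(f, dim, bounds, n_directions=20, n_steps=15, rng=None):
    """Pick a starting point by coarse line searches along random directions.

    Illustrative stand-in for the paper's multi-layer line search; the
    sampling rule and all parameters are assumptions.
    """
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    center = rng.uniform(lo, hi, dim)
    best_x, best_f = center, f(center)
    for _ in range(n_directions):
        d = rng.normal(size=dim)
        d /= np.linalg.norm(d)
        for t in np.linspace(-(hi - lo), hi - lo, n_steps):
            x = np.clip(center + t * d, lo, hi)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        center = best_x  # the next batch of line searches starts from the best point so far
    return best_x

# Core optimizer (a quasi-Newton method here, standing in for e.g. Steepest Descent)
x0 = line_search_initializer(rastrigin, dim=10, bounds=(-5.12, 5.12), rng=0)
result = minimize(rastrigin, x0, method="BFGS")
print(result.x, result.fun)
```

    The point of the sketch is only the structure: the initializer spends a modest evaluation budget placing x0 in a promising basin, and the core method then refines it.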

    3D shape optimisation of a low-pressure turbine stage

    Full text link
    The possibility of reducing the flow losses in a low-pressure turbine stage has been investigated in an iterative process using a novel hybrid optimisation algorithm. Values of the maximised objective function, the isentropic efficiency, are found from 3D RANS computations of the flowpath geometry, which is modified during the optimisation process. To preserve the global flow conditions, constraints have been imposed on the mass flow rate and reaction. Among the optimised parameters are the stator and rotor twist angles as well as stator sweep and lean, both straight and compound. Blade profiles remained unchanged during the optimisation. A new hybrid stochastic-deterministic algorithm was used for the optimisation of the flowpath: the bat algorithm was combined with the direct search method of Nelder and Mead in order to refine the best solution obtained from the standard bat algorithm. The method was tested on a wide variety of well-known test functions, and its results were compared with those of other stochastic and deterministic methods. The optimisation yields new 3D stage designs with increased efficiency compared to the original design. This work was supported by the National Science Centre, Grant No. 2015/17/N/ST8/01782.
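    The abstract only outlines the hybrid, so the sketch below shows the general pattern it names: a basic bat algorithm (Yang's scheme, here with fixed loudness and pulse rate) supplies a global candidate, which a Nelder-Mead simplex search then refines. The Ackley test function and all parameters are assumptions standing in for the expensive 3D RANS objective.

```python
import numpy as np
from scipy.optimize import minimize

def ackley(x):
    # Standard benchmark standing in for the turbine-stage objective
    x = np.asarray(x)
    n = len(x)
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def bat_algorithm(f, dim, lo, hi, n_bats=30, n_iter=200, f_min=0.0, f_max=2.0,
                  loudness=0.9, pulse_rate=0.5, rng=None):
    """Basic bat algorithm (after Yang, 2010); loudness and pulse rate kept fixed."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(lo, hi, (n_bats, dim))
    v = np.zeros((n_bats, dim))
    fit = np.array([f(xi) for xi in x])
    best, best_f = x[np.argmin(fit)].copy(), fit.min()
    for _ in range(n_iter):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() > pulse_rate:  # local random walk around the current best
                cand = np.clip(best + 0.01 * (hi - lo) * rng.normal(size=dim), lo, hi)
            fc = f(cand)
            if fc <= fit[i] and rng.random() < loudness:
                x[i], fit[i] = cand, fc
            if fc < best_f:
                best, best_f = cand.copy(), fc
    return best

best = bat_algorithm(ackley, dim=5, lo=-5.0, hi=5.0, rng=1)
refined = minimize(ackley, best, method="Nelder-Mead")  # deterministic refinement step
print(best, refined.x, refined.fun)
```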

    Parallel Deterministic and Stochastic Global Minimization of Functions with Very Many Minima

    Get PDF
    The optimization of three problems with high dimensionality and many local minima is investigated under five different optimization algorithms: DIRECT, simulated annealing, Spall's SPSA algorithm, the KNITRO package, and QNSTOP, a new algorithm developed at Indiana University.
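    DIRECT, SPSA, KNITRO and QNSTOP are not all available in a single open library, so the hedged sketch below only illustrates the kind of head-to-head comparison described, using two global optimizers that ship with SciPy (dual annealing as a simulated-annealing variant and differential evolution) on a many-minima benchmark; the function, dimension and budgets are assumptions.

```python
import numpy as np
from scipy.optimize import dual_annealing, differential_evolution

def griewank(x):
    # High-dimensional test function with very many local minima
    x = np.asarray(x)
    i = np.arange(1, len(x) + 1)
    return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))

dim = 20
bounds = [(-600, 600)] * dim

sa = dual_annealing(griewank, bounds, seed=0, maxiter=500)
de = differential_evolution(griewank, bounds, seed=0, maxiter=500, tol=1e-8)

print("simulated annealing (dual_annealing):", sa.fun, "evaluations:", sa.nfev)
print("differential evolution:             ", de.fun, "evaluations:", de.nfev)
```

    Comparing both the best value found and the number of function evaluations mirrors the two axes on which such studies are usually judged.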

    A hybrid multiagent approach for global trajectory optimization

    Get PDF
    In this paper we consider a global optimization method for space trajectory design problems. The method, which aims at finding not only the global minimizer but a whole set of low-lying local minimizers (corresponding to a set of different design options), is based on a domain decomposition technique in which each subdomain is evaluated through a procedure based on the evolution of a population of agents. The method is applied to two space trajectory design problems and compared with existing deterministic and stochastic global optimization methods.
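    The abstract gives only the outline of the method, so the following is a sketch of that outline rather than the paper's algorithm: split the box-constrained domain into sub-boxes, run an independent population-based search in each (SciPy's differential evolution stands in for the paper's agent population), and collect the low-lying minimizers rather than just the global one. The two-dimensional Styblinski-Tang function and the 2x2 decomposition are illustrative choices.

```python
import itertools
import numpy as np
from scipy.optimize import differential_evolution

def objective(x):
    # Styblinski-Tang function: one global and several nearby local minima,
    # a toy stand-in for a trajectory-design cost with distinct basins
    x = np.asarray(x)
    return np.sum(x**4 - 16 * x**2 + 5 * x) / 2

def decompose(bounds, splits=2):
    """Split a box into splits**dim sub-boxes."""
    edges = [np.linspace(lo, hi, splits + 1) for lo, hi in bounds]
    for corner in itertools.product(*(range(splits) for _ in bounds)):
        yield [(edges[d][c], edges[d][c + 1]) for d, c in enumerate(corner)]

bounds = [(-5.0, 5.0)] * 2
minimizers = []
for sub in decompose(bounds, splits=2):
    # One population of agents per subdomain (differential evolution as a stand-in)
    res = differential_evolution(objective, sub, seed=0, maxiter=200, tol=1e-8)
    minimizers.append((res.fun, res.x))

# Keep the set of low-lying minimizers, not only the global one
for fval, x in sorted(minimizers, key=lambda m: m[0]):
    print(f"f = {fval:.4f} at x = {np.round(x, 3)}")
```

    Because each sub-box is searched independently, every basin contributes its own candidate, which is how the decomposition yields a set of design options instead of a single answer.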

    Stationary probability density of stochastic search processes in global optimization

    Full text link
    A method for the construction of approximate analytical expressions for the stationary marginal densities of general stochastic search processes is proposed. From these marginal densities, regions of the search space that contain the global optima with high probability can be readily defined. The density estimation procedure involves a controlled number of linear operations, with a computational cost per iteration that grows linearly with problem size.
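    The paper's analytical construction is not given in the abstract, so the sketch below only illustrates the underlying concept numerically: run a fixed-temperature Metropolis random walk (a generic stochastic search process), estimate the stationary marginal density of one coordinate from its samples, and threshold that density to delimit a region likely to contain the global optimum. The double-well objective, temperature and threshold are assumptions.

```python
import numpy as np

def objective(x):
    # Asymmetric double well in each coordinate; the deeper wells lie at negative x_i
    return np.sum(x**4 - 3 * x**2 + 0.5 * x)

def metropolis_search(f, dim, n_steps=100_000, step=0.3, temperature=0.5, rng=None):
    """Fixed-temperature Metropolis random walk; its samples approximate a stationary density."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(-2, 2, dim)
    fx = f(x)
    samples = np.empty((n_steps, dim))
    for t in range(n_steps):
        cand = x + step * rng.normal(size=dim)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / temperature):
            x, fx = cand, fc
        samples[t] = x
    return samples

samples = metropolis_search(objective, dim=3, rng=0)

# Empirical stationary marginal density of the first coordinate
hist, edges = np.histogram(samples[:, 0], bins=60, range=(-3, 3), density=True)
high_prob = edges[:-1][hist > 0.5 * hist.max()]
print("high-probability region for x_0 roughly spans",
      (high_prob.min(), high_prob.max()))
```

    The histogram here plays the role that the approximate analytical expressions play in the paper: once a marginal density is in hand, a simple threshold marks where the search concentrates and hence where the global optimum is likely to lie.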