45,258 research outputs found

    A derivative-free algorithm for bound constrained optimization.

    In this work, we propose a new globally convergent derivative-free algorithm for the minimization of a continuously differentiable function in the case that some (or all) of the variables are bounded. The algorithm investigates the local behaviour of the objective function on the feasible set by sampling it along the coordinate directions. Whenever a "suitable" feasible descent coordinate direction is detected, a new point is produced by performing a linesearch along this direction. The information progressively obtained during the iterations of the algorithm can be used to build an approximation model of the objective function. The minimum of such a model is accepted if it produces an improvement of the objective function value. We also derive a bound for the limit accuracy of the algorithm in the minimization of noisy functions. Finally, we report the results of preliminary numerical experiments.
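
    The abstract gives no code; as a rough illustration of the coordinate-direction sampling plus linesearch idea it describes, here is a minimal Python sketch. The sufficient-decrease constant, the expansion linesearch, and all names are our own simplifying assumptions, not the authors' method.

```python
import numpy as np

def coordinate_search(f, x0, lower, upper, alpha0=1.0, theta=0.5,
                      max_iter=200, tol=1e-8):
    """Coordinate search with a simple expansion linesearch on a box."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    alpha = np.full(x.size, alpha0)          # tentative stepsize per coordinate
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for sign in (1.0, -1.0):
                step = sign * alpha[i]
                trial = x.copy()
                trial[i] = np.clip(x[i] + step, lower[i], upper[i])
                ft = f(trial)
                if ft < fx - 1e-4 * alpha[i] ** 2:    # sufficient decrease
                    while True:                        # expand along the direction
                        nxt = trial.copy()
                        nxt[i] = np.clip(trial[i] + step, lower[i], upper[i])
                        fn = f(nxt)
                        if fn >= ft or nxt[i] == trial[i]:
                            break
                        trial, ft = nxt, fn
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            alpha *= theta                    # shrink all tentative stepsizes
            if alpha.max() < tol:
                break
    return x, fx

# usage: minimize a smooth quadratic over the box [0, 2]^2
xs, fs = coordinate_search(lambda z: (z[0] - 1.0)**2 + (z[1] + 0.5)**2,
                           x0=[2.0, 2.0], lower=np.zeros(2), upper=2.0*np.ones(2))
```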

    Active-set strategy in Powell's method for optimization without derivatives

    In this article we present an algorithm for solving bound constrained optimization problems without derivatives, based on Powell's method for derivative-free optimization. First we consider the unconstrained optimization problem. At each iteration, a quadratic interpolation model of the objective function is constructed around the current iterate, and this model is minimized to obtain a new trial point. The whole process is embedded within a trust-region framework. Our algorithm uses the infinity norm instead of the Euclidean norm, and we solve a box constrained quadratic subproblem using an active-set strategy to explore the faces of the box. As a consequence, the method extends easily to bound constrained problems. We compare our implementation with NEWUOA and BOBYQA, Powell's algorithms for unconstrained and bound constrained derivative-free optimization respectively. Numerical experiments show that, in general, our algorithm requires fewer function evaluations than Powell's algorithms. Mathematical subject classification: Primary: 06B10; Secondary: 06D05.
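
    To illustrate why the infinity norm is convenient here: the trust region {s : ||s||_inf <= delta} is itself a box, so its intersection with the bound constraints is again a box, and the model minimization becomes a box-constrained quadratic subproblem. A minimal sketch follows, with scipy's L-BFGS-B used as a stand-in for the paper's active-set solver; all names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def box_trust_region_step(g, H, x, delta, lower, upper):
    """Minimize the model m(s) = g's + 0.5 s'Hs subject to the infinity-norm
    trust region and the bounds; their intersection is a single box."""
    lo = np.maximum(lower - x, -delta)       # combined box for the step s
    hi = np.minimum(upper - x, delta)
    model = lambda s: g @ s + 0.5 * s @ H @ s
    grad = lambda s: g + H @ s
    res = minimize(model, np.zeros_like(x), jac=grad,
                   bounds=list(zip(lo, hi)), method="L-BFGS-B")
    return res.x

# usage on a small model
g = np.array([1.0, -2.0])
H = np.array([[2.0, 0.0], [0.0, 4.0]])
s = box_trust_region_step(g, H, x=np.array([0.5, 0.5]), delta=0.25,
                          lower=np.zeros(2), upper=np.ones(2))
```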

    Hooke and Jeeves based multilevel coordinate search to globally solving nonsmooth problems

    Published in "AIP Conference Proceedings", vol. 1558. In this paper, we present a derivative-free multilevel coordinate search (MCS) approach that relies on the Hooke and Jeeves local search for globally solving bound constrained optimization problems. Numerical experiments show that the proposed algorithm is effective in solving benchmark problems when compared with the well-known solvers MCS and DIRECT. Funding: Fundação para a Ciência e a Tecnologia (FCT).
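
    For context on the local search used here, a minimal sketch of the classical Hooke and Jeeves method adapted to bound constraints; the names and the simplified pattern-move logic are our own, and the paper's multilevel coordinate search wrapper is not shown.

```python
import numpy as np

def hooke_jeeves(f, x0, lower, upper, h0=0.5, shrink=0.5, tol=1e-6):
    """Hooke and Jeeves: exploratory coordinate moves plus a pattern move,
    with every trial point clipped back into the bounds."""
    def explore(base, fbase, h):
        x, fx = base.copy(), fbase
        for i in range(x.size):
            for step in (h, -h):
                trial = x.copy()
                trial[i] = np.clip(x[i] + step, lower[i], upper[i])
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    fx, h = f(x), h0
    while h > tol:
        y, fy = explore(x, fx, h)
        if fy < fx:
            z = np.clip(y + (y - x), lower, upper)   # pattern move
            fz = f(z)
            x, fx = (z, fz) if fz < fy else (y, fy)
        else:
            h *= shrink                               # refine the mesh
    return x, fx
```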

    Hybrid optimization coupling electromagnetism and descent search for engineering problems

    In this paper, we present a new stochastic hybrid technique for constrained global optimization. It combines the electromagnetism-like (EM) mechanism with an approximate descent search, a derivative-free procedure with a high ability to produce a descent direction. Since the original EM algorithm is specifically designed for solving bound constrained problems, the approach adopted herein for handling the constraints of the problem relies on a simple heuristic known as the feasibility and dominance rules. The hybrid EM method is tested on four well-known engineering design problems, and the numerical results demonstrate the effectiveness of the proposed approach.
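
    For illustration, a minimal sketch of the force computation at the heart of the EM mechanism: each point receives a charge from its objective value, better neighbours attract a point and worse ones repel it. The small denominator offsets and all names are our own safeguards and assumptions.

```python
import numpy as np

def em_forces(points, fvals):
    """One EM force evaluation over a population of shape (m, n)."""
    m, n = points.shape
    gap = fvals - fvals.min()
    q = np.exp(-n * gap / (gap.sum() + 1e-12))       # EM-style charges
    F = np.zeros_like(points)
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            d = points[j] - points[i]
            w = q[i] * q[j] / (d @ d + 1e-12)
            F[i] += w * d if fvals[j] < fvals[i] else -w * d
    return F
```

    In a full EM iteration, each point would then move a random step along its normalized total force and be clipped back into the bounds, with the best point held fixed.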

    Derivative-Free Bound-Constrained Optimization for Solving Structured Problems with Surrogate Models

    We propose and analyze a model-based derivative-free (DFO) algorithm for solving bound-constrained optimization problems where the objective function is the composition of a smooth function and a vector of black-box functions. We assume that the black-box functions are smooth and that their evaluation is the computational bottleneck of the algorithm. The distinguishing feature of our algorithm is the use of approximate function values at interpolation points, which can be obtained from an application-specific surrogate model that is cheap to evaluate. As an example, we consider the situation in which a sequence of related optimization problems is solved, and we present a regression-based approximation scheme that uses function values evaluated when solving prior problem instances. In addition, we propose and analyze a new algorithm for obtaining interpolation points that handles unrelaxable bound constraints. Our numerical results show that our algorithm outperforms a state-of-the-art DFO algorithm on a least-squares problem from a chemical engineering application when a history of black-box function evaluations is available.
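
    As an illustration of the regression-based idea (not the authors' scheme), the sketch below fits a cheap separable quadratic surrogate to previously recorded evaluations by least squares and uses it to supply approximate values without new black-box calls. The toy data and all names are assumptions.

```python
import numpy as np

def fit_quadratic_surrogate(X, y):
    """Least-squares fit of a separable quadratic model
    phi(x) = c0 + sum_i a_i x_i + sum_i b_i x_i^2 to past data (X, y)."""
    Phi = np.hstack([np.ones((X.shape[0], 1)), X, X**2])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    def surrogate(x):
        x = np.atleast_2d(x)
        phi = np.hstack([np.ones((x.shape[0], 1)), x, x**2])
        return (phi @ coef).squeeze()
    return surrogate

# usage: reuse 50 evaluations recorded while solving a prior instance
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 3))
y = np.sum((X - 0.2)**2, axis=1)    # stand-in for costly black-box values
cheap = fit_quadratic_surrogate(X, y)
print(cheap(np.zeros(3)))           # approximate value, no black-box call
```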

    Initial Particles Position for PSO, in Bound Constrained Optimization

    We consider the solution of bound constrained optimization problems where the evaluation of the objective function is costly, its derivatives are unavailable, and the use of exact derivative-free algorithms may imply too large a computational burden. There are plenty of real applications, e.g. several design optimization problems [1,2], belonging to the latter class, where the objective function must be treated as a 'black box' and automatic differentiation turns out to be unsuitable. Since the objective function is often obtained as the result of a simulation, it might also be affected by noise, so that the use of finite differences may be definitely harmful. In this paper we consider the use of the evolutionary Particle Swarm Optimization (PSO) algorithm, where the choice of the parameters is inspired by [4], in order to avoid diverging trajectories of the particles and to help the exploration of the feasible set. Moreover, we extend the ideas in [4] and propose a specific set of initial particle positions for the bound constrained problem.
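
    For reference, a minimal PSO sketch for a box-constrained problem. The constriction parameters below are the common Clerc-Kennedy values, which keep particle trajectories from diverging; the uniform random initialization is a stand-in for the specific initial placement the paper proposes, and all names are ours.

```python
import numpy as np

def pso(f, lower, upper, n_particles=20, iters=100, seed=0):
    """Basic PSO on a box, with constriction-factor velocity update."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, size=(n_particles, lower.size))
    v = np.zeros_like(x)
    p = x.copy()                                   # personal bests
    fp = np.apply_along_axis(f, 1, x)
    g = p[fp.argmin()].copy()                      # global best
    chi, c1, c2 = 0.7298, 2.05, 2.05               # Clerc-Kennedy values
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = chi * (v + c1 * r1 * (p - x) + c2 * r2 * (g - x))
        x = np.clip(x + v, lower, upper)           # keep particles feasible
        fx = np.apply_along_axis(f, 1, x)
        better = fx < fp
        p[better], fp[better] = x[better], fx[better]
        g = p[fp.argmin()].copy()
    return g, fp.min()

best, fbest = pso(lambda z: np.sum((z - 0.3)**2),
                  lower=np.zeros(4), upper=np.ones(4))
```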

    Hybridizing the electromagnetism-like algorithm with descent search for solving engineering design problems

    In this paper, we present a new stochastic hybrid technique for constrained global optimization. It combines the electromagnetism-like (EM) mechanism with a random local search, a derivative-free procedure with a high ability to produce a descent direction. Since the original EM algorithm is specifically designed for solving bound constrained problems, the approach adopted herein for handling the inequality constraints of the problem relies on selective conditions that impose a sufficient reduction either in the constraint violation or in the objective function value when comparing two points at a time. The hybrid EM method is tested on a set of benchmark engineering design problems, and the numerical results demonstrate the effectiveness of the proposed approach. A comparison with results from other stochastic methods is also included.
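
    As a rough illustration of such selective conditions (our own simplified sketch, with an assumed sufficient-decrease factor eta, not the paper's exact rules):

```python
def preferred(f1, v1, f2, v2, eta=1e-4):
    """Accept point 1 over point 2 only with a sufficient reduction either
    in the constraint violation v (>= 0) or, at no worse feasibility, in
    the objective f. The tolerance eta is an assumption."""
    if v1 < (1.0 - eta) * v2:                  # clearly less infeasible
        return True
    if v1 <= v2 and f1 < f2 - eta * abs(f2):   # no worse violation, better f
        return True
    return False
```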