30,507 research outputs found

    A Derivative-Free Algorithm for Linearly Constrained Optimization Problems

    Derivative-free optimization is an active area of research because many practical problems demand optimization even when derivatives of the objective are unavailable. In this thesis a new derivative-free algorithm, named LCOBYQA, is developed. Its aim is to find a minimizer x* ∈ R^n of a nonlinear objective function subject to linear inequality constraints. The algorithm is based on the trust-region method and uses well-known techniques such as an active-set version of the truncated conjugate gradient method, multivariate Lagrange polynomial interpolation, and QR factorization. Each iteration constructs a quadratic approximation (model) of the objective function that satisfies the interpolation conditions; the freedom remaining in the model is taken up by minimizing the Frobenius norm of the change to the model's second-derivative matrix. A typical iteration generates a new vector of variables either by minimizing the quadratic model subject to the given constraints and the trust-region bound, or by a procedure intended to improve the accuracy of the model. Numerical results show that LCOBYQA works well and is competitive with available model-based derivative-free algorithms such as CONDOR, COBYLA, UOBYQA, NEWUOA and DFO. Under certain conditions LCOBYQA is observed to be remarkably fast, a behaviour that remains open to further investigation.
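    The minimum-Frobenius-norm model update mentioned in the abstract can be sketched concretely. The Python fragment below is a minimal, hypothetical illustration using a dense KKT formulation (not the efficient updating scheme of LCOBYQA or Powell's solvers): it builds a quadratic model m(x) = c + gᵀx + ½xᵀHx that interpolates given function values while minimizing ‖H − H_prev‖_F.

```python
import numpy as np

def min_frobenius_model(Y, f, H_prev):
    """Quadratic model m(x) = c + g.x + 0.5*x'Hx interpolating f at the
    rows of Y (points centered at the current iterate), with H chosen to
    minimize ||H - H_prev||_F.  Requires n+1 <= m <= (n+1)(n+2)/2 and
    poised (well-placed) points, else the KKT matrix is singular."""
    m, n = Y.shape
    # Stationarity gives H - H_prev = 0.5 * sum_j lam_j * y_j y_j', with
    # multipliers constrained by sum_j lam_j = 0 and sum_j lam_j y_j = 0.
    A = 0.25 * (Y @ Y.T) ** 2              # A_ij = (y_i . y_j)^2 / 4
    K = np.zeros((m + 1 + n, m + 1 + n))   # symmetric KKT matrix
    K[:m, :m] = A
    K[:m, m] = K[m, :m] = 1.0
    K[:m, m + 1:] = Y
    K[m + 1:, :m] = Y.T
    rhs = np.zeros(m + 1 + n)
    rhs[:m] = f - 0.5 * np.einsum('ij,jk,ik->i', Y, H_prev, Y)
    sol = np.linalg.solve(K, rhs)
    lam, c, g = sol[:m], sol[m], sol[m + 1:]
    H = H_prev + 0.5 * np.einsum('j,jk,jl->kl', lam, Y, Y)
    return c, g, H                         # model coefficients
```

    Because m may be well below (n+1)(n+2)/2, the interpolation conditions leave freedom in H; minimizing the Frobenius norm of the change is exactly how the abstract says that freedom is taken up.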

    Inexact restoration method for derivative-free optimization with smooth constraints

    A new method is introduced for solving constrained optimization problems in which the derivatives of the constraints are available but the derivatives of the objective function are not. The method is based on the inexact restoration framework, by means of which each iteration is divided into two phases. In the first phase one considers only the constraints, in order to improve feasibility. In the second phase one minimizes a suitable objective function subject to a linear approximation of the constraints. The second phase must be solved using derivative-free methods. An algorithm introduced recently by Kolda, Lewis, and Torczon for linearly constrained derivative-free optimization is employed for this purpose. Under usual assumptions, convergence to stationary points is proved. A computer implementation is described and numerical experiments are presented.
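    The two-phase structure lends itself to a compact illustration. The Python sketch below is hypothetical and heavily simplified: it substitutes SciPy's Nelder-Mead (with bounds, available in SciPy 1.7+) for the Kolda-Lewis-Torczon generating-set search and omits the merit-function safeguards the convergence theory requires, but it shows how constraint derivatives drive the restoration phase while the objective is treated as a black box.

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

def inexact_restoration(f, c, jac_c, x0, iters=30, radius=1.0, tol=1e-8):
    """Two-phase sketch for min f(x) s.t. c(x) = 0: jac_c (the constraint
    Jacobian) is available, the objective f is a black box."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        # Phase 1 (restoration): Gauss-Newton step toward feasibility,
        # using only the constraint values and derivatives.
        y = x - np.linalg.lstsq(jac_c(x), c(x), rcond=None)[0]
        # Phase 2 (minimization): search tangentially, inside the null
        # space of J(y), so the linearized constraints stay satisfied;
        # the subproblem is solved derivative-free (Nelder-Mead here).
        Z = null_space(jac_c(y))
        if Z.size == 0:                 # constraints leave no freedom
            x = y
            continue
        res = minimize(lambda t: f(y + Z @ t), np.zeros(Z.shape[1]),
                       method='Nelder-Mead',
                       bounds=[(-radius, radius)] * Z.shape[1])
        x_new = y + Z @ res.x
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: black-box objective, one linear constraint x0 + x1 = 1.
# x = inexact_restoration(lambda x: (x[0] - 2)**2 + x[1]**2,
#                         lambda x: np.array([x[0] + x[1] - 1.0]),
#                         lambda x: np.array([[1.0, 1.0]]),
#                         np.zeros(2))
```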

    Pattern Search Ranking and Selection Algorithms for Mixed-Variable Optimization of Stochastic Systems

    A new class of algorithms is introduced and analyzed for bound and linearly constrained optimization problems with stochastic objective functions and a mixture of design variable types. The generalized pattern search (GPS) class of algorithms is extended to a new problem setting in which objective function evaluations require sampling from a model of a stochastic system. The approach combines GPS with ranking and selection (R&S) statistical procedures to select new iterates. The derivative-free algorithms require only black-box simulation responses and are applicable over domains with mixed variables (continuous, discrete numeric, and discrete categorical), including bound and linear constraints on the continuous variables. A convergence analysis for the general class of algorithms establishes almost sure convergence of an iteration subsequence to stationary points appropriately defined in the mixed-variable domain. Additionally, specific algorithm instances are implemented that provide computational enhancements to the basic algorithm. Implementation alternatives include the use of modern R&S procedures designed to provide efficient sampling strategies, and the use of surrogate functions that augment the search by approximating the unknown objective function with nonparametric response surfaces. In a computational evaluation, six variants of the algorithm are tested along with four competing methods on 26 standardized test problems. The numerical results validate the use of advanced implementations as a means to improve algorithm performance.
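    To make the interplay of pattern search and ranking and selection concrete, here is a deliberately simplified Python sketch (hypothetical names; continuous variables only). A fixed sample-mean comparison stands in for a genuine R&S procedure, which would instead allocate replications adaptively to control the probability of correct selection.

```python
import numpy as np

def gps_rs(noisy_f, x0, mesh=1.0, reps=20, iters=200, mesh_tol=1e-6):
    """Pattern search with a crude ranking-and-selection step: every poll
    point is estimated by averaging `reps` noisy simulation responses and
    the smallest sample mean wins.  The paper's method additionally polls
    discrete-variable neighborhoods and uses statistically valid R&S
    procedures instead of this fixed-sample comparison."""
    x = np.asarray(x0, float)
    n = x.size
    D = np.vstack([np.eye(n), -np.eye(n)])    # coordinate poll directions

    def est(p):                               # sample-mean estimate of f(p)
        return np.mean([noisy_f(p) for _ in range(reps)])

    fx = est(x)
    while mesh > mesh_tol and iters > 0:
        iters -= 1
        candidates = [x + mesh * d for d in D]
        means = [est(p) for p in candidates]
        j = int(np.argmin(means))
        if means[j] < fx:                     # successful poll: move
            x, fx = candidates[j], means[j]
        else:                                 # unsuccessful: refine mesh
            mesh *= 0.5
    return x, fx
```

    Refining the mesh only after an unsuccessful poll is the standard GPS mechanism; under noise it is the selection step, not the mesh update, that must be strengthened to obtain the almost sure convergence the abstract claims.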