
    An inexact restoration derivative-free filter method for nonlinear programming

    An inexact restoration derivative-free filter method for nonlinear programming is introduced in this paper. Each iteration is composed of a restoration phase, which reduces a measure of infeasibility, and an optimization phase, which reduces the objective function. The restoration phase is solved with a derivative-free method for underdetermined nonlinear systems with bound constraints, developed previously by the authors. An alternative for solving the optimization phase is also considered. Theoretical convergence results and some preliminary numerical experiments are presented.
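
    The filter mechanism referred to here maintains a list of (infeasibility, objective) pairs and rejects trial points dominated by a stored pair. The sketch below illustrates that acceptance test only in outline; the function names and the margin parameter gamma are illustrative assumptions, not notation from the paper.

        # Sketch of a filter acceptance test on pairs (h, f), where h
        # measures infeasibility and f is the objective value.
        def acceptable(h, f, filter_pairs, gamma=1e-4):
            """Return True if (h, f) is not dominated by any stored pair,
            up to a small margin that enforces sufficient decrease."""
            for h_j, f_j in filter_pairs:
                if h >= (1.0 - gamma) * h_j and f >= f_j - gamma * h_j:
                    return False
            return True

        def add_to_filter(h, f, filter_pairs):
            """Insert (h, f) and discard pairs it dominates."""
            filter_pairs[:] = [(hj, fj) for hj, fj in filter_pairs
                               if hj < h or fj < f]
            filter_pairs.append((h, f))

    In filter methods of this kind, accepted iterates are typically added to the filter so that later trial points cannot return to comparably infeasible regions without improving the objective.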

    Economic inexact restoration for derivative-free expensive function minimization and applications

    The Inexact Restoration approach has proved to be an adequate tool for handling the problem of minimizing an expensive function within an arbitrary feasible set by using different degrees of precision in the objective function. The Inexact Restoration framework allows one to obtain suitable convergence and complexity results for an approach that rationally combines low- and high-precision evaluations. In the present research, it is recognized that many problems with expensive objective functions are nonsmooth and, sometimes, even discontinuous. With this in mind, the Inexact Restoration approach is extended to the nonsmooth or discontinuous case. Although optimization phases that rely on smoothness cannot be used in this case, basic convergence and complexity results are recovered. A derivative-free optimization phase is defined, and the subproblems that arise in this phase are solved using a regularization approach that takes advantage of different notions of stationarity. The new methodology is applied to the problem of reproducing a controlled experiment that mimics the failure of a dam.
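
    The idea of rationally combining low- and high-precision evaluations can be pictured with the small sketch below, in which the evaluation precision is raised only when a cheap evaluation cannot decide whether a required decrease holds. The name evaluate_f, the tolerance levels, and the error bound are illustrative assumptions, not the paper's notation.

        # Sketch: certify f(x_new) <= f(x_old) - required_decrease using
        # the cheapest precision level that can decide the question.
        # evaluate_f(x, eps) stands for any procedure (e.g. a simulation)
        # whose evaluation error is bounded by eps.
        def certified_decrease(evaluate_f, x_old, x_new, required_decrease,
                               eps_levels=(1e-1, 1e-3, 1e-6)):
            for eps in eps_levels:
                f_old = evaluate_f(x_old, eps)
                f_new = evaluate_f(x_new, eps)
                if f_new + 2 * eps <= f_old - required_decrease:
                    return True    # decrease certified at low cost
                if f_new - 2 * eps > f_old - required_decrease:
                    return False   # decrease refuted even allowing for error
            return False           # undecided at the highest precision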

    Inexact restoration method for derivative-free optimization with smooth constraints

    A new method is introduced for solving constrained optimization problems in which the derivatives of the constraints are available but the derivatives of the objective function are not. The method is based on the inexact restoration framework, by means of which each iteration is divided into two phases. In the first phase one considers only the constraints, in order to improve feasibility. In the second phase one minimizes a suitable objective function subject to a linear approximation of the constraints. The second phase must be solved using derivative-free methods. An algorithm introduced recently by Kolda, Lewis, and Torczon for linearly constrained derivative-free optimization is employed for this purpose. Under usual assumptions, convergence to stationary points is proved. A computer implementation is described and numerical experiments are presented.
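
    For concreteness, one iteration of such a two-phase scheme on a problem min f(x) subject to c(x) = 0 might be organized as sketched below. The Gauss-Newton restoration step and the penalized Nelder-Mead solve are stand-ins, under our own assumptions, for the dedicated restoration procedure and the linearly constrained derivative-free solver of Kolda, Lewis, and Torczon used in the paper.

        # Sketch of one inexact-restoration iteration for
        #   min f(x)  s.t.  c(x) = 0,
        # with the Jacobian J of c available but no derivatives of f.
        import numpy as np
        from scipy.optimize import minimize

        def ir_iteration(f, c, J, x, r=0.5):
            # Restoration phase: reduce ||c|| with Gauss-Newton steps until
            # the infeasibility falls below a fraction r of its current value.
            y = x.copy()
            target = r * np.linalg.norm(c(x))
            for _ in range(20):
                cy = c(y)
                if np.linalg.norm(cy) <= target:
                    break
                step, *_ = np.linalg.lstsq(J(y), -cy, rcond=None)
                y = y + step

            # Optimization phase: minimize f over the linearization
            # J(y) d = 0, handled here by a derivative-free method applied
            # to a quadratic penalty of the linearized constraints.
            A = J(y)

            def penalized(d):
                return f(y + d) + 1e6 * np.linalg.norm(A @ d) ** 2

            d = minimize(penalized, np.zeros_like(y), method="Nelder-Mead").x
            return y + d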

    Assessing the reliability of general-purpose Inexact Restoration methods

    Inexact Restoration methods have proved to be effective for solving constrained optimization problems in which some structure of the feasible set induces a natural way of recovering feasibility from arbitrary infeasible points. Sometimes natural ways of dealing with minimization over tangent approximations of the feasible set are also employed. A recent paper [Banihashemi and Kaya (2013)] suggests that the Inexact Restoration approach can be competitive with well-established nonlinear programming solvers when applied to certain control problems without any problem-oriented procedure for restoring feasibility. This result motivated us to revisit the idea of designing general-purpose Inexact Restoration methods, especially for large-scale problems. In this paper we introduce affordable algorithms of Inexact Restoration type for solving arbitrary nonlinear programming problems and perform the first experiments that aim to assess their reliability. Initially, we define a purely local Inexact Restoration algorithm with quadratic convergence. Then, we modify the local algorithm in order to increase the chances of success of both the restoration and the optimization phases. This hybrid algorithm is intermediate between the local algorithm and a globally convergent one for which, under suitable assumptions, convergence to KKT points can be proved.
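
    Globalization in Inexact Restoration methods is typically driven by a convex combination of the objective and an infeasibility measure, with a penalty weight theta that is only ever decreased. A minimal sketch of such an acceptance test follows; the constants and the update rule are illustrative, not the exact formulas of the paper.

        # Sketch of a merit-function acceptance test for globalizing
        # inexact-restoration steps: theta weighs objective against
        # infeasibility and is reduced when the trial step is rejected.
        def merit(f_val, infeas, theta):
            return theta * f_val + (1.0 - theta) * infeas

        def accept_step(f_x, h_x, f_trial, h_trial, theta, eta=0.5):
            """Accept if the merit decreases by a fraction eta of the
            feasibility gain obtained in the restoration phase; otherwise
            shrink theta so that infeasibility is weighted more heavily."""
            required = eta * (h_x - h_trial)
            while theta > 1e-12:
                if merit(f_trial, h_trial, theta) <= merit(f_x, h_x, theta) - required:
                    return True, theta
                theta *= 0.5
            return False, theta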

    Multi-Objective Trust-Region Filter Method for Nonlinear Constraints using Inexact Gradients

    In this article, we build on previous work to present an optimization algorithm for nonlinearly constrained multi-objective optimization problems. The algorithm combines a surrogate-assisted derivative-free trust-region approach with the filter method known from single-objective optimization. Instead of the true objective and constraint functions, so-called fully linear models are employed, and we show how to deal with the gradient inexactness in the composite-step setting, also adapted from single-objective optimization. Under standard assumptions, we prove convergence of a subset of iterates to a quasi-stationary point and, if constraint qualifications hold, that the limit point is also a KKT point of the multi-objective problem.
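
    A fully linear model, in the sense used here, is a surrogate whose value and gradient errors are bounded in proportion to the trust-region radius. The simplest construction of that type, a linear least-squares fit on sample points inside the trust region, is sketched below with illustrative names; it is not the specific model-building procedure of the article.

        # Sketch: build a linear surrogate m(x) = f0 + g @ (x - x0) of a
        # black-box function from samples in the current trust region.
        # With well-spread samples such a model is fully linear:
        # |m - f| = O(radius^2) and ||grad m - grad f|| = O(radius).
        import numpy as np

        def linear_model(f, x0, radius, n_samples=None, seed=None):
            rng = np.random.default_rng(seed)
            n = x0.size
            n_samples = n_samples or 2 * n
            shifts = rng.uniform(-radius, radius, size=(n_samples, n))
            f0 = f(x0)
            fvals = np.array([f(x0 + s) for s in shifts])
            # Least-squares fit of g from f(x0 + s) - f(x0) ~ g @ s.
            g, *_ = np.linalg.lstsq(shifts, fvals - f0, rcond=None)
            return f0, g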