
    Nonlinear programming without a penalty function or a filter

    A new method is introduced for solving equality-constrained nonlinear optimization problems. The method uses neither a penalty function nor a barrier or filter, yet it can be proved to be globally convergent to first-order stationary points. It uses separate trust regions to cope with the nonlinearities of the objective function and of the constraints, and it allows inexact SQP steps that do not lie exactly in the nullspace of the local Jacobian. Preliminary numerical experiments on CUTEr problems indicate that the method performs well.
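    The trust-region mechanism mentioned above rests on a ratio test between actual and predicted reduction. A minimal sketch of such a radius update (the standard textbook rule, not the paper's specific double-trust-region scheme; all names and parameter values are illustrative):

    ```python
    def update_radius(rho, radius, step_norm, eta1=0.25, eta2=0.75,
                      shrink=0.5, grow=2.0, max_radius=1e3):
        """Generic trust-region radius update from the ratio rho of actual
        to predicted reduction.  (Illustrative sketch only; the paper uses
        separate regions for the objective and the constraints.)"""
        if rho < eta1:            # poor model agreement: shrink around the step
            return shrink * step_norm
        if rho > eta2:            # very good agreement: allow a larger region
            return min(grow * radius, max_radius)
        return radius             # otherwise keep the radius unchanged
    ```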

    A new double trust regions SQP method without a penalty function or a filter


    A globally convergent filter-trust-region method for large deformation contact problems

    We present a globally convergent method for the solution of frictionless large deformation contact problems for hyperelastic materials. The discretization uses the mortar method, which is known to be more stable than node-to-segment approaches. The resulting nonconvex constrained minimization problems are solved using a filter trust-region scheme, and we prove global convergence towards first-order optimal points. The constrained Newton problems are solved robustly and efficiently using a truncated nonsmooth Newton multigrid method with a monotone multigrid linear correction step. For this we introduce a cheap basis transformation that decouples the contact constraints. Numerical experiments confirm the stability and efficiency of our approach.

    A filter algorithm: comparison with NLP solvers

    The purpose of this work is to present an algorithm to solve nonlinear constrained optimization problems, using the filter method with the inexact restoration (IR) approach. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The first directs the iterative process towards the feasible region, i.e. it finds a point with smaller constraint violation. The optimality phase starts from this point, and its goal is to optimize the objective function over the set where the constraints are already satisfied. To evaluate the approximate solutions at each iteration, a scheme based on the filter method is used in both phases of the algorithm. This method replaces merit functions based on penalty schemes, avoiding the related difficulties, such as estimating the penalty parameter and the non-differentiability of some of these functions. The filter method is implemented in the context of the line-search globalization technique. A set of more than two hundred AMPL test problems is solved, and the algorithm developed is compared with the LOQO and NPSOL software packages. Funded by Fundação para a Ciência e a Tecnologia (FCT).
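    The two-phase structure of each IR iteration can be sketched as follows. This is a hypothetical skeleton: `restore`, `minimize_tangent`, and `acceptable` are placeholders standing in for the paper's feasibility phase, optimality phase, and filter acceptance test.

    ```python
    def ir_iteration(x, restore, minimize_tangent, acceptable, filt, h, f):
        """One inexact-restoration iteration: a feasibility phase followed
        by an optimality phase, each vetted by the filter.  (Hypothetical
        skeleton; the callables are illustrative placeholders, not the
        paper's actual procedures.)"""
        # Feasibility phase: move towards the feasible region,
        # i.e. find a point with smaller constraint violation h.
        z = restore(x)
        if not acceptable(h(z), f(z), filt):
            return x, filt                # rejected: keep the current iterate
        # Optimality phase: improve f starting from the restored point.
        y = minimize_tangent(z)
        if acceptable(h(y), f(y), filt):
            filt = filt + [(h(y), f(y))]  # accept and update the filter
            return y, filt
        return z, filt                    # fall back to the restored point
    ```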

    A Filter Algorithm with Inexact Line Search

    A filter algorithm with inexact line search is proposed for solving nonlinear programming problems. The filter is constructed by pairing the norm of the gradient of the Lagrangian function with the infeasibility measure. Transition to superlinear local convergence is shown for the proposed filter algorithm without second-order correction. Under mild conditions, global convergence can also be derived. Numerical experiments show the efficiency of the algorithm.
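    The filter acceptance test underlying these methods can be sketched as follows. This is a generic sketch of a common filter formulation (the margin parameter `gamma` and the exact dominance test are illustrative, not necessarily this paper's): a trial point, measured by infeasibility `h` and an optimality measure `f` (here the norm of the gradient of the Lagrangian), is rejected if some stored pair dominates it.

    ```python
    def is_acceptable(h, f, filter_entries, gamma=1e-5):
        """Return True if the pair (h, f) is not dominated by any stored
        filter entry.  (Generic sketch with a small margin gamma; slanting
        variants tilt this test.)"""
        for h_i, f_i in filter_entries:
            # Entry (h_i, f_i) dominates (h, f) when the trial point
            # improves neither measure by the required margin.
            if h >= (1 - gamma) * h_i and f >= f_i - gamma * h_i:
                return False
        return True

    def add_to_filter(h, f, filter_entries):
        """Insert (h, f) into the filter and drop entries it dominates."""
        kept = [(h_i, f_i) for h_i, f_i in filter_entries
                if h_i < h or f_i < f]
        kept.append((h, f))
        return kept
    ```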

    Combining filter method and dynamically dimensioned search for constrained global optimization

    In this work we present an algorithm that combines the filter technique with dynamically dimensioned search (DDS) for solving nonlinear, nonconvex constrained global optimization problems. DDS is a stochastic global algorithm for bound-constrained problems that, in each iteration, generates a trial point by randomly perturbing some coordinates of the current best point. The filter technique controls progress with respect to optimality and feasibility by defining a forbidden region of points rejected by the algorithm; this region can be given by the flat or by the slanting filter rule. The proposed algorithm does not compute or approximate any derivatives of the objective and constraint functions. Preliminary experiments show that the proposed algorithm gives competitive results when compared with other methods. The first author thanks the International Cooperation Program CAPES/COFECUB at the University of Minho for a scholarship. The second and third authors thank FCT (Fundação para a Ciência e Tecnologia, Portugal) for support within the projects UID/MAT/00013/2013 and UID/CEC/00319/2013. The fourth author was partially supported by CNPq-Brazil grants 308957/2014-8 and 401288/2014-5.
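    The DDS perturbation step described above can be sketched as follows. This is a hypothetical sketch under common DDS conventions: the function name, the neighborhood factor `r`, and the logarithmic decay of the perturbation probability are illustrative, not taken from the paper.

    ```python
    import math
    import random

    def dds_trial_point(best, lower, upper, iteration, max_iter, r=0.2):
        """Generate one DDS trial point by perturbing a randomly chosen
        subset of coordinates of the current best point.  (Illustrative
        sketch; parameter values are assumptions.)"""
        n = len(best)
        # The probability of perturbing each coordinate decays with the
        # iteration count, so the search dimension shrinks over time.
        p = 1.0 - math.log(iteration) / math.log(max_iter)
        idx = [i for i in range(n) if random.random() < p]
        if not idx:                       # always perturb at least one coordinate
            idx = [random.randrange(n)]
        trial = list(best)
        for i in idx:
            # Gaussian step scaled by the width of the bound box.
            step = random.gauss(0.0, r * (upper[i] - lower[i]))
            x = trial[i] + step
            if x < lower[i]:              # reflect at the lower bound ...
                x = 2.0 * lower[i] - x
            if x > upper[i]:              # ... and at the upper bound
                x = 2.0 * upper[i] - x
            trial[i] = min(max(x, lower[i]), upper[i])  # clamp as a safeguard
        return trial
    ```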

    A Sequential Quadratic Programming Method for Optimization with Stochastic Objective Functions, Deterministic Inequality Constraints and Robust Subproblems

    In this paper, the robust sequential quadratic programming method of [1] for constrained optimization is generalized to problems with a stochastic objective function and deterministic equality and inequality constraints. A stochastic line-search scheme from [2] is employed to globalize the steps. We show that, when the algorithm fails to terminate in a finite number of iterations, the sequence of iterates converges almost surely to a Karush-Kuhn-Tucker point under an extended Mangasarian-Fromovitz constraint qualification. We also show that, with a specific sampling method, the probability of the penalty parameter tending to infinity is zero. Encouraging numerical results are reported.
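    A line search of the kind used to globalize such steps can be sketched generically as backtracking on a (possibly sampled) objective estimate. This is an illustrative sketch only, not the stochastic scheme of [2], which adds step-size and sample-size controls omitted here; all names and parameter values are assumptions.

    ```python
    def backtracking(x, d, g, f_est, alpha=1.0, beta=0.5, c=1e-4, max_tries=30):
        """Generic Armijo backtracking on an objective estimate f_est along
        direction d, with estimated gradient g at x.  (Illustrative sketch;
        a stochastic scheme would also manage estimate accuracy.)"""
        fx = f_est(x)
        slope = sum(gi * di for gi, di in zip(g, d))    # directional derivative
        for _ in range(max_tries):
            trial = [xi + alpha * di for xi, di in zip(x, d)]
            if f_est(trial) <= fx + c * alpha * slope:  # Armijo condition
                return alpha, trial
            alpha *= beta                               # shrink the step
        return 0.0, x                                   # no acceptable step found
    ```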