
    A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties

    A scheme, inspired by an old idea due to Mayne and Polak (Math. Prog., vol. 11, 1976, pp. 67-80), is proposed for extending to general smooth constrained optimization problems a previously proposed feasible interior-point method for inequality constrained problems. It is shown that the primal-dual interior-point framework allows for a significantly more effective implementation of the Mayne-Polak idea than that discussed and analyzed by the originators in the context of first-order methods of feasible directions. Strong global and local convergence results are proved under mild assumptions. In particular, the proposed algorithm does not suffer from the Wächter-Biegler effect.
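    For reference, the perturbed Karush-Kuhn-Tucker system at the heart of a primal-dual interior-point iteration for the inequality constrained case reads as follows (a standard statement of the framework, not quoted from the paper; the extension to equality constraints is the subject of the abstract above):

```latex
% Problem:  min_x f(x)  s.t.  g_j(x) <= 0,  j = 1,...,m
\nabla f(x) + \sum_{j=1}^{m} \lambda_j \nabla g_j(x) = 0, \qquad
-\,g_j(x)\,\lambda_j = \mu \ \ (j = 1,\dots,m), \qquad g(x) < 0,\ \lambda > 0,
```

    where the barrier parameter $\mu > 0$ is driven to zero; a feasible method keeps $g(x) < 0$ at every iterate.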

    User's Guide for FSQP Version 2.0 A Fortran Code for Solving Optimization Problems, Possibly Minimax, with General Inequality Constraints and Linear Equality Constraints, Generating Feasible Iterates

    FSQP 2.0 is a set of Fortran subroutines for the minimization of the maximum of a set of smooth objective functions (possibly a single one) subject to nonlinear smooth inequality constraints, linear inequality and linear equality constraints, and simple bounds on the variables. If the initial guess provided by the user is infeasible, FSQP first generates a feasible point; subsequently the successive iterates generated by FSQP all satisfy the constraints. The user has the option of requiring that the maximum value among the objective functions decrease at each iteration after feasibility has been reached (monotone line search). He/She must provide subroutines that define the objective functions and constraint functions and may either provide subroutines to compute the gradients of these functions or require that FSQP estimate them by forward finite differences. FSQP 2.0 implements two algorithms based on Sequential Quadratic Programming (SQP), modified so as to generate feasible iterates. In the first one (monotone line search), a certain Armijo-type arc search is used with the property that the step of one is eventually accepted, a requirement for superlinear convergence. In the second one the same effect is achieved by means of a (nonmonotone) search along a straight line. The merit function used in both searches is the maximum of the objective functions.
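    As a rough illustration of the monotone search just described (a simplified straight-line backtracking sketch, not FSQP's actual Armijo-type arc search; the function names below are assumptions), a trial step is accepted only if the new point remains feasible and the maximum of the objectives decreases sufficiently:

```python
import numpy as np

def max_objective(fs, x):
    """Maximum of a set of smooth objective functions at x."""
    return max(f(x) for f in fs)

def feasible(gs, x):
    """True if all inequality constraints g_i(x) <= 0 hold at x."""
    return all(g(x) <= 0.0 for g in gs)

def monotone_search(fs, gs, x, d, slope, alpha=0.1, beta=0.5):
    """Backtracking search on the merit function max_i f_i.

    Starts from the full step t = 1 (which must eventually be accepted
    for superlinear convergence) and shrinks it until the trial point is
    feasible and the merit function has decreased sufficiently.  `slope`
    is an estimate of the directional derivative of max_i f_i along the
    search direction d (assumed negative).
    """
    phi0 = max_objective(fs, x)
    t = 1.0
    while t > 1e-12:
        x_trial = x + t * np.asarray(d)
        if feasible(gs, x_trial) and \
           max_objective(fs, x_trial) <= phi0 + alpha * t * slope:
            return x_trial, t
        t *= beta  # reduce the step and try again
    return x, 0.0
```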

    An SQP Algorithm for Finely Discretized Continuous Minimax Problems and Other Minimax Problems with Many Objective Functions

    A common strategy for achieving global convergence in the solution of semi-infinite programming (SIP) problems, and in particular of continuous minimax problems, is to (approximately) solve a sequence of discretized problems, with a progressively finer discretization mesh. Finely discretized minimax and SIP problems, as well as other problems with many more objectives/constraints than variables, call for algorithms in which successive search directions are computed based on a small but significant subset of the objectives/constraints, with ensuing reduced computing cost per iteration and decreased risk of numerical difficulties. In this paper, an SQP-type algorithm is proposed that incorporates this idea in the particular case of minimax problems. The general case will be considered in a separate paper. The quadratic programming subproblem that yields the search direction involves only a small subset of the objective functions. This subset is updated at each iteration in such a way that global convergence is ensured. Heuristics are suggested that take advantage of a possible close relationship between "adjacent" objective functions. Numerical results demonstrate the efficiency of the proposed algorithm.
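    One simple form such a subset update could take is an epsilon-active rule (an illustrative assumption, not the paper's actual selection rule or heuristics): only the objectives whose value is within a tolerance of the current maximum enter the QP subproblem, and the selection is redone at every iteration.

```python
def working_set(f_values, eps=1e-3):
    """Indices of the epsilon-active objectives at the current iterate.

    f_values : values f_i(x) of all (possibly many thousands of)
               discretized objectives at the current point x.
    Only the returned handful of indices would be passed to the quadratic
    program that computes the next search direction.
    """
    f_max = max(f_values)
    return [i for i, fi in enumerate(f_values) if fi >= f_max - eps]
```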

    Fast Feasible Direction Methods, with Engineering Applications

    Optimization problems arising in engineering applications often present distinctive features that are not exploited, or not accounted for, in standard numerical optimization algorithms and software codes. First, in many cases, equality constraints are not present, or can be simply eliminated. Second, there are several instances where it is advantageous, or even crucial, that, once a feasible point has been achieved, all subsequent iterates be feasible as well. Third, many optimization problems arising in engineering are best formulated as constrained minimax problems. Fourth, some specifications must be achieved over a range of values of an independent parameter (functional constraints). While various other distinctive features arise in optimization problems found in specific classes of engineering problems, this paper focuses on those identified above, as they have been the object of special attention by the authors and their co-workers in recent years. Specifically, a basic scheme for efficiently tackling inequality constrained optimization while forcing feasible iterates is discussed and various extensions are proposed to handle the distinctive features just pointed out.

    A Note on the Positive Definiteness of BFGS Update in Constrained Optimization

    This note reviews a few existing methods for maintaining the positive definiteness of the BFGS update in constrained optimization, and their impact on both global and local convergence. The boundedness of the matrix from above is also briefly addressed. Some new strategies are proposed. Convergence analysis and numerical examples are not included.
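    One widely used strategy of the kind such a review would cover is Powell's damped BFGS update, which modifies the gradient-difference vector so that the curvature condition holds and positive definiteness is preserved; the sketch below is a generic implementation of that classical damping rule, not necessarily one of the strategies proposed in the note.

```python
import numpy as np

def damped_bfgs_update(B, s, y, damping=0.2):
    """Powell-damped BFGS update of a positive definite approximation B.

    s : step x_{k+1} - x_k
    y : change in the gradient of the Lagrangian between the two iterates
    y is replaced by r = theta*y + (1 - theta)*B s, with theta chosen so
    that s^T r >= damping * s^T B s > 0; this keeps the updated matrix
    positive definite even when s^T y <= 0, which can occur in
    constrained problems.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= damping * sBs:
        theta = 1.0
    else:
        theta = (1.0 - damping) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)
```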

    On Feasibility, Descent and Superlinear Convergence in Inequality Constrained Optimization.

    Extension of quasi-Newton techniques from unconstrained to constrained optimization via Sequential Quadratic Programming (SQP) presents several difficulties. Among these are the possible inconsistency, away from the solution, of first order approximations to the constraints, resulting in infeasibility of the quadratic programs; and the task of selecting a suitable merit function, to induce global convergence. In the case of inequality constrained optimization, both of these difficulties disappear if the algorithm is forced to generate iterates that all satisfy the constraints, and that yield monotonically decreasing objective function values. It has been recently shown that this can be achieved while preserving local superlinear convergence. In this note, the essential ingredients for an SQP-based method exhibiting the desired properties are highlighted. Correspondingly, a class of such algorithms is described and analyzed.
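    For reference, the quadratic programming subproblem whose possible inconsistency is mentioned above has the standard SQP form (a generic statement, with $H_k$ a positive definite approximation of the Hessian of the Lagrangian):

```latex
\min_{d}\ \nabla f(x_k)^{T} d + \tfrac{1}{2}\, d^{T} H_k\, d
\quad \text{s.t.} \quad g_j(x_k) + \nabla g_j(x_k)^{T} d \le 0, \quad j = 1,\dots,m.
```

    When every iterate $x_k$ is feasible, $d = 0$ satisfies all the linearized constraints, so the subproblem can never be inconsistent; this is the first of the two difficulties that disappear in the feasible-iterate setting, and the monotone decrease of the objective removes the need for a separate merit function.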

    On Robust Eigenvalue Location.

    The concepts of guardian and semiguardian maps were recently introduced as tools for assessing robust generalized stability of parametrized families of matrices or polynomials. Necessary and sufficient conditions were obtained for stability of parametrized families with respect to a large class of open subsets of the complex plane, namely those with which one can associate a polynomic guardian or semiguardian map. This note focuses on a class of disconnected subsets of the complex plane, of interest in the context of dominant pole assignment and filter design. It is first observed that the robust stability conditions originally put forth are in fact necessary and sufficient for the number of eigenvalues (matrices) or zeros (polynomials) in any given connected component to be the same for all members of the given family. Polynomic semiguardian maps are then identified for a class of disconnected regions of interest. These maps are in fact "essentially guarding" with respect to one-parameter families.
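    For context, the standard definition and classical example from the guardian-map literature are recalled below (general background, not specific to this note): a scalar-valued map $\nu$ guards an open set $S$ of matrices if, for matrices in the closure of $S$, it vanishes exactly on the boundary of $S$.

```latex
% Definition: \nu guards S  iff  for all A in the closure of S,
%   \nu(A) = 0  \iff  A \in \partial S.
% Classical example (Hurwitz stability, S = matrices with eigenvalues in the open left half-plane):
\nu(A) = \det(A \oplus A), \qquad A \oplus A = A \otimes I + I \otimes A,
% whose value vanishes exactly when some pairwise sum of eigenvalues
% \lambda_i + \lambda_j is zero, i.e., for A in the closure of S, exactly
% when A has an eigenvalue on the imaginary axis.
```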

    User's Guide for FSQP Version 3.0c: A FORTRAN Code for Solving Constrained Nonlinear (Minimax) Optimization Problems, Generating Iterates Satisfying All Inequality and Linear Constraints

    FSQP 3.0c is a set of FORTRAN subroutines for the minimization of the maximum of a set of smooth objective functions (possibly a single one) subject to general smooth constraints. If the initial guess provided by the user is infeasible for some inequality constraint or some linear equality constraint, FSQP first generates a feasible point for these constraints; subsequently the successive iterates generated by FSQP all satisfy these constraints. Nonlinear equality constraints are turned into inequality constraints (to be satisfied by all iterates) and the maximum of the objective functions is replaced by an exact penalty function which penalizes nonlinear equality constraint violations only. The user has the option of either requiring that the (modified) objective function decrease at each iteration after feasibility for nonlinear inequality and linear constraints has been reached (monotone line search), or requiring a decrease within at most four iterations (nonmonotone line search). He/She must provide subroutines that define the objective functions and constraint functions and may either provide subroutines to compute the gradients of these functions or require that FSQP estimate them by forward finite differences. FSQP 3.0c implements two algorithms based on Sequential Quadratic Programming (SQP), modified so as to generate feasible iterates. In the first one (monotone line search), a certain Armijo-type arc search is used with the property that the step of one is eventually accepted, a requirement for superlinear convergence. In the second one the same effect is achieved by means of a (nonmonotone) search along a straight line. The merit function used in both searches is the maximum of the objective functions if there is no nonlinear equality constraint.
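    A sketch of the transformation of nonlinear equality constraints described above (the precise penalty used by FSQP 3.0c may differ in detail; the notation below is an assumption): each equality $h_j(x) = 0$ is replaced by the inequality $h_j(x) \le 0$, which every iterate satisfies, and the objective is augmented so that the remaining one-sided violation is penalized.

```latex
\min_x \ \max_i f_i(x) \;-\; \sum_j p_j\, h_j(x),
\qquad p_j > 0, \quad h_j(x) \le 0 \ \text{at every iterate},
```

    so that driving the penalty term to zero recovers $h_j(x) = 0$ at a solution.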

    A Superlinearly Convergent Method of Feasible Directions for Optimization Problems Arising in the Design of Engineering Systems.

    Optimization problems arising from engineering design problems often involve the solution of one or several constrained minimax optimization problems. It is sometimes crucial that all iterates constructed when solving such problems satisfy a given set of 'hard' inequality constraints, and generally desirable that the (maximum) objective function value improve at each iteration. In this paper, we propose an algorithm of the sequential quadratic programming (SQP) type that enjoys such properties. This algorithm is inspired by an algorithm recently proposed for the solution of single objective constrained optimization problems. Preliminary numerical results are very promising.
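    The problem class in question can be stated generically as (a standard formulation, not quoted from the paper):

```latex
\min_{x \in \mathbb{R}^n} \ \max_{1 \le i \le p} f_i(x)
\quad \text{s.t.} \quad g_j(x) \le 0, \quad j = 1,\dots,m,
```

    with the additional requirements that every iterate satisfy the 'hard' constraints $g_j(x) \le 0$ and that the maximum objective value decrease monotonically from one iteration to the next.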

    On Phase Information in Multivariable Systems

    The "median phase" and "phase spread" of a matrix are defined and properties are derived. The question of robust stability under uncertainty with phase information is addressed and a corresponding necessary and sufficient condition is given. This condition involves a "phase sensitive singular value". A computable upper bound to this quantity is obtained. The case when the uncertainty is block-structured is also considered