
    A two-phase gradient method for quadratic programming problems with a single linear constraint and bounds on the variables

    We propose a gradient-based method for quadratic programming problems with a single linear constraint and bounds on the variables. Inspired by the GPCG algorithm for bound-constrained convex quadratic programming [J.J. Moré and G. Toraldo, SIAM J. Optim. 1, 1991], our approach alternates between two phases until convergence: an identification phase, which performs gradient projection iterations until either a candidate active set is identified or no reasonable progress is made, and an unconstrained minimization phase, which reduces the objective function in a suitable space defined by the identification phase, applying either the conjugate gradient method or a recently proposed spectral gradient method. The algorithm differs from GPCG not only because it deals with a more general class of problems, but mainly in the way it stops the minimization phase: the stopping rule compares a measure of optimality in the reduced space with a measure of bindingness of the variables at their bounds, obtained by extending the concept of proportioning originally proposed for box-constrained problems. If the objective function is bounded, the algorithm converges to a stationary point thanks to a suitable application of the gradient projection method in the identification phase. For strictly convex problems, the algorithm converges to the optimal solution in a finite number of steps, even in the case of degeneracy. Extensive numerical experiments show the effectiveness of the proposed approach. Comment: 30 pages, 17 figures
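
    The two-phase structure is easiest to see in the simpler bound-constrained setting of the GPCG reference. Below is a minimal Python sketch of that alternation (projected-gradient identification followed by conjugate gradient on the free variables), assuming a strictly convex Q; the step rules, stopping tests, and the extension to the extra linear constraint are simplified stand-ins, not the authors' exact criteria.

    import numpy as np

    def project(x, l, u):
        return np.clip(x, l, u)

    def two_phase_qp(Q, c, l, u, x0, tol=1e-8, max_outer=100):
        x = project(np.asarray(x0, float), l, u)
        for _ in range(max_outer):
            # Phase 1: identification via projected-gradient (Cauchy) steps,
            # stopped when the active set repeats between iterations.
            for _ in range(10):
                g = Q @ x + c
                if np.linalg.norm(g) < tol:
                    break
                alpha = (g @ g) / (g @ Q @ g)
                x_new = project(x - alpha * g, l, u)
                settled = np.array_equal((x_new <= l) | (x_new >= u),
                                         (x <= l) | (x >= u))
                x = x_new
                if settled:
                    break
            free = ~((x <= l) | (x >= u))
            F = np.flatnonzero(free)
            # Phase 2: conjugate gradient on the face of free variables.
            if F.size:
                Qf = Q[np.ix_(F, F)]
                cf = c[F] + Q[np.ix_(F, np.flatnonzero(~free))] @ x[~free]
                r = -(Qf @ x[F] + cf)
                p = r.copy()
                for _ in range(F.size):
                    Qp = Qf @ p
                    a = (r @ r) / (p @ Qp)
                    x[F] += a * p
                    r_new = r - a * Qp
                    if np.linalg.norm(r_new) < tol:
                        break
                    p = r_new + ((r_new @ r_new) / (r @ r)) * p
                    r = r_new
                x = project(x, l, u)  # crude guard; GPCG truncates CG instead
            g = Q @ x + c
            if np.linalg.norm(project(x - g, l, u) - x) < tol:
                break                 # projected-gradient optimality measure
        return x

    # Example on hypothetical data: a 3-variable strictly convex QP with bounds.
    Q = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]])
    c = np.array([-1.0, -2.0, 0.5])
    print(two_phase_qp(Q, c, np.zeros(3), np.ones(3), np.full(3, 0.5)))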

    An Active-Set Algorithmic Framework for Non-Convex Optimization Problems over the Simplex

    In this paper, we describe a new active-set algorithmic framework for minimizing a non-convex function over the unit simplex. At each iteration, the method makes use of a rule for identifying active variables (i.e., variables that are zero at a stationary point) and of specific directions (which we call active-set gradient related directions) satisfying a new "nonorthogonality" type of condition. We prove global convergence to stationary points when an Armijo line search is used in the given framework. We further describe three different examples of active-set gradient related directions that guarantee a linear convergence rate (under suitable assumptions). Finally, we report numerical experiments showing the effectiveness of the approach. Comment: 29 pages, 3 figures
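
    A minimal Python sketch of this pattern, assuming a generic identification rule (near-zero variables with positive reduced cost) and plain projected-gradient directions in place of the paper's active-set gradient related directions; proj_simplex, active_set_pg, and the test problem are illustrative names and data, not taken from the paper.

    import numpy as np

    def proj_simplex(v):
        """Euclidean projection onto {x >= 0, sum(x) = 1} (sort-based)."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.flatnonzero(u + (1.0 - css) / np.arange(1, v.size + 1) > 0)[-1]
        return np.maximum(v + (1.0 - css[rho]) / (rho + 1.0), 0.0)

    def active_set_pg(f, grad, x0, eps=1e-6, tol=1e-8, max_iter=500):
        x = proj_simplex(np.asarray(x0, float))
        for _ in range(max_iter):
            g = grad(x)
            # Identification: variables estimated to be zero at a stationary
            # point (near the bound with positive reduced cost g_i - g'x).
            active = (x <= eps) & (g - g @ x > 0)
            free = ~active
            # Gradient step on the free variables, projected back onto the
            # face of the simplex where active variables are fixed at zero.
            y = np.zeros_like(x)
            y[free] = proj_simplex((x - g)[free])
            d = y - x
            if np.linalg.norm(d) < tol:
                break
            # Armijo backtracking line search along the feasible direction d.
            t, fx, slope = 1.0, f(x), g @ d
            while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
                t *= 0.5
            x = x + t * d
        return x

    # Example on hypothetical data: an indefinite quadratic over the simplex.
    Q = np.array([[1.0, -2.0, 0.0], [-2.0, 1.0, 0.5], [0.0, 0.5, -1.0]])
    x_star = active_set_pg(lambda x: 0.5 * x @ Q @ x, lambda x: Q @ x,
                           np.ones(3) / 3)
    print(np.round(x_star, 4))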

    Solution of Optimal Power Flow Problems using Moment Relaxations Augmented with Objective Function Penalization

    The optimal power flow (OPF) problem minimizes the operating cost of an electric power system. Applications of convex relaxation techniques to the non-convex OPF problem have been of recent interest, including work using the Lasserre hierarchy of "moment" relaxations to globally solve many OPF problems. By preprocessing the network model to eliminate low-impedance lines, this paper demonstrates the capability of the moment relaxations to globally solve large OPF problems that minimize active power losses for portions of several European power systems. Large problems with more general objective functions have thus far been computationally intractable for current formulations of the moment relaxations. To overcome this limitation, this paper proposes combining an objective function penalization with the moment relaxations. This combination yields feasible points with objective function values that are close to the global optimum of several large OPF problems. Compared to an existing penalization method, the combination of penalization and the moment relaxations eliminates the need to specify one of the penalty parameters and solves a broader class of problems. Comment: 8 pages, 1 figure, to appear in IEEE 54th Annual Conference on Decision and Control (CDC), 15-18 December 2015
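
    The structural idea can be sketched on a toy non-convex QCQP with CVXPY: solve the first-order moment (Shor) relaxation, add a penalty term weighted by a parameter, and check whether the penalized moment matrix is (near-)rank-one so a feasible point can be recovered from its leading eigenvector. The penalty used below (rewarding correlation with a rounded candidate) and the toy data are assumptions for illustration only; the paper works with higher-order Lasserre relaxations of OPF and penalizes reactive power generation.

    import cvxpy as cp
    import numpy as np

    # Toy non-convex QCQP over an odd cycle, where the first-order
    # relaxation is known to be loose:  min x'Wx  s.t.  x_i^2 = 1.
    W = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])

    def solve_relaxation(P, eps):
        X = cp.Variable((3, 3), symmetric=True)     # moment matrix, X ~ x x'
        cons = [X >> 0] + [X[i, i] == 1 for i in range(3)]
        cp.Problem(cp.Minimize(cp.trace(W @ X) + eps * cp.trace(P @ X)),
                   cons).solve()
        return X.value

    # Unpenalized relaxation: a higher-rank moment matrix whose objective
    # value strictly lower-bounds the non-convex optimum.
    X0 = solve_relaxation(np.zeros((3, 3)), 0.0)
    w0, V0 = np.linalg.eigh(X0)
    print("eigenvalues, no penalty:", np.round(w0, 4))

    # Penalty built by rounding the leading eigenvector to +/-1 (a
    # hypothetical choice standing in for the paper's reactive-power
    # penalty); rewarding correlation with it pushes X toward rank one.
    v = np.sign(V0[:, -1])
    X1 = solve_relaxation(-np.outer(v, v), 1.0)
    w1, V1 = np.linalg.eigh(X1)
    print("eigenvalues, penalized: ", np.round(w1, 4))
    x_rec = np.sqrt(w1[-1]) * V1[:, -1]             # feasible if rank one
    print("recovered point:", np.round(x_rec, 3), "cost:", x_rec @ W @ x_rec)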

    A second derivative SQP method: theoretical issues

    Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative SQP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent constraint is imposed on certain QP subproblems, which “guides” the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established.
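
    A minimal Python sketch of one way to keep every QP subproblem convex: shift the exact Hessian's spectrum whenever it is indefinite, then solve the resulting equality-constrained QP via its KKT system. This mirrors only the flavor of the paper; the explicit descent constraint and the convergence safeguards are not reproduced, and the objective Hessian stands in for the Hessian of the Lagrangian.

    import numpy as np

    def sqp(grad, hess, c, jac, x0, tol=1e-8, max_iter=50):
        x = np.asarray(x0, float)
        lam = np.zeros(np.atleast_1d(c(x)).size)
        for _ in range(max_iter):
            g, H, cv, J = grad(x), hess(x), c(x), jac(x)
            # Convexify: lift the smallest eigenvalue of H to at least 1e-6,
            # so the QP subproblem below has a unique global solution.
            w = np.linalg.eigvalsh(H)
            if w[0] < 1e-6:
                H = H + (1e-6 - w[0]) * np.eye(H.shape[0])
            # QP subproblem  min g'd + 0.5 d'Hd  s.t.  J d + c = 0,
            # solved exactly through the (nonsingular) KKT linear system.
            n, m = g.size, cv.size
            K = np.block([[H, J.T], [J, np.zeros((m, m))]])
            sol = np.linalg.solve(K, np.concatenate([-g, -cv]))
            d, lam = sol[:n], sol[n:]
            if np.linalg.norm(d) < tol and np.linalg.norm(cv) < tol:
                break
            x = x + d  # full step; a merit-function line search belongs here
        return x, lam

    # Example on a hypothetical problem: min x0^4 + x1^2  s.t.  ||x||^2 = 1.
    sol, mult = sqp(lambda x: np.array([4 * x[0]**3, 2 * x[1]]),
                    lambda x: np.diag([12 * x[0]**2, 2.0]),
                    lambda x: np.array([x[0]**2 + x[1]**2 - 1.0]),
                    lambda x: np.array([[2 * x[0], 2 * x[1]]]),
                    np.array([0.5, 0.5]))
    print(np.round(sol, 4), np.round(mult, 4))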