
    Optimization flow control with Newton-like algorithm

    We proposed earlier an optimization approach to reactive flow control in which the objective is to maximize the aggregate utility of all sources over their transmission rates. The control mechanism is derived as a gradient projection algorithm for solving the dual problem. In this paper we extend the algorithm to a scaled gradient projection. The diagonal scaling matrix approximates the diagonal terms of the Hessian and can be computed at individual links using the same information required by the unscaled algorithm. We prove the convergence of the scaled algorithm and present simulation results illustrating its superiority over the unscaled algorithm.
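    The scaled dual update described above can be sketched in a few lines. The toy example below is illustrative rather than the paper's algorithm verbatim: it assumes logarithmic utilities U_s(x) = w_s log x, a fixed two-link/three-source routing matrix, and a diagonal Hessian estimate h_l = sum over s in S(l) of x_s^2 / w_s computed from locally observed rates.

```python
import numpy as np

# Illustrative scaled gradient projection on the dual (link-price) problem.
# Assumptions (not from the abstract): log utilities, static routing R,
# and a diagonal Hessian approximation h computable from link-local rates.
R = np.array([[1., 1., 0.],          # R[l, s] = 1 if source s uses link l
              [0., 1., 1.]])
c = np.array([1.0, 2.0])             # link capacities
w = np.array([1.0, 2.0, 1.0])        # utility weights, U_s(x) = w_s * log(x)
p = np.ones(2)                       # link prices (dual variables)
gamma = 0.5                          # damping of the Newton-like step

for _ in range(500):
    q = R.T @ p                      # path price seen by each source
    x = w / q                        # rate maximizing w_s*log(x) - q_s*x
    y = R @ x                        # aggregate rate through each link
    h = R @ (x**2 / w)               # diagonal Hessian approximation
    p = np.maximum(p + gamma * (y - c) / h, 1e-8)  # scaled projected step

print("source rates:", x)
print("link prices:", p)
```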

    Efficient Numerical Methods to Solve Sparse Linear Equations with Application to PageRank

    In this paper, we propose three methods to solve the PageRank problem for transition matrices with both row and column sparsity. Our methods reduce the PageRank problem to a convex optimization problem over the simplex. The first algorithm is based on gradient descent in the L1 norm instead of the Euclidean one. The second algorithm extends the Frank-Wolfe method to support sparse gradient updates. The third algorithm is a mirror descent algorithm with a randomized projection. We prove convergence rates for these methods on sparse problems and present numerical experiments supporting their effectiveness. Comment: 26 pages
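    As a concrete illustration of the second idea, the sketch below runs Frank-Wolfe on the reformulation min over the simplex of 0.5*||Ax - x||^2, with A the Google matrix: the linear minimization oracle over the simplex returns a single vertex, so every iterate changes in only one coordinate. The matrix size, damping factor, and step rule are illustrative choices, and the matrix is kept dense for brevity.

```python
import numpy as np

# Frank-Wolfe sketch for PageRank posed as min over the simplex of
# f(x) = 0.5 * ||A x - x||^2, with A the Google matrix (dense here for
# brevity).  The simplex LMO returns a single vertex e_i, so each update
# touches one coordinate -- the sparse-update property noted above.
n = 5
rng = np.random.default_rng(0)
P = rng.random((n, n))
P /= P.sum(axis=0, keepdims=True)      # column-stochastic link matrix
alpha = 0.85                           # damping factor (illustrative)
A = alpha * P + (1 - alpha) / n        # Google matrix

x = np.full(n, 1.0 / n)                # start at the simplex centre
for k in range(500):
    r = A @ x - x                      # residual
    grad = A.T @ r - r                 # gradient of f
    i = int(np.argmin(grad))           # LMO over the simplex: vertex e_i
    gamma = 2.0 / (k + 2)              # standard Frank-Wolfe step size
    x = (1 - gamma) * x
    x[i] += gamma                      # sparse single-coordinate update

print("approximate PageRank vector:", x)
```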

    A two-phase gradient method for quadratic programming problems with a single linear constraint and bounds on the variables

    We propose a gradient-based method for quadratic programming problems with a single linear constraint and bounds on the variables. Inspired by the GPCG algorithm for bound-constrained convex quadratic programming [J.J. Moré and G. Toraldo, SIAM J. Optim. 1, 1991], our approach alternates between two phases until convergence: an identification phase, which performs gradient projection iterations until either a candidate active set is identified or no reasonable progress is made, and an unconstrained minimization phase, which reduces the objective function in a suitable space defined by the identification phase by applying either the conjugate gradient method or a recently proposed spectral gradient method. However, the algorithm differs from GPCG not only because it deals with a more general class of problems, but mainly in the way it stops the minimization phase. This stopping rule is based on a comparison between a measure of optimality in the reduced space and a measure of bindingness of the variables that are at the bounds, defined by extending the concept of proportioning, which was proposed by some authors for box-constrained problems. If the objective function is bounded, the algorithm converges to a stationary point thanks to a suitable application of the gradient projection method in the identification phase. For strictly convex problems, the algorithm converges to the optimal solution in a finite number of steps even in the case of degeneracy. Extensive numerical experiments show the effectiveness of the proposed approach. Comment: 30 pages, 17 figures
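    A minimal sketch of the problem setting follows: projected gradient iterations (the flavour of the identification phase) for min 0.5 x'Hx + g'x subject to a'x = b and lo <= x <= hi, with the projection onto the feasible set computed by bisection on the multiplier of the linear constraint. The unconstrained minimization phase (CG or spectral gradient in the reduced space) and the proportioning-based stopping rule are omitted; problem data and sizes are made up.

```python
import numpy as np

def project(z, a, b, lo, hi, tol=1e-10):
    """Project z onto {x : lo <= x <= hi, a @ x = b} by bisection on the
    multiplier of the linear constraint (an illustrative simplification of
    Dai-Fletcher-style projection routines)."""
    def phi(lam):
        return a @ np.clip(z + lam * a, lo, hi) - b
    lam_lo, lam_hi = -1.0, 1.0
    while phi(lam_lo) > 0:
        lam_lo *= 2
    while phi(lam_hi) < 0:
        lam_hi *= 2
    while lam_hi - lam_lo > tol:
        lam = 0.5 * (lam_lo + lam_hi)
        if phi(lam) < 0:
            lam_lo = lam
        else:
            lam_hi = lam
    return np.clip(z + 0.5 * (lam_lo + lam_hi) * a, lo, hi)

# Projected-gradient loop (identification-phase flavour) for
# min 0.5*x'Hx + g'x  s.t.  a'x = b, lo <= x <= hi.  Data are illustrative.
rng = np.random.default_rng(1)
n = 6
M = rng.random((n, n))
H = M @ M.T + np.eye(n)                 # symmetric positive definite Hessian
g = rng.random(n)
a, b = np.ones(n), 1.0
lo, hi = np.zeros(n), np.ones(n)

x = project(np.full(n, 1.0 / n), a, b, lo, hi)
step = 1.0 / np.linalg.norm(H, 2)       # 1/L step length
for _ in range(300):
    x = project(x - step * (H @ x + g), a, b, lo, hi)
print("approximate solution:", x)
```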

    Distributed Random Projection Algorithm for Convex Optimization

    The random projection algorithm is an iterative gradient method that uses random projections. Such an algorithm is of interest for constrained optimization when the constraint set is not known in advance or when projection onto the whole constraint set is computationally prohibitive. This paper presents a distributed random projection (DRP) algorithm for fully distributed constrained convex optimization by multiple agents connected over a time-varying network, where each agent has its own objective function and its own constraint set. Under reasonable assumptions, we prove that the iterates of all agents converge to the same point in the optimal set almost surely. In addition, we consider a variant of the method that uses a mini-batch of consecutive random projections and establish its convergence in the almost sure sense. Experiments on distributed support vector machines demonstrate fast convergence of the algorithm; in fact, the number of iterations required for convergence is much smaller than the number needed to scan over all training samples just once.
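    The flavour of such an update can be sketched as follows, under illustrative assumptions that are not taken from the paper: a fixed (rather than time-varying) network of three agents with doubly stochastic mixing weights, quadratic local objectives, and constraint sets given as intersections of halfspaces, with each agent projecting onto one randomly chosen halfspace per iteration instead of the full intersection.

```python
import numpy as np

# Sketch of a distributed random-projection-style update.  Illustrative
# assumptions (not from the paper): 3 agents with fixed doubly stochastic
# mixing weights, quadratic local objectives f_i(x) = 0.5*||x - c_i||^2,
# and each agent's constraint set given as an intersection of halfspaces
# {x : a @ x <= b}; at each step an agent projects onto ONE randomly
# chosen halfspace instead of the full intersection.
rng = np.random.default_rng(0)
n_agents, dim = 3, 2
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])            # doubly stochastic mixing
targets = rng.normal(size=(n_agents, dim))    # c_i of each local objective
A = rng.normal(size=(n_agents, 4, dim))       # 4 halfspaces per agent
b = np.ones((n_agents, 4))

def project_halfspace(x, a, beta):
    """Closed-form projection of x onto {y : a @ y <= beta}."""
    viol = a @ x - beta
    return x - (viol / (a @ a)) * a if viol > 0 else x

x = np.zeros((n_agents, dim))
for k in range(1, 2000):
    v = W @ x                                 # consensus / mixing step
    for i in range(n_agents):
        grad = v[i] - targets[i]              # gradient of local objective
        y = v[i] - (1.0 / k) * grad           # diminishing step size
        j = rng.integers(4)                   # one random constraint index
        x[i] = project_halfspace(y, A[i, j], b[i, j])

print("agent iterates (approximately equal at convergence):")
print(x)
```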

    A model-free no-arbitrage price bound for variance options

    In the framework of Galichon, Henry-Labordère and Touzi, we consider the model-free no-arbitrage bound of a variance option given the marginal distributions of the underlying asset. We first make some approximations that restrict the computation to a bounded domain. We then propose a gradient projection algorithm, together with a finite difference scheme, to approximate the bound, and we obtain a general convergence result. We also provide a numerical example on a variance swap option. Keywords: variance option; model-free price bound; gradient projection algorithm.
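    The gradient projection and finite difference details are beyond an abstract, but the underlying "price bound given the marginals" idea can be illustrated with a heavily simplified discrete linear program: maximise the expected payoff over joint laws with prescribed marginals at two maturities. The sketch below deliberately omits the martingale constraint and the path-dependence of a true variance payoff from the paper's framework; the price grid, marginals, and strike are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Toy "bound given the marginals": maximise E_Q[payoff(S1, S2)] over
# discrete joint laws Q with prescribed marginals mu (at T1) and nu (at T2).
# Only the marginal constraints are kept; the martingale constraint and the
# path-dependence of a genuine variance payoff are intentionally omitted.
s = np.array([0.8, 0.9, 1.0, 1.1, 1.2])          # discretised price levels
mu = np.array([0.10, 0.20, 0.40, 0.20, 0.10])    # marginal at T1
nu = np.array([0.15, 0.20, 0.30, 0.20, 0.15])    # marginal at T2
payoff = np.maximum(np.log(s[None, :] / s[:, None]) ** 2 - 0.01, 0.0)

n = len(s)
# Equality constraints: row sums equal mu, column sums equal nu.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0             # sum_j q_ij = mu_i
    A_eq[n + i, i::n] = 1.0                      # sum_i q_ij = nu_j
b_eq = np.concatenate([mu, nu])

res = linprog(-payoff.ravel(), A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")  # maximise via negation
print("toy upper bound:", -res.fun)
```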