
    Approximate Convex Optimization by Online Game Playing

    Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an $\epsilon$-approximate solution is proportional to $\frac{1}{\epsilon^2}$. Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for fractional packing linear programs which runs in $\frac{1}{\epsilon}$ iterations. The latter algorithm requires solving a convex quadratic program at every iteration, an optimization subroutine which dominates its theoretical running time. We give an algorithm for convex programs with strictly convex constraints which runs in time proportional to $\frac{1}{\epsilon}$. The algorithm does NOT require solving any quadratic program; it uses only gradient steps and elementary operations. Problems with strictly convex constraints include maximum entropy frequency estimation, portfolio optimization with loss risk constraints, and various computational problems in signal processing. As a side product, we also obtain a simpler version of Bienstock and Iyengar's result for general linear programming, with similar running time. We derive these algorithms using a new framework for deriving convex optimization algorithms from online game playing algorithms, which may be of independent interest.
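    The abstract describes a framework in which a convex program is solved by repeated play between a "constraint" player and a "decision" player. The sketch below illustrates that generic game-playing template on a toy feasibility problem with strictly convex quadratic constraints; all names (project_simplex, game_feasibility, the step sizes) are hypothetical, and the multiplicative-weights/gradient pairing shown here is the standard reduction, not the paper's specific $\frac{1}{\epsilon}$-rate algorithm.

```python
# A minimal sketch, assuming we want a point x on the probability simplex with
# f_i(x) = ||x - c_i||^2 - r_i <= 0 for strictly convex quadratic constraints.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def game_feasibility(centers, radii, T=500, eta_x=0.1, eta_p=0.5):
    """Two-player game template: the dual player runs multiplicative weights
    over the constraints, the primal player takes a projected gradient step
    on the weighted constraint; the averaged primal iterate is returned."""
    m, n = centers.shape
    x = np.full(n, 1.0 / n)      # primal iterate on the simplex
    w = np.ones(m)               # dual (multiplicative) weights
    x_avg = np.zeros(n)
    for _ in range(T):
        p = w / w.sum()
        # constraint values f_i(x); positive entries are violated constraints
        f = np.sum((x - centers) ** 2, axis=1) - radii
        # dual player: shift weight toward violated constraints
        w *= np.exp(eta_p * f)
        # primal player: gradient step on sum_i p_i f_i(x)
        grad = 2.0 * (x - p @ centers)
        x = project_simplex(x - eta_x * grad)
        x_avg += x / T
    return x_avg
```

    Averaging the primal iterates is the usual output of such game-playing reductions; the paper's contribution is the faster $\frac{1}{\epsilon}$ rate obtained by exploiting the strict convexity of the constraints, which this generic sketch does not reproduce.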

    Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls

    We propose a rank-$k$ variant of the classical Frank-Wolfe algorithm to solve convex optimization over a trace-norm ball. Our algorithm replaces the top singular-vector computation ($1$-SVD) in Frank-Wolfe with a top-$k$ singular-vector computation ($k$-SVD), which can be done by repeatedly applying $1$-SVD $k$ times. Alternatively, our algorithm can be viewed as a rank-$k$ restricted version of projected gradient descent. We show that our algorithm has a linear convergence rate when the objective function is smooth and strongly convex, and the optimal solution has rank at most $k$. This improves the convergence rate and the total time complexity of the Frank-Wolfe method and its variants.
    Comment: In NIPS 2017
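    As a rough illustration of the "rank-$k$ restricted projected gradient" view mentioned in the abstract, the sketch below takes a gradient step, keeps only the top-$k$ singular triplets (the $k$-SVD), and rescales the singular values so the iterate stays inside the trace-norm ball. The names grad_f, the smoothness constant L, the radius theta, and the rank k are assumptions made for the example; this is not necessarily the paper's exact update rule.

```python
# A minimal sketch of one rank-k step over the trace-norm ball
# {X : ||X||_* <= theta}, assuming a smooth loss with gradient grad_f.
import numpy as np
from scipy.sparse.linalg import svds

def project_l1_ball(s, theta):
    """Project a nonnegative vector s onto {s >= 0, sum(s) <= theta}."""
    if s.sum() <= theta:
        return s
    u = np.sort(s)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(s) + 1) > (css - theta))[0][-1]
    tau = (css[rho] - theta) / (rho + 1.0)
    return np.maximum(s - tau, 0.0)

def rank_k_step(X, grad_f, L, theta, k):
    """Gradient step, top-k SVD (the k-SVD of the abstract), then rescale the
    singular values so the new iterate lies in the trace-norm ball."""
    Y = X - (1.0 / L) * grad_f(X)
    U, s, Vt = svds(Y, k=k)      # requires k < min(Y.shape)
    s = project_l1_ball(s, theta)
    return (U * s) @ Vt
```

    The key point the abstract makes is that such a step touches only $k$ singular triplets per iteration, so its per-iteration cost stays close to that of classical Frank-Wolfe while achieving a linear convergence rate when the optimum has rank at most $k$.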

    On the solution existence and stability of polynomial optimization problems

    This paper introduces and investigates a regularity condition, in the asymptotic sense, for optimization problems whose objective functions are polynomial. We prove two sufficient conditions for the existence of solutions to polynomial optimization problems. Further, when the constraint sets are semi-algebraic, we show results on the stability of the solution map of polynomial optimization problems. At the end of the paper, we discuss the genericity of the regularity condition.
    Comment: The old title was "A regularity condition in polynomial optimization"
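    For context on why sufficient conditions for the existence of solutions are needed at all, a standard textbook example (not taken from the paper) is a polynomial whose infimum over the plane is finite but never attained:

```latex
% Classic non-attainment example in polynomial optimization:
% f(x,y) = x^2 + (xy - 1)^2 is positive everywhere, yet along the
% curve y = 1/x with x -> 0 its value tends to 0, so the infimum 0
% is not attained at any point of R^2.
\[
  f(x,y) = x^{2} + (xy-1)^{2}, \qquad
  \inf_{(x,y)\in\mathbb{R}^{2}} f(x,y) = 0 .
\]
```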