
    Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation

    We provide new tools for worst-case performance analysis of the gradient (or steepest descent) method of Cauchy for smooth strongly convex functions, and Newton's method for self-concordant functions, including the case of inexact search directions. The analysis uses semidefinite programming performance estimation, as pioneered by Drori and Teboulle [Mathematical Programming, 145(1-2):451-482, 2014], and extends recent performance estimation results for the method of Cauchy by the authors [Optimization Letters, 11(7), 1185-1199, 2017]. To illustrate the applicability of the tools, we demonstrate a novel complexity analysis of short-step interior point methods using inexact search directions. As an example in this framework, we sketch how to give a rigorous worst-case complexity analysis of a recent interior point method by Abernethy and Hazan [PMLR, 48:2520-2528, 2016]. Comment: 22 pages, 1 figure. Title of earlier version was "Worst-case convergence analysis of gradient and Newton methods through semidefinite programming performance estimation".
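    The central device in this line of work is the performance estimation problem (PEP): the exact worst case of a first-order method over a function class is itself computed by a semidefinite program over a Gram matrix of iterates and gradients plus the function values. The sketch below is a minimal illustration of that idea in the simplest setting treated by Drori and Teboulle, fixed-step gradient descent on L-smooth convex functions, and is not the paper's analysis of Cauchy's or Newton's method with inexact directions; the horizon N = 3, the constant L = 1, the step size 1/L, and all variable names are assumptions made for the example.

```python
# A minimal sketch, not the authors' code: the Drori-Teboulle performance
# estimation problem (PEP) for N steps of gradient descent with step 1/L on
# L-smooth convex functions, posed as a semidefinite program with CVXPY.
import numpy as np
import cvxpy as cp

N, L = 3, 1.0                        # illustrative assumptions
dim = N + 2                          # Gram basis: [x0 - x*, g0, g1, ..., gN]

def point(k):
    """Coefficient vectors of (x_k - x*, g_k) in the Gram basis; k=None is x*."""
    x, g = np.zeros(dim), np.zeros(dim)
    if k is None:                    # at the minimizer: x - x* = 0 and g = 0
        return x, g
    x[0] = 1.0
    x[1:1 + k] = -1.0 / L            # x_k = x_0 - (1/L) * (g_0 + ... + g_{k-1})
    g[1 + k] = 1.0
    return x, g

G = cp.Variable((dim, dim), PSD=True)   # Gram matrix of the basis vectors
F = cp.Variable(N + 1)                  # f(x_0), ..., f(x_N); set f(x*) = 0

def fval(k):
    return 0.0 if k is None else F[k]

labels = list(range(N + 1)) + [None]
constraints = [point(0)[0] @ G @ point(0)[0] <= 1]   # normalize ||x0 - x*||^2 <= 1
for i in labels:
    for j in labels:
        if i == j:
            continue
        xi, gi = point(i)
        xj, gj = point(j)
        # Interpolation condition for L-smooth convex functions:
        # f_i >= f_j + <g_j, x_i - x_j> + ||g_i - g_j||^2 / (2L)
        constraints.append(
            fval(i) >= fval(j) + gj @ G @ (xi - xj)
                     + (gi - gj) @ G @ (gi - gj) / (2 * L)
        )

# The optimal value of the SDP is the exact worst case of f(x_N) - f(x*).
prob = cp.Problem(cp.Maximize(F[N]), constraints)
prob.solve()
print(prob.value, "vs. the known tight bound L/(4N+2) =", L / (4 * N + 2))
```

    For this step size the solver should return a value close to L/(4N+2), matching the tight bound established in the Drori-Teboulle paper cited above.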

    An interior-point method for the single-facility location problem with mixed norms using a conic formulation

    We consider the single-facility location problem with mixed norms, i.e. the problem of minimizing the sum of the distances from a point to a set of fixed points in R^n, where each distance can be measured according to a different p-norm. We show how this problem can be expressed in a structured conic format by decomposing the nonlinear components of the objective into a series of constraints involving three-dimensional cones. Using the availability of a self-concordant barrier for these cones, we present a polynomial-time algorithm (a long-step path-following interior-point scheme) to solve the problem up to a given accuracy. Finally, we report computational results for this algorithm and compare them with standard nonlinear optimization solvers applied to this problem.
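    To make the problem concrete, the following small sketch states the mixed-norm location problem and hands it to a generic conic solver; it is illustrative only, not the paper's tailored three-dimensional-cone decomposition or its long-step path-following scheme, and the anchor points, norms, and dimension are assumptions chosen for the demo.

```python
# A toy instance of the single-facility location problem with mixed norms:
# minimize the sum of distances from a point x in R^2 to fixed points a_i,
# each distance measured in its own p_i-norm. CVXPY reformulates the p-norms
# into conic constraints and calls an interior-point/conic solver.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
anchors = rng.standard_normal((5, 2))   # fixed points a_1, ..., a_5 (toy data)
p_norms = [1, 1.5, 2, 3, np.inf]        # a different p-norm for each distance

x = cp.Variable(2)
objective = cp.sum([cp.norm(x - a, p) for a, p in zip(anchors, p_norms)])
prob = cp.Problem(cp.Minimize(objective))
prob.solve()

print("optimal facility location:", x.value)
print("minimal total distance   :", prob.value)
```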

    Convex Optimization: Algorithms and Complexity

    This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. Starting from the fundamental theory of black-box optimization, the material progresses towards recent advances in structural optimization and stochastic optimization. Our presentation of black-box optimization, strongly influenced by Nesterov's seminal book and Nemirovski's lecture notes, includes the analysis of cutting plane methods, as well as (accelerated) gradient descent schemes. We also pay special attention to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and discuss their relevance in machine learning. We provide a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior point methods. In stochastic optimization we discuss stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. We also briefly touch upon convex relaxation of combinatorial problems and the use of randomness to round solutions, as well as random-walk-based methods. Comment: A previous version of the manuscript was titled "Theory of Convex Optimization for Machine Learning".
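    As a concrete taste of the structural-optimization part of the survey, the sketch below implements FISTA for the lasso, i.e. minimizing the sum of the smooth term 0.5*||Ax - b||^2 and the simple non-smooth term lam*||x||_1; it is an illustrative NumPy sketch rather than code from the monograph, and the problem sizes, regularization weight, and iteration count are assumptions for the demo.

```python
# FISTA (accelerated proximal gradient) applied to a random lasso instance.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1, the 'simple non-smooth term'."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
x = np.zeros(A.shape[1])
y, t = x.copy(), 1.0

for _ in range(300):
    grad = A.T @ (A @ y - b)                          # gradient of the smooth term at y
    x_next = soft_threshold(y - grad / L, lam / L)    # proximal gradient step
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # Nesterov-style momentum
    x, t = x_next, t_next

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.sum(np.abs(x)))
```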

    Efficient algorithms for solving the p-Laplacian in polynomial time

    The p-Laplacian is a nonlinear partial differential equation, parametrized by p ∈ [1, ∞]. We provide new numerical algorithms, based on the barrier method, for solving the p-Laplacian numerically in O(√n log n) Newton iterations for all p ∈ [1, ∞], where n is the number of grid points. We confirm our estimates with numerical experiments. Comment: 28 pages, 3 figures.
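    For intuition, after discretization the p-Laplacian Dirichlet problem becomes minimizing the p-Dirichlet energy of the grid values subject to boundary conditions. The sketch below sets up a tiny 1-D instance and hands it to a generic conic/interior-point solver via CVXPY; it is not the paper's tailored barrier algorithm or its O(√n log n) iteration scheme, and the grid size, exponent p, and boundary data are assumptions for illustration.

```python
# A toy 1-D grid discretization of the p-Laplacian Dirichlet problem:
# minimize sum_i |x_{i+1} - x_i|^p with the endpoint values held fixed.
import numpy as np
import cvxpy as cp

n, p = 50, 3.0                             # grid points and exponent p in (1, inf)
x = cp.Variable(n)
diffs = x[1:] - x[:-1]                     # discrete gradient on the 1-D grid
energy = cp.sum(cp.power(cp.abs(diffs), p))
constraints = [x[0] == 0, x[n - 1] == 1]   # Dirichlet boundary conditions
cp.Problem(cp.Minimize(energy), constraints).solve()

print("first few grid values:", np.round(x.value[:5], 4))
```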