
    Convex optimization over intersection of simple sets: improved convergence rate guarantees via an exact penalty approach

    We consider the problem of minimizing a convex function over the intersection of finitely many simple sets which are easy to project onto. This is an important problem arising in various domains such as machine learning. The main difficulty lies in finding the projection of a point onto the intersection of many sets. Existing approaches yield an infeasible point with an iteration complexity of $O(1/\varepsilon^2)$ for nonsmooth problems, with no guarantees on the infeasibility. By reformulating the problem through exact penalty functions, we derive first-order algorithms which not only guarantee that the distance to the intersection is small but also improve the complexity to $O(1/\varepsilon)$ and $O(1/\sqrt{\varepsilon})$ for smooth functions. For composite and smooth problems, this is achieved through a saddle-point reformulation in which the proximal operators required by the primal-dual algorithms can be computed in closed form. We illustrate the benefits of our approach on a graph transduction problem and on graph matching.
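    A minimal sketch (ours, not the paper's code; the function names, penalty weight rho and step size are illustrative assumptions) of the exact-penalty idea behind such methods: replace the constraint x ∈ ∩ᵢ Cᵢ by penalty terms ρ·dist(x, Cᵢ), whose subgradients require only projections onto the individual sets.

```python
import numpy as np

def penalty_subgradient(grad_f, projections, x0, rho=10.0, step=1e-3, iters=5000):
    """Minimize f(x) + rho * sum_i dist(x, C_i) by subgradient descent.

    `projections` holds functions P_i projecting onto the simple sets C_i.
    When x lies outside C_i, a subgradient of dist(x, C_i) at x is
    (x - P_i(x)) / ||x - P_i(x)||, so only individual projections are needed.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        g = grad_f(x)
        for P in projections:
            r = x - P(x)
            nrm = np.linalg.norm(r)
            if nrm > 1e-12:  # outside C_i: add subgradient of the distance
                g = g + rho * r / nrm
        x = x - step * g
    return x

# Toy example: find the point of a box intersected with a halfspace closest to c.
c = np.array([2.0, 2.0])
proj_box = lambda x: np.clip(x, 0.0, 1.0)
proj_halfspace = lambda x: x - max(x.sum() - 1.5, 0.0) / 2.0  # {x : x1 + x2 <= 1.5}
x = penalty_subgradient(lambda x: 2.0 * (x - c), [proj_box, proj_halfspace], np.zeros(2))
```

    For sufficiently large ρ (under suitable regularity) the penalized problem shares its minimizers with the constrained one, which is what makes the penalty exact; the abstract's improved rates come from applying primal-dual methods to a saddle-point reformulation of this penalized problem.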

    Mirror Prox Algorithm for Multi-Term Composite Minimization and Semi-Separable Problems

    In this paper, we develop a composite version of the Mirror Prox algorithm for solving convex-concave saddle point problems and monotone variational inequalities of special structure, allowing us to cover saddle point/variational analogues of what is usually called "composite minimization" (minimizing a sum of an easy-to-handle nonsmooth function and a general-type smooth convex function "as if" there were no nonsmooth component at all). We demonstrate that composite Mirror Prox inherits the favourable (and unimprovable already in the large-scale bilinear saddle point case) $O(1/\epsilon)$ efficiency estimate of its prototype. We demonstrate that the proposed approach can be naturally applied to Lasso-type problems with several penalizing terms (e.g. $\ell_1$ and nuclear norm regularization acting together) and to problems of the structure considered in the alternating directions methods, implying in both cases methods with $O(\epsilon^{-1})$ complexity bounds.
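    For intuition, here is a sketch of the plain Euclidean, non-composite Mirror Prox step, which coincides with the extragradient method; the paper's composite variant additionally splits off the nonsmooth term via proximal steps. The operator F, the step size and the bilinear example below are our illustrative assumptions.

```python
import numpy as np

def mirror_prox(F, z0, step, iters=2000):
    """Euclidean Mirror Prox (extragradient) for a monotone operator F.

    Each iteration takes a trial step, then re-steps from the original
    point using the operator evaluated at the trial point:
        w = z - step * F(z)   # extrapolation
        z = z - step * F(w)   # correction
    """
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(iters):
        w = z - step * F(z)
        z = z - step * F(w)
    return z

# Bilinear saddle point min_x max_y x^T A y; the associated monotone operator
# is F(x, y) = (A y, -A^T x), and the unique saddle point here is (0, 0).
A = np.array([[2.0, 1.0], [0.0, 1.0]])
def F(z):
    x, y = z[:2], z[2:]
    return np.concatenate([A @ y, -A.T @ x])

z = mirror_prox(F, np.ones(4), step=0.2)
```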

    Multilevel optimisation for computer vision

    The recent surge in machine learning and computer vision methods requiring increasingly large datasets has motivated the introduction of optimisation algorithms specifically tailored to solve very large problems within practical time constraints. This demand challenges the practicability of state-of-the-art methods and requires new approaches that can take advantage not only of the problem's mathematical structure, but also of its data structure. Fortunately, such structure is present in many computer vision applications, where the problems can be modelled with varying degrees of fidelity. This structure suggests using multiscale models and thus multilevel algorithms. The objective of this thesis is to develop, implement and test provably convergent multilevel optimisation algorithms for convex composite optimisation problems in general and their applications in computer vision in particular. Our first multilevel algorithm solves convex composite optimisation problems and is particularly efficient for the robust facial recognition task. The method uses concepts from proximal gradient, mirror descent and multilevel optimisation algorithms, so we call it the multilevel accelerated gradient mirror descent algorithm (MAGMA). We first show that MAGMA has the same theoretical convergence rate as state-of-the-art first-order methods with much lower per-iteration complexity, and then demonstrate its practical advantage on many facial recognition problems. The second part of the thesis introduces a new multilevel procedure most appropriate for robust PCA problems requiring iterative SVD computations. We propose to exploit the multiscale structure present in these problems by constructing lower-dimensional matrices and using their singular values in each iteration of the optimisation procedure. We implement this approach on three different optimisation algorithms: inexact ALM, Frank-Wolfe thresholding and non-convex alternating projections. Here too we show that the multilevel algorithms converge to an exact or approximate solution at the same rate as their standard counterparts, and we test all three methods on numerous synthetic and real-life problems, demonstrating that the multilevel algorithms are not only much faster, but also solve problems that often cannot be solved by their standard counterparts.
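    As a toy stand-in for the second part's idea (replacing the full SVD inside each iteration with the SVD of a lower-dimensional matrix), the sketch below uses a randomized projection rather than the thesis's multilevel coarse models; the function name, sketch size k and the thresholding example are our assumptions.

```python
import numpy as np

def sketched_svd_threshold(M, tau, k=20, seed=0):
    """Singular-value thresholding via the SVD of a smaller sketched matrix.

    Project M onto a k-dimensional random subspace, take the cheap SVD
    there, soft-threshold the singular values, and lift the result back.
    """
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((M.shape[1], k))
    Q, _ = np.linalg.qr(M @ Omega)            # orthonormal basis for the range estimate
    U, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)  # small k x n SVD
    s = np.maximum(s - tau, 0.0)              # soft-threshold the singular values
    return ((Q @ U) * s) @ Vt

# A nearly low-rank matrix: thresholding should keep the dominant structure.
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(100), rng.standard_normal(80)) * 30.0 \
    + 0.1 * rng.standard_normal((100, 80))
X = sketched_svd_threshold(M, tau=5.0)
```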

    Numerical splitting methods for nonsmooth convex optimization problems

    In this thesis, we develop and investigate numerical methods for solving nonsmooth convex optimization problems in real Hilbert spaces. We construct algorithms that handle the terms in the objective function and the constraints of the minimization problems separately, which makes these methods simple to compute. In the first part of the thesis, we extend Tseng's well-known AMA method to the Proximal AMA algorithm by introducing variable metrics in the subproblems of the primal-dual algorithm. For a special choice of metrics, the subproblems become proximal steps; thus, for objectives arising in many important applications, such as signal and image processing, machine learning or statistics, the iteration process consists of closed-form expressions that are easy to calculate. Later in the thesis, we deepen the investigation of this algorithm by studying an associated dynamical system; explicit time discretization of this system recovers Proximal AMA. We show the existence and uniqueness of strong global solutions of the dynamical system and prove that its trajectories converge to the primal-dual solution of the considered optimization problem. In the last part of the thesis, we minimize a sum of finitely many nonsmooth convex functions (each possibly composed with a linear operator) over a nonempty, closed and convex set by smoothing these functions. We consider a stochastic algorithm that takes gradient steps on the smoothed functions (which are proximal steps if we smooth via the Moreau envelope) and uses a mirror map to "mirror" the iterates onto the feasible set. In applications, we compare these methods to similar ones and discuss the advantages and practical usability of the new algorithms.
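    A small sketch of the smoothing step from the last part, assuming the $\ell_1$ norm as the nonsmooth function: the Moreau envelope $f_\mu(x) = \min_u f(u) + \|u - x\|^2/(2\mu)$ is smooth with gradient $(x - \mathrm{prox}_{\mu f}(x))/\mu$, so a gradient step on it needs only the prox of $f$. Function names and step sizes are illustrative.

```python
import numpy as np

def prox_l1(x, t):
    """Proximal map of t*||.||_1, i.e. componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def smoothed_gradient_step(x, mu, step):
    """One gradient step on the Moreau envelope of ||.||_1.

    The envelope is smooth with gradient (x - prox_l1(x, mu)) / mu, so the
    step uses only the prox of the original nonsmooth function.
    """
    grad = (x - prox_l1(x, mu)) / mu
    return x - step * grad

x = np.array([3.0, -0.2, 0.5])
# With step == mu the update collapses to prox_l1(x, mu) exactly, matching
# the abstract's remark that these gradient steps become proximal steps.
assert np.allclose(smoothed_gradient_step(x, mu=1.0, step=1.0), prox_l1(x, 1.0))
```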

    Optimization with Sparsity-Inducing Penalties

    Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.
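    As a concrete instance of the proximal methods the paper surveys, here is a sketch of ISTA (proximal gradient) for the lasso, where the prox of the $\ell_1$ penalty is the closed-form soft-thresholding operator; the problem sizes and regularization weight below are illustrative.

```python
import numpy as np

def ista(A, b, lam, step=None, iters=500):
    """Proximal gradient (ISTA) for the lasso: min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Each iteration is a gradient step on the smooth least-squares term
    followed by the closed-form prox of the l1 penalty (soft-thresholding).
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                    # gradient of the smooth part
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox step
    return x

# Recover a 5-sparse vector from 50 random measurements of dimension 200.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[:5] = 1.0
x_hat = ista(A, A @ x_true, lam=0.1)
```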