711 research outputs found

    An Inexact Frank-Wolfe Algorithm for Composite Convex Optimization Involving a Self-Concordant Function

    Full text link
    In this paper, we consider Frank-Wolfe-based algorithms for composite convex optimization problems whose objective involves a logarithmically-homogeneous, self-concordant function. Recent Frank-Wolfe-based methods for this class of problems assume an oracle that returns exact solutions of a linearized subproblem. We relax this assumption and propose a variant of the Frank-Wolfe method with an inexact oracle for this class of problems. We show that our inexact variant enjoys similar convergence guarantees to the exact case, while allowing considerably more flexibility in approximately solving the linearized subproblem. In particular, our approach can be applied if the subproblem can be solved to prespecified additive error or to prespecified relative error (even though the optimal value of the subproblem may not be uniformly bounded). Furthermore, our approach can also handle the situation where the subproblem is solved via a randomized algorithm that fails with positive probability. Our inexact oracle model is motivated by certain large-scale semidefinite programs where the subproblem reduces to computing an extreme eigenvalue-eigenvector pair, and we demonstrate the practical performance of our algorithm with numerical experiments on problems of this form.
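    As a rough illustration of the inexact-oracle idea, the sketch below runs Frank-Wolfe on a toy quadratic over the probability simplex, where the linear minimization oracle is only required to return a point within an additive error of the linearized minimum. This is a minimal sketch, not the paper's algorithm: the objective, error schedule, and helper names are our own illustrative choices.

```python
import numpy as np

def inexact_frank_wolfe(grad, lmo, x0, steps=200):
    """Frank-Wolfe with an inexact linear minimization oracle (LMO).

    `lmo(g, delta)` may return any feasible point whose linearized
    value is within additive error `delta` of the true minimum.
    """
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        delta = 1.0 / (t + 2)          # illustrative error schedule
        v = lmo(g, delta)              # approximate argmin of <g, v>
        gamma = 2.0 / (t + 2)          # standard open-loop step size
        x = (1 - gamma) * x + gamma * v
    return x

# Toy instance: min ||x - b||^2 over the probability simplex.
b = np.array([0.7, 0.2, 0.1])

def simplex_lmo(g, delta):
    # The exact LMO over the simplex is the vertex e_i with i = argmin g_i;
    # any vertex within additive error `delta` would also be admissible.
    v = np.zeros_like(g)
    v[int(np.argmin(g))] = 1.0
    return v

x = inexact_frank_wolfe(lambda x: 2 * (x - b), simplex_lmo, np.ones(3) / 3)
```

Since the iterate is always a convex combination of simplex vertices, feasibility is maintained for free, which is the main practical appeal of Frank-Wolfe-type methods.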

    Analysis of the Frank-Wolfe Method for Convex Composite Optimization involving a Logarithmically-Homogeneous Barrier

    Full text link
    We present and analyze a new generalized Frank-Wolfe method for the composite optimization problem $(P): \min_{x\in\mathbb{R}^n}\; f(\mathsf{A} x) + h(x)$, where $f$ is a $\theta$-logarithmically-homogeneous self-concordant barrier, $\mathsf{A}$ is a linear operator and the function $h$ has bounded domain but is possibly non-smooth. We show that our generalized Frank-Wolfe method requires $O((\delta_0 + \theta + R_h)\ln(\delta_0) + (\theta + R_h)^2/\varepsilon)$ iterations to produce an $\varepsilon$-approximate solution, where $\delta_0$ denotes the initial optimality gap and $R_h$ is the variation of $h$ on its domain. This result establishes certain intrinsic connections between $\theta$-logarithmically homogeneous barriers and the Frank-Wolfe method. When specialized to the $D$-optimal design problem, we essentially recover the complexity obtained by Khachiyan using the Frank-Wolfe method with exact line-search. We also study the (Fenchel) dual problem of $(P)$, and we show that our new method is equivalent to an adaptive-step-size mirror descent method applied to the dual problem. This enables us to provide iteration complexity bounds for the mirror descent method even though the dual objective function is non-Lipschitz and has unbounded domain. In addition, we present computational experiments that point to the potential usefulness of our generalized Frank-Wolfe method on Poisson image de-blurring problems with TV regularization, and on simulated PET problem instances. Comment: See Version 1 (v1) for the analysis of the Frank-Wolfe method with adaptive step-size applied to the Hölder smooth function
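    For the D-optimal design specialization mentioned above, the Frank-Wolfe method with Khachiyan's exact line search has a particularly compact form. The sketch below is a textbook Fedorov/Wynn-type update, not the paper's generalized method for composite $h$; the helper name and the assumption that the design matrix has full column rank are ours.

```python
import numpy as np

def d_optimal_fw(A, steps=500):
    # Frank-Wolfe for D-optimal design: max log det(sum_i w_i a_i a_i^T)
    # over the simplex, with Khachiyan's closed-form exact line search.
    m, n = A.shape
    w = np.ones(m) / m
    for _ in range(steps):
        M = A.T @ (w[:, None] * A)                   # information matrix
        lev = np.einsum('ij,jk,ik->i', A, np.linalg.inv(M), A)
        i = int(np.argmax(lev))                      # FW vertex: max leverage a_i^T M^{-1} a_i
        kappa = lev[i]                               # always >= n (weighted mean of lev is n)
        gamma = (kappa - n) / (n * (kappa - 1))      # exact line-search step
        w *= (1 - gamma)
        w[i] += gamma
    return w

A = np.random.default_rng(0).normal(size=(50, 3))
w = d_optimal_fw(A)
```

At optimality the maximal leverage equals $n$, so `lev.max()` approaching `n` is the natural stopping criterion; the fixed iteration count above keeps the sketch short.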

    Computation of Minimum Volume Covering Ellipsoids

    Get PDF
    We present a practical algorithm for computing the minimum volume n-dimensional ellipsoid that must contain m given points $a_1, \ldots, a_m \subset \mathbb{R}^n$. This convex constrained problem arises in a variety of applied computational settings, particularly in data mining and robust statistics. Its structure makes it particularly amenable to solution by interior-point methods, and it has been the subject of much theoretical complexity analysis. Here we focus on computation. We present a combined interior-point and active-set method for solving this problem. Our computational results demonstrate that our method solves very large problem instances (m = 30,000 and n = 30) to a high degree of accuracy in under 30 seconds on a personal computer.
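    As a point of reference for the problem itself, a slow but simple first-order baseline is Khachiyan's coordinate-ascent scheme on lifted points. This is a minimal sketch, assuming the points affinely span $\mathbb{R}^n$; it is not the interior-point/active-set method described above.

```python
import numpy as np

def mvee(P, tol=1e-7, max_iter=5000):
    # Khachiyan-type coordinate ascent for the minimum-volume ellipsoid
    # {x : (x - c)^T E (x - c) <= 1} covering the rows of P.
    m, n = P.shape
    Q = np.hstack([P, np.ones((m, 1))])       # lift to homogeneous coords
    u = np.ones(m) / m
    for _ in range(max_iter):
        X = Q.T @ (u[:, None] * Q)
        lev = np.einsum('ij,jk,ik->i', Q, np.linalg.inv(X), Q)
        j = int(np.argmax(lev))               # most violated lifted point
        step = (lev[j] - n - 1) / ((n + 1) * (lev[j] - 1))
        if step < tol:
            break
        u *= (1 - step)
        u[j] += step
    c = P.T @ u                               # ellipsoid center
    S = P.T @ (u[:, None] * P) - np.outer(c, c)
    E = np.linalg.inv(S) / n                  # ellipsoid shape matrix
    return c, E

# Corners of the square [-1, 1]^2; the MVEE is the circle of radius sqrt(2).
P = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
c, E = mvee(P)
```

The per-iteration cost is dominated by the $(n+1)\times(n+1)$ inverse and the leverage computation over all m points, which is exactly the regime where the paper's combined interior-point and active-set strategy pays off.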

    An Away-Step Frank-Wolfe Method for Minimizing Logarithmically-Homogeneous Barriers

    Full text link
    We present and analyze a new away-step Frank-Wolfe method for the convex optimization problem $\min_{x\in\mathcal{X}}\; f(\mathsf{A} x) + \langle c, x\rangle$, where $f$ is a $\theta$-logarithmically-homogeneous self-concordant barrier, $\mathsf{A}$ is a linear operator, $\langle c, \cdot\rangle$ is a linear function and $\mathcal{X}$ is a nonempty polytope. We establish affine-invariant global linear convergence rates for both the objective gaps and the Frank-Wolfe gaps generated by our method. When specialized to the D-optimal design problem, our results settle a question left open since Ahipasaoglu, Sun and Todd (2008). We also show that the iterates generated by our method will land on a face of $\mathcal{X}$ in a finite number of iterations, and hence our method may have improved local linear convergence rates.
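    A generic away-step Frank-Wolfe loop tracks barycentric weights over the vertices so that away directions are available. The sketch below uses exact line search on a toy quadratic over a simplex, not the barrier objective analyzed in the paper; all names and constants are illustrative.

```python
import numpy as np

def away_step_fw(V, b, steps=300):
    # Away-step Frank-Wolfe for min ||x - b||^2 over conv(rows of V),
    # tracking barycentric weights alpha to identify away vertices.
    m = len(V)
    alpha = np.zeros(m)
    alpha[0] = 1.0                               # start at a vertex
    for _ in range(steps):
        x = alpha @ V
        g = 2 * (x - b)                          # gradient of the quadratic
        scores = V @ g
        s = int(np.argmin(scores))               # Frank-Wolfe vertex
        act = np.flatnonzero(alpha > 1e-12)
        a = act[int(np.argmax(scores[act]))]     # away vertex (worst active)
        if g @ (V[s] - x) <= g @ (x - V[a]):     # steeper descent direction
            d, gamma_max, fw_step = V[s] - x, 1.0, True
        else:
            d = x - V[a]
            gamma_max = alpha[a] / (1.0 - alpha[a])
            fw_step = False
        if d @ d < 1e-18:
            break
        # exact line search for the quadratic, clipped to stay feasible
        gamma = min(max(-(g @ d) / (2 * (d @ d)), 0.0), gamma_max)
        if fw_step:
            alpha *= (1 - gamma)
            alpha[s] += gamma
        else:
            alpha *= (1 + gamma)
            alpha[a] -= gamma
    return alpha @ V

V = np.eye(3)                                    # unit simplex vertices
b = np.array([0.6, 0.3, 0.1])                    # interior target point
x = away_step_fw(V, b)
```

Away steps (the `else` branch) can drive a weight `alpha[a]` exactly to zero, which is the mechanism by which iterates land on a face of the polytope in finitely many steps.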

    Convergence of the Exponentiated Gradient Method with Armijo Line Search

    Get PDF
    Consider the problem of minimizing a convex differentiable function on the probability simplex, spectrahedron, or set of quantum density matrices. We prove that the exponentiated gradient method with Armijo line search always converges to the optimum, if the sequence of iterates possesses a strictly positive limit point (element-wise for the vector case, and with respect to the Löwner partial ordering for the matrix case). To the best of our knowledge, this is the first convergence result for a mirror descent-type method that only requires differentiability. The proof exploits self-concordant likeness of the log-partition function, which is of independent interest. Comment: 18 pages
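    A minimal sketch of the exponentiated gradient method with Armijo backtracking on the probability simplex follows; the toy objective, constants, and helper names are illustrative, not from the paper.

```python
import numpy as np

def eg_armijo(f, grad, x0, iters=300, eta0=1.0, beta=0.5, c=1e-4):
    # Exponentiated gradient (entropic mirror descent) with Armijo
    # backtracking: shrink the step size eta until sufficient decrease.
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        eta = eta0
        while True:
            y = x * np.exp(-eta * g)
            y /= y.sum()                       # multiplicative update
            # Armijo-type sufficient decrease along y - x
            if f(y) <= f(x) + c * (g @ (y - x)) or eta < 1e-12:
                break
            eta *= beta                        # backtrack
        x = y
    return x

b = np.array([0.5, 0.3, 0.2])                  # strictly positive optimum
f = lambda x: float(np.sum((x - b) ** 2))
grad = lambda x: 2 * (x - b)
x = eg_armijo(f, grad, np.ones(3) / 3)
```

Note that the update is multiplicative, so iterates started in the interior stay strictly positive; the paper's assumption of a strictly positive limit point is what rules out convergence stalling at the boundary.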