An Inexact Frank-Wolfe Algorithm for Composite Convex Optimization Involving a Self-Concordant Function
In this paper, we consider Frank-Wolfe-based algorithms for composite convex
optimization problems whose objective involves a logarithmically-homogeneous
self-concordant function. Recent Frank-Wolfe-based methods for this class of
problems assume an oracle that returns exact solutions of a linearized
subproblem. We relax this assumption and propose a variant of the Frank-Wolfe
method with inexact oracle for this class of problems. We show that our inexact
variant enjoys similar convergence guarantees to the exact case, while allowing
considerably more flexibility in approximately solving the linearized
subproblem. In particular, our approach can be applied if the subproblem can be
solved to a prespecified additive error or to a prespecified relative error (even
though the optimal value of the subproblem may not be uniformly bounded).
Furthermore, our approach can also handle the situation where the subproblem is
solved via a randomized algorithm that fails with positive probability. Our
inexact oracle model is motivated by certain large-scale semidefinite programs
where the subproblem reduces to computing an extreme eigenvalue-eigenvector
pair, and we demonstrate the practical performance of our algorithm with
numerical experiments on problems of this form.
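As a concrete illustration of the oracle model (not the authors' implementation), here is a minimal sketch of Frank-Wolfe over the probability simplex with a delta-additively-inexact linear minimization oracle; the function names, the open-loop step size, and the toy randomized oracle are all illustrative assumptions.

```python
import numpy as np

def inexact_lmo(grad, delta, rng):
    """Toy delta-additive oracle over the simplex: any vertex whose
    linearized value is within `delta` of the exact minimum may be
    returned (chosen at random here to mimic inexactness)."""
    candidates = np.flatnonzero(grad <= grad.min() + delta)
    v = np.zeros_like(grad)
    v[rng.choice(candidates)] = 1.0
    return v

def inexact_frank_wolfe(grad_f, x0, n_iters=200, delta=1e-3, seed=0):
    """Frank-Wolfe with an inexact linear oracle and the generic
    open-loop step size 2/(k+2)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for k in range(n_iters):
        v = inexact_lmo(grad_f(x), delta, rng)
        x += 2.0 / (k + 2.0) * (v - x)
    return x

# Usage: minimize a convex quadratic over the probability simplex.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
x_star = inexact_frank_wolfe(lambda x: A @ x, np.array([0.5, 0.5]))
```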
Analysis of the Frank-Wolfe Method for Convex Composite Optimization involving a Logarithmically-Homogeneous Barrier
We present and analyze a new generalized Frank-Wolfe method for the composite
optimization problem $(P)\colon \min_x \, f(\mathsf{A}x) + h(x)$,
where $f$ is a $\theta$-logarithmically-homogeneous self-concordant barrier,
$\mathsf{A}$ is a linear operator and the function $h$ has bounded domain but
is possibly non-smooth. We show that our generalized Frank-Wolfe method
requires $O((\delta_0 + \theta + R_h)\ln(\delta_0) + (\theta + R_h)^2/\varepsilon)$
iterations to produce an $\varepsilon$-approximate
solution, where $\delta_0$ denotes the initial optimality gap and $R_h$ is the
variation of $h$ on its domain. This result establishes certain intrinsic
connections between $\theta$-logarithmically homogeneous barriers and the
Frank-Wolfe method. When specialized to the $D$-optimal design problem, we
essentially recover the complexity obtained by Khachiyan using the Frank-Wolfe
method with exact line-search. We also study the (Fenchel) dual problem of
$(P)$, and we show that our new method is equivalent to an adaptive-step-size
mirror descent method applied to the dual problem. This enables us to provide
iteration complexity bounds for the mirror descent method even though
the dual objective function is non-Lipschitz and has unbounded domain. In
addition, we present computational experiments that point to the potential
usefulness of our generalized Frank-Wolfe method on Poisson image de-blurring
problems with TV regularization, and on simulated PET problem instances.
Comment: See Version 1 (v1) for the analysis of the Frank-Wolfe method with adaptive step-size applied to the Hölder smooth function.
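For intuition, an adaptive step size of this kind can be derived by minimizing the standard self-concordance upper bound $f(x + \gamma d) \le f(x) - \gamma G + \omega_*(\gamma D)$ with $\omega_*(t) = -t - \ln(1-t)$, which gives $\gamma = \min\{1, G/(D(G + D))\}$ in closed form. The sketch below shows one such iteration, taking $h \equiv 0$ and a vertex-represented domain for simplicity; `local_norm` and the other names are illustrative assumptions, and the paper's method additionally handles the nonsmooth term $h$.

```python
import numpy as np

def generalized_fw_step(x, grad, vertices, local_norm):
    """One Frank-Wolfe iteration with the self-concordance-based step
    size gamma = min(1, G / (D * (G + D))), obtained by minimizing
    -gamma*G + omega_*(gamma*D), omega_*(t) = -t - ln(1 - t).
    `local_norm(d)` should return the Hessian local norm ||d||_{H(x)}."""
    k = np.argmin(vertices @ grad)   # linear minimization over vertices
    d = vertices[k] - x
    G = -grad @ d                    # Frank-Wolfe gap (nonnegative)
    D = local_norm(d)                # local norm of the direction
    if G <= 0 or D == 0:
        return x                     # already optimal in this direction
    gamma = min(1.0, G / (D * (G + D)))
    return x + gamma * d
```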
Computation of Minimum Volume Covering Ellipsoids
We present a practical algorithm for computing the minimum volume n-dimensional ellipsoid that must contain m given points $a_1, \ldots, a_m \in \mathbb{R}^n$. This convex constrained problem arises in a variety of applied computational settings, particularly in data mining and robust statistics. Its structure makes it particularly amenable to solution by interior-point methods, and it has been the subject of much theoretical complexity analysis. Here we focus on computation. We present a combined interior-point and active-set method for solving this problem. Our computational results demonstrate that our method solves very large problem instances (m = 30,000 and n = 30) to a high degree of accuracy in under 30 seconds on a personal computer.
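For context, the classical first-order baseline for this problem is Khachiyan's Frank-Wolfe-style coordinate-ascent scheme on the lifted D-optimal design formulation; a minimal sketch follows (this is the textbook baseline, not the paper's interior-point/active-set method).

```python
import numpy as np

def mvee_khachiyan(P, tol=1e-7, max_iter=100_000):
    """Khachiyan-style scheme for the minimum volume enclosing
    ellipsoid of the rows of P (m x n).  Returns (A, c) describing
    the ellipsoid {x : (x - c)^T A (x - c) <= 1}."""
    m, n = P.shape
    Q = np.hstack([P, np.ones((m, 1))])      # lift to dimension n + 1
    u = np.full(m, 1.0 / m)                  # weights on the points
    for _ in range(max_iter):
        X = Q.T @ (u[:, None] * Q)           # weighted moment matrix
        g = np.einsum('ij,jk,ik->i', Q, np.linalg.inv(X), Q)
        j = np.argmax(g)                     # most "outlying" point
        if g[j] / (n + 1) - 1.0 <= tol:      # optimality measure
            break
        step = (g[j] - n - 1) / ((n + 1) * (g[j] - 1))
        u *= (1 - step)                      # shift mass toward point j
        u[j] += step
    c = P.T @ u                              # ellipsoid center
    S = P.T @ (u[:, None] * P) - np.outer(c, c)
    A = np.linalg.inv(S) / n                 # shape matrix
    return A, c
```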
A Framework for Analyzing Stochastic Optimization Algorithms Under Dependence
In this dissertation, a theoretical framework based on concentration inequalities for empirical processes is developed to better design iterative optimization algorithms and analyze their convergence properties in the presence of complex dependence between directions and step-sizes. Based on this framework, we proposed a stochastic away-step Frank-Wolfe algorithm and a stochastic pairwise-step Frank-Wolfe algorithm for solving strongly convex problems with polytope constraints and proved that both of those algorithms converge linearly to the optimal solution in expectation and almost surely. Numerical results showed that the proposed algorithms are faster and more stable than most of their competitors.
This framework can also be applied to design and analyze stochastic algorithms with adaptive step-sizes based on local curvature for self-concordant optimization problems. Notably, we proposed and analyzed a stochastic BFGS algorithm without line search, and proved that it converges linearly globally and super-linearly locally using the framework mentioned above. This is the first work to analyze a fully stochastic BFGS algorithm, which also avoids time-consuming, or in some settings impossible, line-search steps.
A third class of problems to which the empirical-process framework applies is the optimization of compositions of stochastic functions. A multilevel Monte Carlo based unbiased gradient generation method is introduced into stochastic optimization algorithms for minimizing function compositions; with such unbiased gradients, standard stochastic optimization algorithms can be applied to these problems directly.
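One standard construction of such an unbiased estimator is the randomized (antithetic) multilevel Monte Carlo estimator of a nonlinear functional $h(\mathbb{E}[Y])$, which is a building block for compositional gradients; the sketch below shows the generic construction under regularity assumptions, with illustrative names, and is not necessarily the dissertation's exact estimator.

```python
import numpy as np

def mlmc_unbiased_value(h, sample, rng, p_geo=0.5):
    """Randomized multilevel Monte Carlo estimator of h(E[Y]), where
    `sample(k)` draws k i.i.d. copies of Y.  The telescoping/antithetic
    construction makes the estimate unbiased despite the nonlinearity
    of h (under standard regularity conditions)."""
    # Draw a random level N with P(N = n) = p_geo * (1 - p_geo)^n.
    N = rng.geometric(p_geo) - 1
    p_N = p_geo * (1 - p_geo) ** N
    # Antithetic difference at level N: 2^(N+1) samples, split in half.
    Y = sample(2 ** (N + 1))
    fine = h(Y.mean(axis=0))
    coarse = 0.5 * (h(Y[: 2 ** N].mean(axis=0)) +
                    h(Y[2 ** N:].mean(axis=0)))
    # Base term h(Y_1) plus the importance-weighted level correction.
    return h(sample(1).mean(axis=0)) + (fine - coarse) / p_N

# Usage: unbiased estimate of (E[Y])^2 with Y ~ N(1, 1), via h(m) = m^2.
rng = np.random.default_rng(0)
est = mlmc_unbiased_value(lambda m: m ** 2,
                          lambda k: rng.normal(1.0, 1.0, k), rng)
```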
An Away-Step Frank-Wolfe Method for Minimizing Logarithmically-Homogeneous Barriers
We present and analyze a new away-step Frank-Wolfe method for the convex
optimization problem $\min_{x \in \mathcal{X}} f(\mathsf{A}x) + \langle c, x \rangle$, where $f$ is a $\theta$-logarithmically-homogeneous
self-concordant barrier, $\mathsf{A}$ is a linear operator, $\langle c, \cdot \rangle$ is a linear function and $\mathcal{X}$ is a nonempty polytope.
We establish affine-invariant global linear convergence rates for both the
objective gaps and the Frank-Wolfe gaps generated by our method. When
specialized to the D-optimal design problem, our results settle a question left
open since Ahipasaoglu, Sun and Todd (2008). We also show that the iterates
generated by our method will land on a face of the polytope $\mathcal{X}$ in a finite
number of iterations, and hence our method may have improved local linear
convergence rates.
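For intuition, the away-step mechanism on a vertex-represented polytope looks as follows; this generic sketch uses an $L$-smooth short step rather than the paper's self-concordance-based step size, and all names are illustrative assumptions.

```python
import numpy as np

def away_step_fw(grad_f, vertices, L, x0_weights, n_iters=500):
    """Generic away-step Frank-Wolfe over conv(vertices) for an
    L-smooth convex objective, maintaining the iterate as a convex
    combination of vertices (weights w)."""
    V = np.asarray(vertices, dtype=float)       # (num_vertices, dim)
    w = np.asarray(x0_weights, dtype=float)     # barycentric weights
    for _ in range(n_iters):
        x = w @ V
        g = grad_f(x)
        scores = V @ g
        s = np.argmin(scores)                   # Frank-Wolfe vertex
        active = np.flatnonzero(w > 0)
        a = active[np.argmax(scores[active])]   # away vertex
        d_fw, d_aw = V[s] - x, x - V[a]
        if -g @ d_fw >= -g @ d_aw:              # steeper descent wins
            d, gamma_max, away = d_fw, 1.0, None
        else:
            d, gamma_max, away = d_aw, w[a] / (1.0 - w[a]), a
        gap = -g @ d
        if gap <= 1e-12:
            break
        gamma = min(gap / (L * (d @ d)), gamma_max)  # short step
        if away is None:                        # FW step: mass onto s
            w *= (1 - gamma); w[s] += gamma
        else:                                   # away step: mass off a
            w *= (1 + gamma); w[away] -= gamma
    return w @ V, w
```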
Convergence of the Exponentiated Gradient Method with Armijo Line Search
Consider the problem of minimizing a convex differentiable function on the
probability simplex, spectrahedron, or set of quantum density matrices. We
prove that the exponentiated gradient method with Armijo line search always
converges to the optimum, if the sequence of the iterates possesses a strictly
positive limit point (element-wise for the vector case, and with respect to the
Löwner partial ordering for the matrix case). To the best of our knowledge, this
is the first convergence result for a mirror descent-type method that only
requires differentiability. The proof exploits self-concordant likeness of the
log-partition function, which is of independent interest.
Comment: 18 pages.
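A minimal sketch of the vector case (exponentiated gradient on the probability simplex with Armijo backtracking) follows; the backtracking constants and names are illustrative, and the paper's exact sufficient-decrease condition may differ.

```python
import numpy as np

def eg_armijo(f, grad_f, x0, eta0=1.0, beta=0.5, sigma=1e-4, n_iters=100):
    """Exponentiated gradient (entropic mirror descent) on the
    probability simplex with Armijo backtracking line search."""
    x = x0 / x0.sum()
    for _ in range(n_iters):
        g = grad_f(x)
        eta = eta0
        while True:
            y = x * np.exp(-eta * (g - g.max()))  # stabilized EG update
            y /= y.sum()
            # Armijo condition: sufficient decrease along the EG step.
            if f(y) <= f(x) + sigma * (g @ (y - x)) or eta < 1e-12:
                break
            eta *= beta                           # backtrack
        x = y
    return x

# Usage: minimize a convex quadratic over the probability simplex.
A = np.array([[2.0, 0.2, 0.1], [0.2, 1.5, 0.3], [0.1, 0.3, 1.0]])
x_star = eg_armijo(lambda x: 0.5 * x @ A @ x, lambda x: A @ x, np.ones(3))
```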