Stochastic Frank-Wolfe Methods for Nonconvex Optimization
We study Frank-Wolfe methods for nonconvex stochastic and finite-sum
optimization problems. Frank-Wolfe methods (in the convex case) have gained
tremendous recent interest in machine learning and optimization communities due
to their projection-free property and their ability to exploit structured
constraints. However, our understanding of these algorithms in the nonconvex
setting is fairly limited. In this paper, we propose nonconvex stochastic
Frank-Wolfe methods and analyze their convergence properties. For objective
functions that decompose into a finite-sum, we leverage ideas from variance
reduction techniques for convex optimization to obtain new variance reduced
nonconvex Frank-Wolfe methods that have provably faster convergence than the
classical Frank-Wolfe method. Finally, we show that the faster convergence
rates of our variance reduced methods also translate into improved convergence
rates for the stochastic setting.
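The projection-free property described above is easiest to see in the classical (deterministic) Frank-Wolfe step that these stochastic variants build on: the linear minimization oracle over a polytope returns a vertex, so iterates stay feasible as convex combinations, with no projection ever needed. A minimal sketch for a least-squares objective over the probability simplex (an illustrative setup, not the paper's experiments):

```python
import numpy as np

# Classical Frank-Wolfe for min_{x in simplex} 0.5 * ||Ax - b||^2.
# The linear minimization oracle over the simplex is trivial: the
# minimizer of <grad, s> over the simplex is the vertex e_i with
# i = argmin_i grad_i, so no projection step is required.

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x = np.ones(5) / 5          # start at the simplex centre
for t in range(200):
    grad = A.T @ (A @ x - b)
    s = np.zeros(5)
    s[np.argmin(grad)] = 1.0    # LMO: best simplex vertex
    gamma = 2.0 / (t + 2)       # classical step-size schedule
    x = (1 - gamma) * x + gamma * s

# Every iterate is a convex combination of vertices, hence feasible.
assert abs(x.sum() - 1.0) < 1e-9 and (x >= -1e-12).all()
```

The stochastic and variance-reduced methods in the paper replace the exact gradient above with minibatch or variance-reduced estimates while keeping the same projection-free update.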
Projection-free nonconvex stochastic optimization on Riemannian manifolds
We study stochastic projection-free methods for constrained optimization of
smooth functions on Riemannian manifolds, i.e., with additional constraints
beyond the parameter domain being a manifold. Specifically, we introduce
stochastic Riemannian Frank-Wolfe methods for nonconvex and geodesically convex
problems. We present algorithms for both purely stochastic optimization and
finite-sum problems. For the latter, we develop variance-reduced methods,
including a Riemannian adaptation of the recently proposed Spider technique.
For all settings, we recover convergence rates that are comparable to the
best-known rates for their Euclidean counterparts. Finally, we discuss
applications to two classic tasks: The computation of the Karcher mean of
positive definite matrices and Wasserstein barycenters for multivariate normal
distributions. For both tasks, stochastic FW methods yield state-of-the-art
empirical performance.
Comment: Under Review
Riemannian Optimization via Frank-Wolfe Methods
We study projection-free methods for constrained Riemannian optimization. In
particular, we propose the Riemannian Frank-Wolfe (RFW) method. We analyze
non-asymptotic convergence rates of RFW to an optimum for (geodesically) convex
problems, and to a critical point for nonconvex objectives. We also present a
practical setting under which RFW can attain a linear convergence rate. As a
concrete example, we specialize RFW to the manifold of positive definite
matrices and apply it to two tasks: (i) computing the matrix geometric mean
(Riemannian centroid); and (ii) computing the Bures-Wasserstein barycenter.
Both tasks involve geodesically convex interval constraints, for which we show
that the Riemannian "linear oracle" required by RFW admits a closed-form
solution; this result may be of independent interest. We further specialize RFW
to the special orthogonal group and show that here too, the Riemannian "linear
oracle" can be solved in closed form. Here, we describe an application to the
synchronization of data matrices (Procrustes problem). We complement our
theoretical results with an empirical comparison of RFW against
state-of-the-art Riemannian optimization methods and observe that RFW performs
competitively on the task of computing Riemannian centroids.
Comment: Under Review. Largely revised version, including an extended
experimental section and an application to the special orthogonal group and
the Procrustes problem
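For two positive definite matrices, the matrix geometric mean (Riemannian centroid) that RFW targets has a well-known closed form: the midpoint of the geodesic connecting them, A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}. A minimal numpy sketch of this two-matrix special case (the RFW algorithm itself, which averages n matrices, is not reproduced here):

```python
import numpy as np

def spd_sqrt(M):
    # Symmetric square root via eigendecomposition; M is assumed SPD.
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    # Geodesic midpoint of two SPD matrices:
    # A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}
    As = spd_sqrt(A)
    Ai = np.linalg.inv(As)
    return As @ spd_sqrt(Ai @ B @ Ai) @ As

A = np.array([[2.0, 0.0], [0.0, 8.0]])
B = np.array([[8.0, 0.0], [0.0, 2.0]])
G = geometric_mean(A, B)
# For commuting matrices this reduces to the entrywise
# geometric mean of the eigenvalues: here diag(4, 4).
```

For n > 2 matrices no closed form exists, which is what makes iterative Riemannian methods such as RFW necessary for the Karcher-mean task.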
Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as a
solution of an optimization problem. In this light, the sheer non-linear,
non-convex, and large-scale nature of many of these inversions gives rise to
some very challenging optimization problems. The inverse problem community has
long been developing various techniques for solving such optimization tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which might have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and the machine learning
communities, we hope that this survey can serve as a bridge in bringing
together these two communities and encourage cross-fertilization of ideas.
Comment: 13 pages