Operator Splitting Methods for Convex and Nonconvex Optimization
This dissertation focuses on a family of optimization methods called operator splitting methods. They solve complicated problems by decomposing the problem structure into simpler pieces and making progress on each piece separately. Over the past two decades, there has been a resurgence of interest in these methods as the demand for solving structured large-scale problems has grown. One major challenge for splitting methods is their sensitivity to ill-conditioning, which often makes it hard for them to reach high accuracy. Furthermore, their classical analyses are restricted to nice settings where solutions exist and everything is convex; much less is known when either of these assumptions breaks down. This work aims to address these issues. Specifically, we propose a novel acceleration technique called inexact preconditioning, which exploits second-order information at relatively low computational cost. We also show that certain splitting methods still work on problems without solutions, in the sense that their iterates reveal what goes wrong and how to fix it. Finally, for nonconvex problems with saddle points, we show that under certain assumptions, splitting methods almost surely converge only to local minima.
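The dissertation's own algorithms (e.g., inexact preconditioning) are not reproduced here; as a minimal sketch of the splitting idea it describes, the snippet below implements forward-backward splitting (proximal gradient) for the lasso problem, where the smooth least-squares term and the nonsmooth l1 term are handled by separate, simple steps. The problem data and parameter choices are illustrative assumptions.

```python
# A minimal sketch of forward-backward splitting for the lasso problem
#   min_x 0.5*||Ax - b||^2 + lam*||x||_1
# The two terms are "split": a gradient step on the smooth least-squares
# piece, then a proximal (soft-thresholding) step on the l1 piece.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))        # illustrative random data
b = rng.standard_normal(50)
lam = 0.1
t = 1.0 / np.linalg.norm(A, 2) ** 2       # step size 1/L, L = ||A||_2^2

x = np.zeros(100)
for _ in range(500):
    grad = A.T @ (A @ x - b)              # forward (gradient) step on the smooth part
    z = x - t * grad
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # backward (prox) step

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```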
Coordinate-Update Algorithms can Efficiently Detect Infeasible Optimization Problems
Coordinate update/descent algorithms are widely used in large-scale optimization due to their low per-iteration cost and scalability, but their behavior on infeasible or misspecified problems has received far less study than that of full-update methods. For coordinate-update methods to be adopted widely enough to serve as engines of general-purpose solvers, it is necessary to also understand their behavior on pathological problem instances. In this work, we show that the normalized iterates of randomized coordinate-update fixed-point iterations (RC-FPI) converge to the infimal displacement vector and use this result to design an efficient infeasibility detection method. We then extend the analysis to the setting where the coordinates are defined by a non-orthonormal basis, using the Friedrichs angle, and apply the machinery to decentralized optimization problems.
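As a hedged illustration of the abstract's main object (not the paper's exact algorithm or normalization), the sketch below runs a randomized coordinate-update fixed-point iteration on an infeasible feasibility problem: Douglas-Rachford splitting for two disjoint parallel hyperplanes, whose fixed-point operator has no fixed point and whose infimal displacement vector is the gap vector between the sets. The rescaling by n under uniform coordinate sampling is an assumption made for this sketch.

```python
# RC-FPI sketch on an infeasible problem. Douglas-Rachford (DR) splitting for
# the disjoint hyperplanes C = {x : x[0] = 0} and D = {x : x[0] = d} yields an
# operator with no fixed point; its infimal displacement vector is
# v = (d, 0, ..., 0). Each coordinate is updated with probability 1/n, so the
# normalized iterate is rescaled by n (assumption for this sketch).
import numpy as np

n, d = 5, 2.0

def T(x):
    # DR operator T = I - P_C + P_D(2 P_C - I); here it reduces to x -> x + v.
    p_c = x.copy(); p_c[0] = 0.0          # projection onto C
    p_d = (2 * p_c - x).copy(); p_d[0] = d  # reflect through C, project onto D
    return x - p_c + p_d

rng = np.random.default_rng(0)
x = rng.standard_normal(n)
k = 20000
for _ in range(k):
    i = rng.integers(n)                   # pick one coordinate uniformly
    x[i] = T(x)[i]                        # update only that coordinate

v_hat = n * x / k                         # normalized iterate
print("estimated v:", np.round(v_hat, 3))  # approaches (d, 0, ..., 0)
```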
Accelerated Infeasibility Detection of Constrained Optimization and Fixed-Point Iterations
As first-order optimization methods become the method of choice for solving large-scale optimization problems, optimization solvers based on first-order algorithms are being built. Such general-purpose solvers must robustly detect infeasible or misspecified problem instances, but the computational complexity of first-order methods for doing so has yet to be formally studied. In this work, we characterize the optimal accelerated rate of infeasibility detection.
We show that the standard fixed-point iteration achieves $\mathcal{O}(1/\sqrt{k})$ and $\mathcal{O}(1/k)$ rates, respectively, on the normalized iterates and the fixed-point residual converging to the infimal displacement vector, while the accelerated fixed-point iteration achieves $\mathcal{O}(1/k)$ and $\tilde{\mathcal{O}}(1/k^2)$ rates. We then provide a matching complexity lower bound to establish that $\Theta(1/k)$ is indeed the optimal accelerated rate.
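To make the objects concrete, the sketch below compares the standard fixed-point iteration with a Halpern-type anchored iteration (one natural accelerated candidate, not necessarily the paper's exact scheme) on Douglas-Rachford splitting for two disjoint balls; in this convex setting the infimal displacement vector equals the gap vector between the sets, and both fixed-point residuals T(x) - x approach it. The sets, anchor weights, and iteration counts are illustrative assumptions.

```python
# Standard vs. Halpern-type anchored fixed-point iteration on an infeasible
# problem: DR splitting for two disjoint balls in R^2. The infimal
# displacement vector is the gap vector, here v = (1, 0).
import numpy as np

def proj_ball(x, center, radius):
    d = x - center
    nrm = np.linalg.norm(d)
    return x if nrm <= radius else center + radius * d / nrm

c1, r1 = np.array([0.0, 0.0]), 1.0
c2, r2 = np.array([3.0, 0.0]), 1.0
v = np.array([3.0 - r1 - r2, 0.0])       # gap vector = (1, 0)

def T(x):
    # Douglas-Rachford operator T = I - P_C + P_D(2 P_C - I)
    p = proj_ball(x, c1, r1)
    return x - p + proj_ball(2 * p - x, c2, r2)

x0 = np.array([5.0, 4.0])
x_std, x_hal = x0.copy(), x0.copy()
for k in range(2000):
    x_std = T(x_std)                              # standard fixed-point iteration
    lam = 1.0 / (k + 2)                           # anchoring weight
    x_hal = lam * x0 + (1 - lam) * T(x_hal)       # Halpern-type anchored iteration

for name, x in [("standard", x_std), ("anchored", x_hal)]:
    print(name, "residual error:", np.linalg.norm((T(x) - x) - v))
```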