Matrix Scaling and Balancing via Box Constrained Newton's Method and Interior Point Methods
In this paper, we study matrix scaling and balancing, which are fundamental
problems in scientific computing, with a long line of work on them that dates
back to the 1960s. We provide algorithms for both these problems that, ignoring
logarithmic factors involving the dimension of the input matrix and the size of
its entries, both run in time $\widetilde{O}(m \log\kappa \log^2(1/\epsilon))$,
where $m$ is the number of nonzero entries of the input matrix and $\epsilon$ is
the amount of error we are willing to tolerate. Here, $\kappa$ represents the
ratio between the largest and the smallest entries of the optimal scalings. This
implies that our algorithms run in nearly-linear time whenever $\kappa$ is
quasi-polynomial, which includes, in particular, the case of strictly positive
matrices. We complement our results
by providing a separate algorithm that uses an interior-point method and runs
in time $\widetilde{O}(m^{3/2})$.
In order to establish these results, we develop a new second-order
optimization framework that enables us to treat both problems in a unified and
principled manner. This framework identifies a certain generalization of linear
system solving that we can use to efficiently minimize a broad class of
functions, which we call second-order robust. We then show that in the context
of the specific functions capturing matrix scaling and balancing, we can
leverage and generalize the work on Laplacian system solving to make the
algorithms obtained via this framework very efficient.
Comment: To appear in FOCS 2017
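The paper's contribution is a second-order method, but the problem itself is easy to state. As a point of reference only (this is the classical first-order baseline, not the paper's box-constrained Newton method), here is a minimal sketch of Sinkhorn's alternating scaling iteration, which converges for strictly positive matrices; the tolerance parameter plays the role of $\epsilon$ above:

```python
# Minimal sketch of Sinkhorn's alternating scaling (classical baseline,
# NOT the paper's algorithm): find positive diagonal scalings x, y so
# that diag(x) A diag(y) is doubly stochastic.
import numpy as np

def sinkhorn_scale(A, tol=1e-9, max_iter=10_000):
    x = np.ones(A.shape[0])          # row scalings (diagonal of X)
    y = np.ones(A.shape[1])          # column scalings (diagonal of Y)
    for _ in range(max_iter):
        x = 1.0 / (A @ y)            # force every row sum to 1
        y = 1.0 / (A.T @ x)          # force every column sum to 1
        B = x[:, None] * A * y[None, :]
        err = max(abs(B.sum(axis=0) - 1).max(), abs(B.sum(axis=1) - 1).max())
        if err <= tol:               # tol plays the role of epsilon
            return x, y
    return x, y

rng = np.random.default_rng(0)
A = rng.random((5, 5)) + 0.1         # strictly positive, so scalings stay bounded
x, y = sinkhorn_scale(A)
B = x[:, None] * A * y[None, :]
print(B.sum(axis=0), B.sum(axis=1))  # both close to the all-ones vector
```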
Conic Optimization Theory: Convexification Techniques and Numerical Algorithms
Optimization is at the core of control theory and appears in several areas of
this field, such as optimal control, distributed control, system
identification, robust control, state estimation, model predictive control and
dynamic programming. The recent advances in various topics of modern
optimization have also been revamping the area of machine learning. Motivated
by the crucial role of optimization theory in the design, analysis, control and
operation of real-world systems, this tutorial paper offers a detailed overview
of some major advances in this area, namely conic optimization and its emerging
applications. First, we discuss the importance of conic optimization in
different areas. Then, we explain seminal results on the design of hierarchies
of convex relaxations for a wide range of nonconvex problems. Finally, we study
different numerical algorithms for large-scale conic optimization problems.
Comment: 18 pages
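To make the relaxation-hierarchy idea concrete, here is a small self-contained example (not taken from the paper, and assuming the cvxpy modeling library is available): the classical Shor/SDP relaxation of Max-Cut, which is the first level of the convex-relaxation hierarchies mentioned above.

```python
# Sketch of the first-level (Shor) SDP relaxation of Max-Cut: the
# nonconvex constraint x in {-1,1}^n with X = x x^T is relaxed to
# "X positive semidefinite with unit diagonal".
import numpy as np
import cvxpy as cp

# Small weighted graph given by a symmetric adjacency matrix.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L_graph = np.diag(W.sum(axis=1)) - W   # graph Laplacian

X = cp.Variable(W.shape, symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(L_graph @ X) / 4),
                  [X >> 0, cp.diag(X) == 1])
prob.solve()
print("SDP upper bound on the maximum cut:", prob.value)
```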
Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation
We provide new tools for worst-case performance analysis of the gradient (or
steepest descent) method of Cauchy for smooth strongly convex functions, and
Newton's method for self-concordant functions, including the case of inexact
search directions. The analysis uses semidefinite programming performance
estimation, as pioneered by Drori and Teboulle [Mathematical Programming,
145(1-2):451-482, 2014], and extends recent performance estimation results for
the method of Cauchy by the authors [Optimization Letters, 11(7), 1185-1199,
2017]. To illustrate the applicability of the tools, we demonstrate a novel
complexity analysis of short step interior point methods using inexact search
directions. As an example in this framework, we sketch how to give a rigorous
worst-case complexity analysis of a recent interior point method by Abernethy
and Hazan [PMLR, 48:2520-2528, 2016].
Comment: 22 pages, 1 figure. Title of earlier version was "Worst-case
convergence analysis of gradient and Newton methods through semidefinite
programming performance estimation"
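As a rough sketch of what semidefinite programming performance estimation looks like in practice (this is not the authors' code; it assumes cvxpy is available and uses hypothetical example parameters), the following sets up the one-step PEP for gradient descent on $L$-smooth, $\mu$-strongly convex functions, using the interpolation conditions of Taylor, Hendrickx, and Glineur (2017). The optimal value should match the classical contraction factor $\max(|1-\gamma\mu|, |1-\gamma L|)^2$.

```python
# One-step performance estimation problem (PEP) for gradient descent,
# posed as a small SDP over the Gram matrix of (x0 - x*, g0, g1).
import numpy as np
import cvxpy as cp

mu, L, gamma = 0.1, 1.0, 1.0                 # hypothetical parameters (gamma = 1/L)

e = np.eye(3)                                # Gram coordinates of the three vectors
x = {"*": np.zeros(3), "0": e[0], "1": e[0] - gamma * e[1]}  # x1 = x0 - gamma*g0
g = {"*": np.zeros(3), "0": e[1], "1": e[2]}                 # gradient is 0 at x*

G = cp.Variable((3, 3), symmetric=True)                 # Gram matrix variable
f = {"*": 0.0, "0": cp.Variable(), "1": cp.Variable()}  # function values, f* = 0 wlog

def ip(u, v):
    # Inner product <u, v>, expressed linearly in the Gram matrix G.
    return u @ G @ v

constraints = [G >> 0, ip(x["0"], x["0"]) == 1.0]  # PSD + normalize ||x0 - x*|| = 1

# Interpolation conditions characterizing L-smooth, mu-strongly convex
# functions, imposed over all ordered pairs of points {x*, x0, x1}.
for i in "*01":
    for j in "*01":
        if i == j:
            continue
        dx, dg = x[i] - x[j], g[i] - g[j]
        constraints.append(
            f[i] - f[j] - ip(g[j], x[i] - x[j])
            >= (ip(dg, dg) / L + mu * ip(dx, dx) - (2 * mu / L) * ip(dg, dx))
               / (2 * (1 - mu / L))
        )

# Worst case of ||x1 - x*||^2 over all admissible functions and starting points.
prob = cp.Problem(cp.Maximize(ip(x["1"], x["1"])), constraints)
prob.solve()
print("PEP worst-case contraction:", prob.value)
print("classical rate:", max(abs(1 - gamma * mu), abs(1 - gamma * L)) ** 2)
```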