Fast, Provably Convergent IRLS Algorithm for p-norm Linear Regression
Linear regression in $\ell_p$-norm is a canonical optimization problem that
arises in several applications, including sparse recovery, semi-supervised
learning, and signal processing. Generic convex optimization algorithms for
solving $\ell_p$-regression are slow in practice. Iteratively Reweighted Least
Squares (IRLS) is an easy-to-implement family of algorithms for solving these
problems that has been studied for over 50 years. However, these algorithms
often diverge for p > 3, and since the work of Osborne (1985), it has been an
open problem whether there is an IRLS algorithm that is guaranteed to converge
rapidly for p > 3. We propose p-IRLS, the first IRLS algorithm that provably
converges geometrically for any $p \in [2,\infty)$. Our algorithm is simple to
implement and is guaranteed to find a $(1+\epsilon)$-approximate solution in
$O\big(p^{3.5}\, m^{\frac{p-2}{2(p-1)}} \log\frac{m}{\epsilon}\big) \le O_p\big(m^{1/2} \log\frac{m}{\epsilon}\big)$ iterations. Our experiments demonstrate that it
performs even better than our theoretical bounds, beats the standard Matlab/CVX
implementation for solving these problems by 10--50x, and is the fastest among
available implementations in the high-accuracy regime.
Comment: Code for this work is available at
https://github.com/utoronto-theory/pIRLS
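To make the reweighting scheme concrete, here is a minimal sketch of classical IRLS for $\ell_p$ regression in Python. It is the textbook iteration (solve a least-squares problem with weights $|r_i|^{p-2}$ built from the current residuals), not the paper's p-IRLS, which modifies the weight updates to obtain its geometric convergence guarantee; the padding constant `eps` is an illustrative choice.

```python
import numpy as np

def irls_pnorm(A, b, p=4, iters=50, eps=1e-8):
    """Minimize ||Ax - b||_p by plain IRLS (may diverge for large p).

    Each step solves the weighted least-squares problem
    min_x sum_i w_i (a_i^T x - b_i)^2 with weights w_i = |r_i|^{p-2}.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # start at the l2 solution
    for _ in range(iters):
        r = A @ x - b
        w = np.abs(r) ** (p - 2) + eps          # reweight by the residuals
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
    return x

# toy usage
rng = np.random.default_rng(0)
A, b = rng.standard_normal((200, 10)), rng.standard_normal(200)
x = irls_pnorm(A, b, p=4)
```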
Faster p-norm minimizing flows, via smoothed q-norm problems
We present faster high-accuracy algorithms for computing $\ell_p$-norm
minimizing flows. On a graph with $m$ edges, our algorithm can compute a
$(1+1/\text{poly}(m))$-approximate unweighted $\ell_p$-norm minimizing flow
with $pm^{1+\frac{1}{p-1}+o(1)}$ operations, for any $p \ge 2$, giving the best
bound for all $p \gtrsim 5.24$. Combined with the algorithm from the work of
Adil et al. (SODA '19), we can now compute such flows for any $2 \le p \le m^{o(1)}$ in time at most $O(m^{1.24})$. In comparison, the previous best
running time was $\Omega(m^{1.33})$ for large constant $p$. For
$p \sim \delta^{-1}\log m$, our algorithm computes a $(1+\delta)$-approximate
maximum flow on undirected graphs using $m^{1+o(1)}\delta^{-1}$ operations,
matching the current best bound, albeit only for unit-capacity graphs.
We also give an algorithm for solving general $\ell_p$-norm regression
problems for large $p$. Our algorithm makes $pm^{1/3+o(1)}\log^2(1/\epsilon)$
calls to a linear solver. This
gives the first high-accuracy algorithm for computing weighted $\ell_p$-norm
minimizing flows that runs in time $o(m^{1.5})$ for some $p = m^{\Omega(1)}$. Our
key technical contribution is to show that smoothed $\ell_p$-norm problems
introduced by Adil et al. are interreducible for different values of $p$. No
such reduction is known for standard $\ell_p$-norm problems.
Comment: ACM-SIAM Symposium on Discrete Algorithms (SODA 2020)
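As a quick sanity check on the approximate-max-flow claim (our arithmetic, not a derivation from the paper): for a congestion vector $x$ on $m$ edges,
\[
\|x\|_\infty \le \|x\|_p \le m^{1/p}\,\|x\|_\infty,
\qquad\text{and}\qquad
m^{1/p} = e^{(\ln m)/p} \le e^{\delta} = 1 + O(\delta)
\quad\text{once } p \ge \delta^{-1}\ln m,
\]
so an $\ell_p$-minimizing flow with $p \sim \delta^{-1}\log m$ has $\ell_\infty$ congestion within a $1+O(\delta)$ factor of optimal, and at this $p$ the $pm^{1+\frac{1}{p-1}+o(1)}$ operation count collapses to $m^{1+o(1)}\delta^{-1}$.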
Iteratively Reweighted Least Squares for Basis Pursuit with Global Linear Convergence Rate
The recovery of sparse data is at the core of many applications in machine
learning and signal processing. While such problems can be tackled using
$\ell_1$-regularization as in the LASSO estimator and in the Basis Pursuit
approach, specialized algorithms are typically required to solve the
corresponding high-dimensional non-smooth optimization for large instances.
Iteratively Reweighted Least Squares (IRLS) is a widely used algorithm for this
purpose due to its excellent numerical performance. However, while existing theory
is able to guarantee convergence of this algorithm to the minimizer, it does
not provide a global convergence rate. In this paper, we prove that a variant
of IRLS converges with a global linear rate to a sparse solution, i.e., with a
linear error decrease occurring immediately from any initialization, if the
measurements fulfill the usual null space property assumption. We support our
theory by numerical experiments showing that our linear rate captures the
correct dimension dependence. We anticipate that our theoretical findings will
lead to new insights for many other use cases of the IRLS algorithm, such as in
low-rank matrix recovery.
Comment: 26 pages, 3 figures
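For concreteness, here is a minimal IRLS sketch for the basis-pursuit problem $\min \|x\|_1$ s.t. $Ax = b$, in the classical style of Daubechies et al.; the variant analyzed in the paper uses a specific smoothing-parameter update, which this sketch only mimics with a crude geometric decrease of `eps`.

```python
import numpy as np

def irls_basis_pursuit(A, b, iters=100, eps=1.0):
    """IRLS sketch for min ||x||_1 subject to Ax = b.

    Each step solves min_x sum_i x_i^2 / d_i s.t. Ax = b with
    d_i = sqrt(x_i^2 + eps^2); the constrained weighted least-squares
    step has the closed form x = D A^T (A D A^T)^{-1} b, D = diag(d).
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # feasible l2 starting point
    for _ in range(iters):
        d = np.sqrt(x**2 + eps**2)              # smoothed |x_i|
        AD = A * d[None, :]                     # A @ diag(d)
        x = d * (A.T @ np.linalg.solve(AD @ A.T, b))
        eps = max(0.9 * eps, 1e-12)             # crude smoothing decrease
    return x
```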
Acceleration with a Ball Optimization Oracle
Consider an oracle which takes a point $x$ and returns the minimizer of a
convex function $f$ in an $\ell_2$ ball of radius $r$ around $x$. It is
straightforward to show that roughly $r^{-1}\log\frac{1}{\epsilon}$ calls to
the oracle suffice to find an $\epsilon$-approximate minimizer of $f$ in an
$\ell_2$ unit ball. Perhaps surprisingly, this is not optimal: we design an
accelerated algorithm which attains an $\epsilon$-approximate minimizer with
roughly $r^{-2/3}\log\frac{1}{\epsilon}$ oracle queries, and give a matching
lower bound. Further, we implement ball optimization oracles for functions with
locally stable Hessians using a variant of Newton's method. The resulting
algorithm applies to a number of problems of practical and theoretical import,
improving upon previous results for logistic and $\ell_\infty$ regression and
achieving guarantees comparable to the state-of-the-art for $\ell_p$
regression.
Comment: 37 pages
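A toy illustration of the oracle interface (the names and the projected-gradient inner solver are our own stand-ins; the paper implements the oracle with a Newton-type method for functions with locally stable Hessians):

```python
import numpy as np

def ball_oracle(grad, x0, r, steps=200, lr=0.05):
    """Approximately minimize f over the l2 ball of radius r around x0,
    here by simple projected gradient descent using f's gradient `grad`."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad(x)
        d = x - x0
        n = np.linalg.norm(d)
        if n > r:                       # project back onto the ball
            x = x0 + (r / n) * d
    return x

def minimize_with_oracle(grad, x0, r, rounds):
    """Naive outer loop: ~r^{-1} log(1/eps) oracle calls reach an
    eps-approximate minimizer; the paper's accelerated scheme needs
    only ~r^{-2/3} log(1/eps) calls."""
    x = x0
    for _ in range(rounds):
        x = ball_oracle(grad, x, r)
    return x
```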
Almost-linear-time Weighted $\ell_p$-norm Solvers in Slightly Dense Graphs via Sparsification
We give almost-linear-time algorithms for constructing sparsifiers with
$n\,\text{poly}(\log n)$ edges that approximately preserve weighted
$(\ell^{2}_{2} + \ell^{p}_{p})$ flow or voltage objectives on graphs. For flow
objectives, this is the first sparsifier construction for such mixed objectives
beyond unit $\ell_p$ weights, and it is based on expander decompositions. For
voltage objectives, we give the first sparsifier construction for these
objectives, which we build using graph spanners and leverage-score sampling.
Together with the iterative refinement framework of Adil et al. (SODA '19) and
a new multiplicative-weights-based constant-approximation algorithm for mixed
objectives, we show that the sparsifiers can be used to compute
$(1+2^{-\text{poly}(\log n)})$-approximate solutions to weighted $\ell_p$-norm
minimizing flows or voltages in $p\big(m^{1+o(1)} + n^{4/3 + o(1)}\big)$ time for
$p=\omega(1)$, which is almost linear for graphs that are slightly dense
($m \ge n^{4/3 + o(1)}$).
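For a flavor of the sparsification toolkit, here is a toy leverage-score sampler for the quadratic ($\ell_2^2$) part of the objective. All names are ours, and the paper's actual constructions (expander decompositions for flows, spanners plus leverage scores for voltages, handling the full $\ell_2^2 + \ell_p^p$ objective) are substantially more involved.

```python
import numpy as np

def leverage_score_sample(B, w, q, seed=0):
    """Toy spectral sparsifier: sample q edges of a weighted graph with
    probability proportional to their leverage scores, reweighting the
    kept edges so the Laplacian stays unbiased in expectation.

    B: m x n signed edge-vertex incidence matrix; w: positive edge weights.
    """
    L = B.T @ (w[:, None] * B)                       # weighted Laplacian
    Lpinv = np.linalg.pinv(L)
    tau = w * np.einsum('ij,jk,ik->i', B, Lpinv, B)  # w_e * b_e^T L^+ b_e
    p = tau / tau.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(w), size=q, p=p)
    w_new = np.zeros_like(w)
    np.add.at(w_new, idx, w[idx] / (q * p[idx]))     # importance reweighting
    return w_new                                     # sparsified weights
```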