A Superlinear Convergence Framework for Kurdyka-{\L}ojasiewicz Optimization
This work extends the iterative framework proposed by Attouch et al. (in
Math. Program. 137: 91-129, 2013) for minimizing a nonconvex and nonsmooth
function $\Phi$ so that the generated sequence possesses a Q-superlinear
convergence rate. This framework consists of a monotone decrease condition, a
relative error condition and a continuity condition, and the first two
conditions both involve a parameter $p>0$. We justify that any sequence
conforming to this framework is globally convergent when $\Phi$ is a
Kurdyka-{\L}ojasiewicz (KL) function, and that the convergence has a Q-superlinear
rate of order $\frac{p}{\theta(1+p)}$ when $\Phi$ is a KL function of exponent
$\theta\in(0,\frac{p}{1+p})$. Then, we illustrate that the iterate sequence
generated by an inexact $q$-order regularization method for composite
optimization problems with a nonconvex and nonsmooth term belongs to this
framework; consequently, a Q-superlinear convergence rate of order $4/3$ is
achieved for the first time for an inexact cubic regularization method applied
to this class of composite problems under the KL property of exponent $1/2$.
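For orientation, the three conditions take roughly the following shape, with generic constants $a,b>0$ and $w^{k+1}$ denoting a subgradient (a sketch only; the precise form is as in the paper, and taking $p=1$ recovers the original conditions of Attouch et al.):

\begin{align*}
&\text{(H1) monotone decrease:} && \Phi(x^{k+1}) + a\,\|x^{k+1}-x^k\|^{1+p} \le \Phi(x^k),\\
&\text{(H2) relative error:} && \|w^{k+1}\| \le b\,\|x^{k+1}-x^k\|^{p}, \quad w^{k+1}\in\partial\Phi(x^{k+1}),\\
&\text{(H3) continuity:} && x^{k_j}\to\bar{x} \ \text{and}\ \Phi(x^{k_j})\to\Phi(\bar{x}) \ \text{along a subsequence.}
\end{align*}

Intuitively, a larger $p$ forces the subgradients to shrink faster relative to the step lengths, which is what upgrades the KL-based linear rate to a superlinear one.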
Variance Reduced Random Relaxed Projection Method for Constrained Finite-sum Minimization Problems
For many applications in signal processing and machine learning, we are
tasked with minimizing a large sum of convex functions subject to a large
number of convex constraints. In this paper, we devise a new random projection
method (RPM) to efficiently solve this problem. Compared with existing RPMs,
our proposed algorithm features two useful algorithmic ideas. First, at each
iteration, instead of projecting onto the set defined by one of the
constraints, our algorithm only requires projecting onto a half-space
approximation of that set, which significantly reduces the computational cost
as it admits a closed-form formula. Second, to exploit the structure that the
objective is a sum, variance reduction is incorporated into our algorithm to
further improve the performance. As theoretical contributions, under an error
bound condition and other standard assumptions, we prove that the proposed RPM
converges to an optimal solution and that both optimality and feasibility gaps
vanish at a sublinear rate. We also provide sufficient conditions for the error
bound condition to hold. Experiments on a beamforming problem and a robust
classification problem demonstrate the superiority of our RPM over existing
ones.
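To make the half-space idea concrete: for a smooth convex constraint $g(x)\le 0$, linearizing $g$ at the current iterate yields a half-space containing the constraint set, and projection onto a half-space has a closed form. A minimal Python sketch (the constraint, its gradient, and the single-constraint sampling are illustrative assumptions, not the paper's exact algorithm):

import numpy as np

def project_halfspace(x, a, b):
    # Project x onto the half-space {z : <a, z> <= b}; closed form.
    viol = a @ x - b
    if viol <= 0.0:
        return x  # already inside the half-space
    return x - (viol / (a @ a)) * a

def relaxed_projection_step(x, g, grad_g):
    # Linearize the (convex, smooth) constraint g(z) <= 0 at x:
    #   g(x) + <grad_g(x), z - x> <= 0,  i.e.  <a, z> <= b
    # with a = grad_g(x) and b = <a, x> - g(x), then project onto it.
    a = grad_g(x)
    b = a @ x - g(x)
    return project_halfspace(x, a, b)

# Toy usage with one sampled constraint ||x||^2 - 1 <= 0 (illustrative).
g = lambda x: x @ x - 1.0
grad_g = lambda x: 2.0 * x
x = np.array([2.0, 1.0])
print(relaxed_projection_step(x, g, grad_g))  # moves toward the unit ball

Because the projection never needs an inner solver, each iteration costs one constraint evaluation, one gradient, and a few vector operations.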
An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
We propose a first-order method to solve the cubic regularization subproblem
(CRS) based on a novel reformulation. The reformulation is a constrained convex
optimization problem whose feasible region admits an easily computable
projection. Our reformulation requires computing the minimum eigenvalue of the
Hessian. To avoid the expensive computation of the exact minimum eigenvalue, we
develop a surrogate problem to the reformulation where the exact minimum
eigenvalue is replaced with an approximate one. We then apply first-order
methods such as Nesterov's accelerated projected gradient method (APG) and
the projected Barzilai-Borwein method to solve the surrogate problem. As our main
theoretical contribution, we show that when an $\epsilon$-approximate minimum
eigenvalue is computed by the Lanczos method and the surrogate problem is
approximately solved by APG, our approach returns an $\epsilon$-approximate
solution to CRS in $\tilde{O}(\epsilon^{-1/2})$ matrix-vector multiplications
(where $\tilde{O}(\cdot)$ hides the logarithmic factors). Numerical experiments
show that our methods are comparable to and outperform the Krylov subspace
method in the easy and hard cases, respectively. We further implement our
methods as subproblem solvers of adaptive cubic regularization methods, and
numerical results show that our algorithms are comparable to the
state-of-the-art algorithms.
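For illustration, the approximate minimum eigenvalue that the surrogate problem needs can be obtained from Hessian-vector products alone via a Lanczos-type eigensolver; a minimal sketch (SciPy's eigsh is used here as a stand-in for the Lanczos routine analyzed in the paper, and the tolerance is a placeholder):

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def approx_min_eigenvalue(hess_vec, n, tol=1e-3):
    # Approximate the smallest eigenvalue of a symmetric n x n Hessian
    # that is available only through matrix-vector products hess_vec(v).
    H = LinearOperator((n, n), matvec=hess_vec, dtype=float)
    # eigsh runs an implicitly restarted Lanczos iteration; 'SA' asks
    # for the smallest algebraic eigenvalue.
    return eigsh(H, k=1, which='SA', tol=tol,
                 return_eigenvectors=False)[0]

# Toy usage: an explicit symmetric matrix standing in for the Hessian.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
H = (M + M.T) / 2.0
lam_min = approx_min_eigenvalue(lambda v: H @ v, 50)

Since only matrix-vector products are required, the Hessian never has to be formed explicitly, which is what makes the overall complexity count in matrix-vector multiplications meaningful.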
Duality-based Higher-order Non-smooth Optimization on Manifolds
We propose a method for solving non-smooth optimization problems on
manifolds. In order to obtain superlinear convergence, we apply a Riemannian
Semi-smooth Newton method to a non-smooth non-linear primal-dual optimality
system based on a recent extension of Fenchel duality theory to Riemannian
manifolds. We also propose an inexact version of the Riemannian Semi-smooth
Newton method and establish conditions for local linear and superlinear
convergence. Numerical experiments on $\ell_2$-TV-like problems confirm superlinear
convergence on manifolds with positive and negative curvature.
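The Riemannian and duality machinery is beyond a short sketch, but the semi-smooth Newton mechanism itself can be illustrated in the Euclidean case: apply Newton's method, using an element of the Clarke generalized Jacobian, to the nonsmooth fixed-point residual of a LASSO-type problem. Everything below (problem, data, choice of Jacobian element, absence of globalization) is an illustrative assumption, not the paper's method:

import numpy as np

def soft_threshold(u, lam):
    # Proximal map of lam * ||.||_1 (piecewise linear, semi-smooth).
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def semismooth_newton_lasso(A, b, lam, x0, iters=20):
    # Semi-smooth Newton on F(x) = x - S_lam(x - A^T(Ax - b)) = 0,
    # whose roots are exactly the minimizers of
    #   0.5 * ||Ax - b||^2 + lam * ||x||_1.
    # Purely local iteration, matching a local convergence analysis.
    n = x0.size
    x = x0.copy()
    for _ in range(iters):
        u = x - A.T @ (A @ x - b)          # gradient step fed to the prox
        F = x - soft_threshold(u, lam)
        if np.linalg.norm(F) < 1e-10:
            break
        # One element of the Clarke generalized Jacobian of F:
        #   J = I - D (I - A^T A),  D = diag(1{|u_i| > lam}).
        d = (np.abs(u) > lam).astype(float)
        J = np.eye(n) - d[:, None] * (np.eye(n) - A.T @ A)
        x = x + np.linalg.solve(J, -F)
    return x

# Toy usage (illustrative data).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10)) / np.sqrt(30)
b = rng.standard_normal(30)
x = semismooth_newton_lasso(A, b, lam=0.1, x0=np.zeros(10))

The superlinear behavior comes from the Newton step on the generalized Jacobian; the paper's contribution is carrying this mechanism over to primal-dual optimality systems on Riemannian manifolds.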