LogSpecT: Feasible Graph Learning Model from Stationary Signals with Recovery Guarantees
Graph learning from signals is a core task in Graph Signal Processing (GSP).
One of the most commonly used models to learn graphs from stationary signals is
SpecT. However, its practical formulation rSpecT is known to be sensitive to
hyperparameter selection and, even worse, to suffer from infeasibility. In this
paper, we give the first condition that guarantees the infeasibility of rSpecT
and design a novel model (LogSpecT) and its practical formulation (rLogSpecT)
to overcome this issue. Contrary to rSpecT, the novel practical model rLogSpecT
is always feasible. Furthermore, we provide recovery guarantees of rLogSpecT,
which are derived from modern optimization tools related to epi-convergence.
These tools could be of independent interest and significant for various
learning problems. To demonstrate the advantages of rLogSpecT in practice, a
highly efficient algorithm based on the linearized alternating direction method
of multipliers (L-ADMM) is proposed. The subproblems of L-ADMM admit
closed-form solutions, and its convergence is guaranteed. Extensive numerical
results on both synthetic and real networks corroborate the stability and
superiority of our proposed methods, underscoring their potential for various
graph learning applications.
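The rLogSpecT subproblems themselves are not given in this abstract, so the following is only a minimal sketch of the L-ADMM template it refers to, applied to a stand-in lasso-type splitting where linearizing the augmented quadratic term likewise makes every subproblem closed-form. The problem data, step rule, and names are illustrative assumptions, not the paper's model.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 -- the closed-form x-subproblem."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ladmm_lasso(A, b, lam=0.1, rho=1.0, iters=1000):
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)  # u is the scaled dual variable
    mu = 1.01 * rho * np.linalg.norm(A, 2) ** 2      # linearization constant >= rho*||A||_2^2
    for _ in range(iters):
        # x-step: linearize (rho/2)*||Ax - z + u||^2 at the current x, then prox
        grad = rho * A.T @ (A @ x - z + u)
        x = soft_threshold(x - grad / mu, lam / mu)
        # z-step: exact minimizer of 0.5*||z - b||^2 + (rho/2)*||Ax - z + u||^2
        z = (b + rho * (A @ x + u)) / (1.0 + rho)
        # dual ascent on the linear constraint Ax = z
        u = u + A @ x - z
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true
    x_hat = ladmm_lasso(A, b)
    print("largest entries (true support is 0..4):", np.argsort(-np.abs(x_hat))[:5])
```

The design choice mirrors the abstract's claim: linearizing the quadratic coupling turns the x-subproblem into a single proximal step, so every iteration stays closed-form.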
Rotation Group Synchronization via Quotient Manifold
Rotation group synchronization is an important inverse
problem and has attracted intense attention from numerous application fields
such as graph realization, computer vision, and robotics. In this paper, we
focus on the least-squares estimator of rotation group synchronization with
general additive noise models, which is a nonconvex optimization problem with
manifold constraints. Unlike phase/orthogonal group synchronization, rotation
group synchronization admits few provable solution approaches. First, we derive
improved estimation results for the least-squares/spectral estimator,
illustrating their tightness and validating the existing practice of solving
rotation group synchronization through the optimum of its relaxed orthogonal
group version, under a near-optimal noise level for exact recovery. Moreover,
departing from the standard approach of utilizing the
geometry of the ambient Euclidean space, we adopt an intrinsic Riemannian
approach to study orthogonal/rotation group synchronization. Benefiting from a
quotient geometric view, we prove that the quotient Riemannian Hessian is
positive definite around the optimum of the orthogonal group synchronization
problem; consequently, a Riemannian local error bound property is established
and used to analyze the convergence rates of various Riemannian algorithms. As
a simple and practical method, we study the (quotient) Riemannian gradient
method for solving the orthogonal/rotation group synchronization problem,
establish its sequential convergence guarantee, and derive a global linear rate
of convergence to the optimum under spectral initialization. All results are
deterministic and do not rely on any probabilistic model.
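As a rough illustration of the pipeline this abstract describes, spectral initialization followed by a Riemannian gradient method, here is a minimal numpy sketch of synchronization over SO(d) under additive noise. It works with the ambient representation and a blockwise SVD retraction rather than the paper's quotient-manifold machinery; the noise model, step size, and all names are illustrative assumptions.

```python
import numpy as np

def project_SO(M):
    """Nearest rotation to a square matrix via SVD, with a det sign correction."""
    U, _, Vt = np.linalg.svd(M)
    D = np.eye(M.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))
    return U @ D @ Vt

def spectral_init(C, n, d):
    """Top-d eigenvectors of the measurement matrix, rounded blockwise to SO(d)."""
    _, vecs = np.linalg.eigh(C)
    X = vecs[:, -d:] * np.sqrt(n)    # scale like stacked rotations
    return np.vstack([project_SO(X[i*d:(i+1)*d]) for i in range(n)])

def sync_rotations(C, n, d, eta=0.5, iters=100):
    R = spectral_init(C, n, d)
    for _ in range(iters):
        G = C @ R                    # Euclidean gradient of tr(R^T C R) up to a factor
        # gradient step followed by a blockwise retraction back onto SO(d)
        R = np.vstack([project_SO(R[i*d:(i+1)*d] + eta * G[i*d:(i+1)*d])
                       for i in range(n)])
    return R

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d, sigma = 30, 3, 0.1
    R_true = np.vstack([project_SO(rng.standard_normal((d, d))) for _ in range(n)])
    C = R_true @ R_true.T + sigma * rng.standard_normal((n * d, n * d))
    C = (C + C.T) / 2                # keep the additive noise symmetric
    R_hat = sync_rotations(C, n, d)
    Q = project_SO(R_hat.T @ R_true / n)   # align up to the global rotation ambiguity
    print("relative error:", np.linalg.norm(R_hat @ Q - R_true) / np.linalg.norm(R_true))
```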
Nonsmooth Composite Nonconvex-Concave Minimax Optimization
Nonconvex-concave minimax optimization has received intense interest in
machine learning, with applications including learning with robustness to data
distribution shift, learning with non-decomposable losses, and adversarial
learning. Nevertheless, most existing works focus on gradient-descent-ascent
(GDA) variants that can only be applied in smooth settings. In this paper, we
consider a family of minimax problems whose objective function has a nonsmooth
composite structure in the minimization variable and is concave in the
maximization variables. By fully exploiting the composite structure, we
propose a smoothed proximal linear descent ascent (\textit{smoothed} PLDA)
algorithm and further establish its $\mathcal{O}(\epsilon^{-4})$ iteration
complexity, which matches that of smoothed GDA~\cite{zhang2020single} under
smooth settings. Moreover, under the mild assumption that the objective
function satisfies the one-sided Kurdyka-\L{}ojasiewicz condition with exponent
$\theta \in (0,1)$, we can further improve the iteration complexity to
$\mathcal{O}(\epsilon^{-2\max\{2\theta,1\}})$. To the best of our knowledge,
this is the first provably efficient algorithm for nonsmooth nonconvex-concave
problems that can achieve the optimal iteration complexity
$\mathcal{O}(\epsilon^{-2})$ if $\theta \in (0,1/2]$. As a byproduct, we
discuss different stationarity concepts and clarify their relationships
quantitatively, which could be of independent interest. Empirically, we
illustrate the effectiveness of the proposed smoothed PLDA on
variation-regularized Wasserstein distributionally robust optimization
problems.
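The composite prox-linear step of smoothed PLDA is not spelled out in this abstract, so the sketch below only shows the shared smoothing device in the style of smoothed GDA~\cite{zhang2020single}: the primal player is damped toward an auxiliary anchor that is itself updated by exponential averaging. The toy problem, step sizes, and names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def smoothed_gda(grad_x, grad_y, x0, y0, p=5.0, c=0.02, a=0.1, beta=0.1, iters=5000):
    x, y, z = x0.copy(), y0.copy(), x0.copy()
    for _ in range(iters):
        x = x - c * (grad_x(x, y) + p * (x - z))   # descent on f + (p/2)||x - z||^2
        y = y + a * grad_y(x, y)                   # ascent in the concave variable
        z = z + beta * (x - z)                     # anchor follows x by slow averaging
    return x, y

if __name__ == "__main__":
    # nonconvex-concave toy: f(x, y) = y*(x^2 - 1) - y^2/2, with minimizers x = +-1
    gx = lambda x, y: 2.0 * x * y
    gy = lambda x, y: (x ** 2 - 1.0) - y
    x, y = smoothed_gda(gx, gy, np.array([2.0]), np.array([0.0]))
    print("x should approach +-1, y should vanish:", x, y)
```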
Doubly Smoothed GDA: Global Convergent Algorithm for Constrained Nonconvex-Nonconcave Minimax Optimization
Nonconvex-nonconcave minimax optimization has received intense attention over
the last decade due to its broad applications in machine learning.
Unfortunately, most existing algorithms cannot be guaranteed to converge
globally and even suffer from limit cycles. To address this issue, we propose a
novel single-loop algorithm called doubly smoothed gradient descent ascent
method (DSGDA), which naturally balances the primal and dual updates. The
proposed DSGDA eliminates the limit cycles observed in various challenging
nonconvex-nonconcave examples from the literature, including Forsaken,
Bilinearly-coupled minimax, Sixth-order polynomial, and PolarGame. We further
show that under a one-sided Kurdyka-\L{}ojasiewicz condition with exponent
$\theta \in (0,1)$ (resp. a convex primal/concave dual function), DSGDA can
find a game-stationary point with an iteration complexity of
$\mathcal{O}(\epsilon^{-2\max\{2\theta,1\}})$ (resp.
$\mathcal{O}(\epsilon^{-4})$). These match the best results for single-loop
algorithms that solve nonconvex-concave or convex-nonconcave minimax problems,
or problems satisfying the rather restrictive one-sided Polyak-\L{}ojasiewicz
condition. Our work demonstrates, for the first time, the possibility of having
a simple and unified single-loop algorithm for solving nonconvex-nonconcave,
nonconvex-concave, and convex-nonconcave minimax problems.
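As a schematic of how double smoothing "naturally balances the primal and dual updates", the toy sketch below damps both players toward slowly moving anchors, shown on a bilinear example where plain GDA is well known to cycle; all parameters and names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def dsgda(grad_x, grad_y, x0, y0, r1=2.0, r2=2.0, c=0.05, a=0.05,
          beta=0.02, mu=0.02, iters=20000):
    x, y = x0.copy(), y0.copy()
    z, v = x0.copy(), y0.copy()
    for _ in range(iters):
        x = x - c * (grad_x(x, y) + r1 * (x - z))  # primal descent, damped toward z
        y = y + a * (grad_y(x, y) - r2 * (y - v))  # dual ascent, damped toward v
        z = z + beta * (x - z)                     # primal anchor tracks x slowly
        v = v + mu * (y - v)                       # dual anchor tracks y slowly
    return x, y

if __name__ == "__main__":
    # bilinear toy f(x, y) = x*y: plain simultaneous GDA spirals outward here,
    # whereas the doubly smoothed iteration contracts to the saddle point (0, 0)
    gx = lambda x, y: y
    gy = lambda x, y: x
    x, y = dsgda(gx, gy, np.array([1.0]), np.array([1.0]))
    print("should approach the saddle (0, 0):", x, y)
```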