4 research outputs found
A multi-stage convex relaxation approach to noisy structured low-rank matrix recovery
This paper concerns a noisy structured low-rank matrix recovery problem
which can be modeled as a structured rank minimization problem. We reformulate
this problem as a mathematical program with a generalized complementarity
constraint (MPGCC), and show that its penalty version, yielded by moving the
generalized complementarity constraint to the objective, has the same global
optimal solution set as the MPGCC does whenever the penalty parameter is over a
threshold. Then, by solving the exact penalty problem in an alternating way, we
obtain a multi-stage convex relaxation approach. We provide theoretical
guarantees for our approach under a mild restricted eigenvalue condition, by
quantifying the reduction of the error and approximate rank bounds of the first
stage convex relaxation (which is exactly the nuclear norm relaxation) in the
subsequent stages and establishing the geometric convergence of the error
sequence in a statistical sense. Numerical experiments are conducted for some
structured low-rank matrix recovery examples to confirm our theoretical
findings.
Comment: 29 pages, 2 figures
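The first-stage relaxation mentioned in the abstract is the nuclear norm relaxation, whose basic computational step is singular value soft-thresholding. A minimal sketch of that step alone (not the paper's multi-stage scheme; the function name and threshold value are illustrative):

```python
import numpy as np

def singular_value_threshold(M, tau):
    # Proximal operator of tau * (nuclear norm): shrink each singular value by tau.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Shrinking the singular values (3, 1, 0.2) by tau = 0.5 leaves (2.5, 0.5, 0),
# so the result drops to rank 2.
X = singular_value_threshold(np.diag([3.0, 1.0, 0.2]), 0.5)
print(np.linalg.matrix_rank(X, tol=1e-8))  # → 2
```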
Robust Tensor Completion Using Transformed Tensor SVD
In this paper, we study robust tensor completion by using transformed tensor
singular value decomposition (SVD), which employs unitary transform matrices
instead of discrete Fourier transform matrix that is used in the traditional
tensor SVD. The main motivation is that a lower tubal rank tensor can be
obtained by using other unitary transform matrices than that by using discrete
Fourier transform matrix. This would be more effective for robust tensor
completion. Experimental results for hyperspectral, video and face datasets
show that transformed tensor SVD achieves higher PSNR on the robust tensor
completion problem than the Fourier-transform-based tensor SVD and other
robust tensor completion methods.
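The transformed tubal rank the abstract refers to can be sketched as follows; this is a generic reading of the transformed t-SVD idea, with our own helper name and example, not the paper's code:

```python
import numpy as np

def transformed_tubal_rank(T, U, tol=1e-8):
    # Apply the unitary transform U along the third mode, then take the
    # maximum matrix rank over the transformed frontal slices. Choosing U as
    # the normalized DFT matrix recovers the usual t-SVD tubal rank.
    n3 = T.shape[2]
    T_hat = np.einsum('kl,ijl->ijk', U, T)
    return max(np.linalg.matrix_rank(T_hat[:, :, k], tol=tol) for k in range(n3))

# A tensor whose tubes are constant along the third mode: under the DFT all
# energy concentrates in the first frontal slice, so the tubal rank is 1.
A = np.outer([1.0, 2.0], [1.0, -1.0, 3.0])   # a 2x3 rank-1 slice
T = np.stack([A, A, A], axis=2)              # constant tubes, shape (2, 3, 3)
U_dft = np.fft.fft(np.eye(3)) / np.sqrt(3)   # unitary DFT matrix
print(transformed_tubal_rank(T, U_dft))  # → 1
```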
Multi-Tubal Rank of Third Order Tensor and Related Low Rank Tensor Completion Problem
Recently, a tensor factorization based method for a low tubal rank tensor
completion problem of a third order tensor was proposed and shown to perform
better than some existing methods. However, the tubal rank is defined on only
one mode of a third order tensor, so low rank structure in the other two modes
is not captured. Motivated by this, we first
introduce multi-tubal rank, and then establish a relationship between
multi-tubal rank and Tucker rank. Based on the multi-tubal rank, we propose a
novel low rank tensor completion model. For this model, a tensor factorization
based method is applied and the corresponding convergence analysis is
established. In addition, spatio-temporal characteristics are intrinsic
features in video and internet traffic tensor data. To get better performance,
we make full use of such features and improve the established tensor completion
model. Then we apply the tensor factorization based method to the improved model.
Finally, numerical results are reported on the completion of image, video and
internet traffic data to demonstrate the efficiency of our proposed methods.
The reported numerical results show that our methods outperform the existing
methods.
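Under one natural reading of the multi-tubal rank (the FFT-based tubal rank computed along each of the three modes in turn; this definition and the names below are our assumption, not taken from the paper):

```python
import numpy as np

def tubal_rank_along(T, mode, tol=1e-8):
    # Tubal rank with the DFT taken along `mode`: transform, then take the
    # maximum matrix rank over the slices orthogonal to that mode.
    T_hat = np.moveaxis(np.fft.fft(T, axis=mode), mode, 0)
    return max(np.linalg.matrix_rank(T_hat[k], tol=tol) for k in range(T.shape[mode]))

def multi_tubal_rank(T, tol=1e-8):
    # One tubal rank per mode, capturing low rank structure in all three modes.
    return tuple(tubal_rank_along(T, m, tol) for m in range(3))

# A rank-1 outer-product tensor T[i,j,k] = a[i]*b[j]*c[k] has
# multi-tubal rank (1, 1, 1).
a, b, c = np.array([1.0, 2.0]), np.array([1.0, -1.0, 2.0]), np.array([3.0, 1.0, 0.5, 2.0])
T = np.einsum('i,j,k->ijk', a, b, c)
print(multi_tubal_rank(T))  # → (1, 1, 1)
```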
On the Equivalence of Inexact Proximal ALM and ADMM for a Class of Convex Composite Programming
In this paper, we show that for a class of linearly constrained convex
composite optimization problems, an (inexact) symmetric Gauss-Seidel based
majorized multi-block proximal alternating direction method of multipliers
(ADMM) is equivalent to an {\em inexact} proximal augmented Lagrangian method
(ALM). This equivalence not only provides new perspectives for understanding
some ADMM-type algorithms but also supplies meaningful guidelines on
implementing them to achieve better computational efficiency. Even for the
two-block case, a by-product of this equivalence is the convergence of the
whole sequence generated by the classic ADMM with a step-length that exceeds
the conventional upper bound of $(1+\sqrt{5})/2$, if one part of the objective
is linear. This is exactly the problem setting in which the very first
convergence analysis of ADMM was conducted by Gabay and Mercier in 1976, but,
even under notably stronger assumptions, only the convergence of the primal
sequence was known. A collection of illustrative examples is provided to
demonstrate the breadth of applications for which our results can be used.
Numerical experiments on solving a large number of linear and convex quadratic
semidefinite programming problems are conducted to illustrate how the
theoretical results established here can lead to improvements on the
corresponding practical implementations.
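For context, a minimal two-block ADMM sketch on an l1-regularized least squares toy problem shows where the dual step-length tau enters; the problem, parameter values and names here are illustrative, not the paper's ALM/ADMM framework:

```python
import numpy as np

def admm_l1_ls(A, b, lam, rho=1.0, tau=1.618, iters=300):
    # Two-block ADMM for  min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z.
    # `tau` scales the dual ascent step; the classical analysis allows
    # tau in (0, (1 + sqrt(5))/2).
    n = A.shape[1]
    x, z, y = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * z - y)                # ridge solve
        w = x + y / rho
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
        y = y + tau * rho * (x - z)                              # dual step, length tau
    return z

# With A = I the minimizer is the soft threshold of b at level lam,
# so z converges to approximately [2, 0, 0] here.
z = admm_l1_ls(np.eye(3), np.array([3.0, -0.5, 1.0]), lam=1.0)
```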