Truncated Nuclear Norm Minimization for Image Restoration Based On Iterative Support Detection
Recovering a large matrix from limited measurements is a challenging task
arising in many real applications, such as image inpainting, compressive
sensing and medical imaging, and such problems are mostly formulated as
low-rank matrix approximation problems. Because the rank operator is
non-convex and discontinuous, most recent theoretical studies use the
nuclear norm as a convex relaxation, and the low-rank matrix recovery problem
is solved by minimizing a nuclear-norm-regularized objective. However, a
major limitation of nuclear norm minimization is that all the singular values
are minimized simultaneously, so the rank may not be well approximated
\cite{hu2012fast}. In this paper, we propose a new multi-stage algorithm that
combines Truncated Nuclear Norm Regularization (TNNR), proposed in
\citep{hu2012fast}, with Iterative Support Detection (ISD), proposed in
\citep{wang2010sparse}, to overcome this limitation. Beyond the matrix
completion problems considered in \citep{hu2012fast}, the proposed method
also extends to general low-rank matrix recovery problems. Extensive
experiments validate the superiority of our new algorithms over other
state-of-the-art methods.
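As a rough illustration of the limitation described above, the following minimal NumPy sketch contrasts the nuclear norm, which penalizes all singular values, with the truncated nuclear norm of \citep{hu2012fast}, which leaves the r largest singular values unpenalized. The function names, the toy matrix, and the choice of r are our own illustrative assumptions, not code from the paper.

```python
import numpy as np

def nuclear_norm(X):
    # Sum of all singular values: every direction of X is penalized.
    return np.linalg.svd(X, compute_uv=False).sum()

def truncated_nuclear_norm(X, r):
    # Sum of the singular values beyond the r largest, so the r dominant
    # directions (the presumed true rank) are not penalized at all.
    s = np.linalg.svd(X, compute_uv=False)
    return s[r:].sum()

# Toy example: a rank-2 matrix with small additive noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
X_noisy = X + 0.01 * rng.standard_normal((50, 50))
print(nuclear_norm(X_noisy))               # large: all singular values counted
print(truncated_nuclear_norm(X_noisy, 2))  # small: only the noise tail counted
```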
Scalable Algorithms for Tractable Schatten Quasi-Norm Minimization
The Schatten-p quasi-norm is usually used to replace the standard
nuclear norm in order to approximate the rank function more accurately.
However, existing Schatten-p quasi-norm minimization algorithms involve
singular value decomposition (SVD) or eigenvalue decomposition (EVD) in each
iteration, and thus may become very slow and impractical for large-scale
problems. In this paper, we first define two tractable Schatten quasi-norms,
i.e., the Frobenius/nuclear hybrid and bi-nuclear quasi-norms, and then prove
that they are in essence the Schatten-2/3 and 1/2 quasi-norms, respectively;
this equivalence leads to the design of very efficient algorithms that only
need to update two much smaller factor matrices. We also design two efficient proximal
alternating linearized minimization algorithms for solving representative
matrix completion problems. Finally, we provide the global convergence and
performance guarantees for our algorithms, which have better convergence
properties than existing algorithms. Experimental results on synthetic and
real-world data show that our algorithms are more accurate than the
state-of-the-art methods, and are orders of magnitude faster.
Comment: 16 pages, 5 figures. Appears in Proceedings of the 30th AAAI
Conference on Artificial Intelligence (AAAI), Phoenix, Arizona, USA,
pp. 2016--2022, 2016.
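For reference, the Schatten-p quasi-norm discussed above can be evaluated directly from a full SVD, which is exactly the per-iteration cost the paper's factorized quasi-norms are designed to avoid. The short NumPy sketch below only illustrates the definition and why smaller p tracks the rank more closely; the toy matrix and names are our own assumptions, not the paper's algorithms.

```python
import numpy as np

def schatten_p(X, p):
    """Schatten-p quasi-norm (sum_i sigma_i^p)^(1/p); p = 1 gives the nuclear norm."""
    s = np.linalg.svd(X, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

# Toy rank-3 matrix: as p decreases, sum_i sigma_i^p moves toward rank(X) = 3
# (reaching it only in the limit p -> 0), which is why the Schatten-2/3 and 1/2
# quasi-norms approximate the rank more tightly than the nuclear norm (p = 1).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 40))
sigma = np.linalg.svd(X, compute_uv=False)
for p in (1.0, 2.0 / 3.0, 0.5):
    print(f"p={p:.2f}  sum sigma^p = {(sigma ** p).sum():.2f}  "
          f"quasi-norm = {schatten_p(X, p):.2f}")
```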