Exact Recovery of Tensor Robust Principal Component Analysis under Linear Transforms
This work studies the Tensor Robust Principal Component Analysis (TRPCA)
problem, which aims to exactly recover the low-rank and sparse components from
their sum. Our model is motivated by the recently proposed linear transforms
based tensor-tensor product and tensor SVD. We define a new transform-dependent
tensor rank and the corresponding tensor nuclear norm. Then we solve the TRPCA
problem by convex optimization whose objective is a weighted combination of the
new tensor nuclear norm and the l1-norm. In theory, we show that under
certain incoherence conditions, the convex program exactly recovers the
underlying low-rank and sparse components. It is of great interest that our new
TRPCA model generalizes existing works. In particular, if the studied tensor
reduces to a matrix, our TRPCA model reduces to the known matrix RPCA. Our new
TRPCA, which allows general linear transforms, can be regarded as an extension
of our former TRPCA work based on the discrete Fourier transform, although the
proofs of the recovery guarantees differ. Numerical experiments
verify our results and the application on image recovery demonstrates the
superiority of our method.
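The convex program above reduces to matrix RPCA when the tensor is a matrix, as the abstract notes. A minimal numpy sketch of that special case, solved by ADMM with conventional default parameters (the heuristics for `lam` and `mu` are common choices, not taken from this paper):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Soft thresholding: proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, mu=None, iters=300):
    """Matrix RPCA: min ||L||_* + lam * ||S||_1  s.t.  L + S = M, via ADMM.
    Defaults follow common heuristics (lam = 1/sqrt(max dim))."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else M.size / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)    # low-rank update
        S = soft(M - L + Y / mu, lam / mu)   # sparse update
        Y = Y + mu * (M - L - S)             # dual ascent
    return L, S

# Toy check: a rank-1 matrix corrupted by a few large sparse entries.
rng = np.random.default_rng(0)
L0 = np.outer(rng.normal(size=50), rng.normal(size=50))
S0 = np.zeros((50, 50))
S0[rng.random((50, 50)) < 0.05] = 5.0
L_hat, S_hat = rpca(L0 + S0)
```

Under the incoherence and sparsity conditions in this line of work, the two components are recovered exactly; the toy run above recovers them to small relative error.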
Robust Low-Rank Tensor Ring Completion
Low-rank tensor completion recovers missing entries based on different tensor
decompositions. Due to its outstanding performance in exploiting higher-order
data structure, the low-rank tensor ring model has been applied to tensor
completion. To address its sensitivity to sparse corruptions, as in tensor
principal component analysis, we propose robust tensor ring completion (RTRC),
which separates the latent low-rank tensor component from the sparse component
given a limited number of measurements. The low-rank tensor component is
constrained by the weighted sum of nuclear norms of its balanced unfoldings,
while the sparse component is regularized by its l1 norm. We analyze the RTRC
model and give an exact recovery guarantee. The alternating direction method
of multipliers is used to divide the problem into several sub-problems with
fast solutions. In numerical experiments, we verify the recovery condition of
the proposed method on synthetic data, and show the proposed method outperforms
the state-of-the-art ones in terms of both accuracy and computational
complexity in a number of real-world data based tasks, i.e., light-field image
recovery, shadow removal in face images, and background extraction in color
video.
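The low-rank regularizer described above sums nuclear norms of "balanced" unfoldings of the tensor. A small illustrative sketch (the circular-shift construction and function names are ours, chosen to match the usual tensor ring convention, not necessarily the paper's exact definition):

```python
import numpy as np

def balanced_unfolding(T, k, d):
    """Mode-k balanced unfolding for tensor ring models: circularly shift the
    modes so that modes k..k+d-1 index the rows and the remaining modes index
    the columns (an illustrative construction)."""
    n = T.ndim
    perm = [(k + i) % n for i in range(n)]
    Tp = np.transpose(T, perm)
    rows = int(np.prod(Tp.shape[:d]))
    return Tp.reshape(rows, -1)

def sum_of_unfolding_nuclear_norms(T, d=None):
    """Uniformly weighted sum of nuclear norms of the balanced unfoldings,
    the low-rank regularizer described in the abstract."""
    d = d if d is not None else T.ndim // 2
    return sum(np.linalg.norm(balanced_unfolding(T, k, d), 'nuc')
               for k in range(T.ndim))

# The all-ones 2x2x2x2 tensor unfolds to a rank-1 all-ones 4x4 matrix in
# every mode, so each term contributes a nuclear norm of 4.
val = sum_of_unfolding_nuclear_norms(np.ones((2, 2, 2, 2)))
```

Balanced unfoldings keep the row and column dimensions comparable, which is what makes their ranks informative for the tensor ring structure.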
Color Image and Multispectral Image Denoising Using Block Diagonal Representation
Filtering images of more than one channel is challenging in terms of both
efficiency and effectiveness. By grouping similar patches to utilize the
self-similarity and sparse linear approximation of natural images, recent
nonlocal and transform-domain methods have been widely used in color and
multispectral image (MSI) denoising. Many related methods focus on the modeling
of group level correlation to enhance sparsity, which often resorts to a
recursive strategy with a large number of similar patches. The importance of
the patch level representation is understated. In this paper, we mainly
investigate the influence and potential of representation at patch level by
considering a general formulation with a block diagonal matrix. We further show
that by training a proper global patch basis, along with a local principal
component analysis transform in the grouping dimension, a simple
transform-threshold-inverse method could produce very competitive results. Fast
implementation is also developed to reduce computational complexity. Extensive
experiments on both simulated and real datasets demonstrate its robustness,
effectiveness and efficiency.
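The transform-threshold-inverse pipeline described above can be sketched for a single group of similar patches. Here an orthonormal random basis stands in for the trained global patch basis, and the local PCA along the grouping dimension is computed by SVD (both are assumptions for illustration, not the paper's trained components):

```python
import numpy as np

def denoise_group(group, basis, tau):
    """Transform-threshold-inverse on one group of similar patches.

    group : (n, d) array, rows are vectorized similar patches.
    basis : (d, d) orthonormal global patch basis (stand-in for a trained one).
    tau   : hard-threshold level.
    """
    C = group @ basis                      # patch-level transform
    mean = C.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(C - mean, full_matrices=False)
    coef = (C - mean) @ Vt.T               # PCA along the grouping dimension
    coef[np.abs(coef) < tau] = 0.0         # hard-threshold small coefficients
    C_hat = coef @ Vt + mean               # invert the PCA
    return C_hat @ basis.T                 # invert the patch transform

rng = np.random.default_rng(1)
B, _ = np.linalg.qr(rng.normal(size=(16, 16)))   # orthonormal toy basis
G = rng.normal(size=(20, 16))
out = denoise_group(G, B, 0.0)                   # tau = 0: exact inversion
```

With `tau = 0` both transforms invert exactly, which is a useful sanity check; denoising corresponds to choosing `tau` proportional to the noise level.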
Non-convex Penalty for Tensor Completion and Robust PCA
In this paper, we propose a novel non-convex tensor rank surrogate function
and a novel non-convex sparsity measure for tensors. The basic idea is to
sidestep the bias of convex norm relaxations by introducing concavity. Furthermore, we
employ the proposed non-convex penalties in tensor recovery problems such as
tensor completion and tensor robust principal component analysis, which have
various real-world applications such as image inpainting and denoising. Due to the
concavity, the models are difficult to solve. To tackle this problem, we devise
majorization-minimization algorithms, which optimize upper bounds of the
original functions at each iteration, with every sub-problem solved by the
alternating direction method of multipliers. Finally, experimental results on
natural images and hyperspectral images demonstrate the effectiveness and
efficiency of the proposed methods.
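The majorization-minimization idea above can be illustrated on a simple concave spectral penalty. The log penalty below is a generic example of a non-convex rank surrogate, not the paper's specific function; majorizing its concave part by a tangent turns each subproblem into weighted singular value thresholding:

```python
import numpy as np

def mm_nonconvex_lowrank(Y, gamma=1.0, eps=1e-2, iters=10):
    """MM sketch for min_X 0.5*||X - Y||_F^2 + gamma * sum_i log(sigma_i(X) + eps).

    Each iteration majorizes the concave log term by its tangent at the current
    singular values, giving a weighted singular value thresholding step
    (an illustrative surrogate, not the paper's)."""
    X = Y.copy()
    Uy, sy, Vyt = np.linalg.svd(Y, full_matrices=False)
    for _ in range(iters):
        s = np.linalg.svd(X, compute_uv=False)
        w = gamma / (s + eps)             # tangent weights: larger for small sigma
        s_new = np.maximum(sy - w, 0.0)   # weighted soft threshold on Y's spectrum
        X = (Uy * s_new) @ Vyt
    return X

rng = np.random.default_rng(2)
Y = rng.normal(size=(30, 30))
X = mm_nonconvex_lowrank(Y, gamma=2.0)
```

Because the weights grow as singular values shrink, small singular values are penalized more than large ones, which is exactly how concavity reduces the bias of the convex nuclear norm.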
Tensor Robust Principal Component Analysis with A New Tensor Nuclear Norm
In this paper, we consider the Tensor Robust Principal Component Analysis
(TRPCA) problem, which aims to exactly recover the low-rank and sparse
components from their sum. Our model is based on the recently proposed
tensor-tensor product (or t-product). Induced by the t-product, we first
rigorously deduce the tensor spectral norm, tensor nuclear norm, and tensor
average rank, and show that the tensor nuclear norm is the convex envelope of
the tensor average rank within the unit ball of the tensor spectral norm. These
definitions, their relationships and properties are consistent with matrix
cases. Equipped with the new tensor nuclear norm, we then solve the TRPCA
problem by solving a convex program and provide the theoretical guarantee for
the exact recovery. Our TRPCA model and recovery guarantee include matrix RPCA
as a special case. Numerical experiments verify our results, and the
applications to image recovery and background modeling problems demonstrate the
effectiveness of our method.
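The t-product-induced tensor nuclear norm described above is computed in the Fourier domain. A minimal numpy sketch (the 1/n3 normalization follows common t-SVD conventions and should be checked against the paper's exact definition):

```python
import numpy as np

def tensor_nuclear_norm(A):
    """Tensor nuclear norm induced by the t-product: FFT along the third
    mode, then average the nuclear norms of the frontal slices."""
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    return sum(np.linalg.norm(Af[:, :, k], 'nuc') for k in range(n3)) / n3

# With n3 = 1 the FFT is the identity, so the definition reduces to the
# ordinary matrix nuclear norm, matching the "matrix RPCA as a special
# case" statement in the abstract.
rng = np.random.default_rng(3)
M = rng.normal(size=(4, 5))
tnn = tensor_nuclear_norm(M[:, :, None])
```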
Robust Tensor Completion Using Transformed Tensor SVD
In this paper, we study robust tensor completion by using transformed tensor
singular value decomposition (SVD), which employs unitary transform matrices
instead of the discrete Fourier transform matrix used in the traditional
tensor SVD. The main motivation is that other unitary transform matrices can
yield a tensor of lower tubal rank than the discrete Fourier transform does,
which makes robust tensor completion more effective. Experimental results on
hyperspectral, video and face datasets show that the recovery performance of
transformed tensor SVD is better in PSNR than that of the Fourier transform
and other robust tensor completion methods.
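The tubal rank under a general unitary transform can be sketched as follows; the definition below is an illustrative one consistent with transformed t-SVD (taking `U` to be the DFT matrix recovers the classical case):

```python
import numpy as np

def transformed_tubal_rank(A, U, tol=1e-8):
    """Tubal rank of a third-order tensor under a unitary transform U applied
    along the tubes: transform the third mode, then take the maximum rank
    over the transformed frontal slices."""
    Ahat = np.einsum('kl,ijl->ijk', U, A)   # apply U to every tube A[i, j, :]
    return max(np.linalg.matrix_rank(Ahat[:, :, k], tol=tol)
               for k in range(A.shape[2]))

# A tensor whose frontal slices are multiples of one rank-1 pattern has
# tubal rank 1 under the identity transform.
u, v = np.arange(1, 4), np.arange(1, 5)
A = np.stack([np.outer(u, v), 2 * np.outer(u, v), 3 * np.outer(u, v)], axis=2)
r = transformed_tubal_rank(A, np.eye(3))
```

Choosing `U` to diagonalize the correlations along the third mode is what makes a lower tubal rank achievable than with the fixed DFT.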
Exploiting the structure effectively and efficiently in low rank matrix recovery
Low-rank models arise from a wide range of applications, including machine
learning, signal processing, computer algebra, computer vision, and imaging
science. Low rank matrix recovery is about reconstructing a low rank matrix
from incomplete measurements. In this survey we review recent developments on
low rank matrix recovery, focusing on three typical scenarios: matrix sensing,
matrix completion and phase retrieval. An overview of effective and efficient
approaches for the problem is given, including nuclear norm minimization,
projected gradient descent based on matrix factorization, and Riemannian
optimization based on the embedded manifold of low rank matrices. Numerical
recipes of different approaches are emphasized while accompanied by the
corresponding theoretical recovery guarantees.
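One of the projected-gradient approaches surveyed above, in the style of Singular Value Projection, can be sketched for matrix completion (the 1/p step size is the standard choice for sampling ratio p; this is a generic sketch, not the survey's code):

```python
import numpy as np

def svp_complete(M, mask, r, iters=200):
    """Matrix completion by projected gradient descent on the rank-r set:
    a gradient step on the observed entries, then a truncated-SVD projection
    back onto rank-r matrices."""
    p = mask.mean()
    X = np.zeros_like(M)
    for _ in range(iters):
        G = X - (1.0 / p) * mask * (X - M)   # gradient step on observed part
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]      # project onto rank-r matrices
    return X

# Toy run: recover a rank-1 matrix from half of its entries.
rng = np.random.default_rng(4)
M = np.outer(rng.normal(size=50), rng.normal(size=50))
mask = (rng.random((50, 50)) < 0.5).astype(float)
X = svp_complete(M, mask, r=1)
```

Unlike nuclear norm minimization, this approach fixes the rank explicitly and avoids any shrinkage bias, at the price of a non-convex constraint set.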
Framelet Representation of Tensor Nuclear Norm for Third-Order Tensor Completion
The main aim of this paper is to develop a framelet representation of the
tensor nuclear norm for third-order tensor completion. In the literature, the
tensor nuclear norm can be computed by using tensor singular value
decomposition based on the discrete Fourier transform matrix, and tensor
completion can be performed by the minimization of the tensor nuclear norm
which is the relaxation of the sum of matrix ranks from all Fourier transformed
matrix frontal slices. These Fourier transformed matrix frontal slices are
obtained by applying the discrete Fourier transform on the tubes of the
original tensor. In this paper, we propose to employ the framelet
representation of each tube so that a framelet transformed tensor can be
constructed. Because of the redundancy of the framelet basis, each tube
admits a sparse representation. When the matrix slices of the original tensor are
highly correlated, we expect the corresponding sum of matrix ranks from all
framelet transformed matrix frontal slices would be small, and the resulting
tensor completion can be performed much better. The proposed minimization model
is convex and global minimizers can be obtained. Numerical results on several
types of multi-dimensional data (videos, multispectral images, and magnetic
resonance imaging data) show that the proposed method outperforms the other
tested methods.
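The regularizer described above generalizes the Fourier-domain construction to a redundant tube transform. A sketch with a toy tight frame standing in for an actual framelet matrix (the stacked-identity frame below is purely illustrative):

```python
import numpy as np

def transformed_slice_nuclear_sum(A, W):
    """Sum of nuclear norms of the frontal slices after applying a transform W
    along the tubes. W may be rectangular and redundant, as with the framelet
    transform; any tight frame (W.T @ W = I) works here."""
    Ahat = np.einsum('kl,ijl->ijk', W, A)   # transform every tube A[i, j, :]
    return sum(np.linalg.norm(Ahat[:, :, k], 'nuc')
               for k in range(Ahat.shape[2]))

# Toy redundant tight frame: two stacked, rescaled copies of the identity,
# so that W.T @ W = I (a stand-in for a real framelet matrix).
W = np.vstack([np.eye(3), np.eye(3)]) / np.sqrt(2.0)
rng = np.random.default_rng(5)
A = rng.normal(size=(4, 5, 3))
val = transformed_slice_nuclear_sum(A, W)
base = sum(np.linalg.norm(A[:, :, k], 'nuc') for k in range(3))
```

A redundant frame produces more transformed slices than the DFT, and when the tubes are sparse in that frame, most of those slices have small rank, which is what the minimization exploits.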
Fast Randomized Singular Value Thresholding for Low-rank Optimization
Rank minimization can be converted into tractable surrogate problems, such as
Nuclear Norm Minimization (NNM) and Weighted NNM (WNNM). The problems related
to NNM, or WNNM, can be solved iteratively by applying a closed-form proximal
operator, called Singular Value Thresholding (SVT), or Weighted SVT, but they
suffer from high computational cost of Singular Value Decomposition (SVD) at
each iteration. We propose a fast and accurate approximation method for SVT,
that we call fast randomized SVT (FRSVT), with which we avoid direct
computation of SVD. The key idea is to extract an approximate basis for the
range of the matrix from its compressed matrix. Given the basis, we compute
partial singular values of the original matrix from the small factored matrix.
In addition, by developing a range propagation method, our method further
speeds up the extraction of approximate basis at each iteration. Our
theoretical analysis shows the relationship between the approximation bound of
the SVD and its effect on NNM via SVT. Along with the analysis, our empirical
results quantitatively and qualitatively show that our approximation rarely
harms the convergence of the host algorithms. We assess the efficiency and
accuracy of the proposed method on various computer vision problems, e.g.,
subspace clustering, weather artifact removal, and simultaneous multi-image
alignment and rectification. (Appeared in CVPR 2015, and under major revision
for TPAMI. Source code is available at http://thoh.kaist.ac.k)
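The core idea of avoiding a full SVD, sketching the range of the matrix and working on the small factor, can be illustrated in the randomized-SVD style of Halko et al. (parameter names `k` and `p` and the Gaussian sketch are our assumptions, not the paper's exact FRSVT algorithm):

```python
import numpy as np

def randomized_svt(A, tau, k, p=5, seed=0):
    """Randomized SVT sketch: approximate the range of A with a Gaussian
    test matrix, run the SVD on the small factor, then threshold.
    k is a target rank, p an oversampling parameter."""
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(A.shape[1], k + p))
    Q, _ = np.linalg.qr(A @ Omega)     # approximate orthonormal range basis
    B = Q.T @ A                        # small (k + p) x n factor
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s = np.maximum(s - tau, 0.0)       # singular value thresholding
    return ((Q @ U) * s) @ Vt

# Sanity check: on an exactly rank-3 matrix with tau = 0, the sketch captures
# the full range and the operator reproduces the input.
rng = np.random.default_rng(6)
A = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 30))
A_hat = randomized_svt(A, tau=0.0, k=3)
```

The SVD cost drops from O(mn·min(m, n)) to an O(mn(k + p)) sketch plus an SVD of a (k + p) x n factor, which is the source of the speedup inside iterative NNM solvers.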
Frequency-Weighted Robust Tensor Principal Component Analysis
Robust tensor principal component analysis (RTPCA) can separate the low-rank
component and sparse component from multidimensional data, which has been used
successfully in several image applications. Its performance varies with
different kinds of tensor decompositions, and the tensor singular value
decomposition (t-SVD) is a popular choice. The standard t-SVD applies the
discrete Fourier transform to exploit the structure along the 3rd mode in the
decomposition. When minimizing the tensor nuclear norm related to t-SVD, all
the frontal slices in the frequency domain are treated equally. In this paper, we
incorporate frequency component analysis into t-SVD to enhance the RTPCA
performance. Specifically, different frequency bands are unequally weighted with
respect to the corresponding physical meanings, and the frequency-weighted
tensor nuclear norm can be obtained. Accordingly, we rigorously deduce the
frequency-weighted tensor singular value thresholding operator and apply it to
the low-rank approximation subproblem in RTPCA. The resulting
frequency-weighted RTPCA can be solved by the alternating direction method of
multipliers, and this is the first time that frequency analysis has been
incorporated into tensor principal component analysis. Numerical experiments on synthetic 3D
data, color image denoising and background modeling verify that the proposed
method outperforms the state-of-the-art algorithms both in accuracy and
computational complexity.
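A frequency-weighted slice-wise thresholding of the kind described above can be sketched as follows; the per-frequency weighting scheme here is illustrative, not the paper's exact operator:

```python
import numpy as np

def freq_weighted_tsvt(A, weights, tau):
    """Frequency-weighted tensor singular value thresholding sketch: FFT along
    the third mode, slice-wise SVT with a per-frequency weight, inverse FFT."""
    Af = np.fft.fft(A, axis=2)
    for k in range(A.shape[2]):
        U, s, Vt = np.linalg.svd(Af[:, :, k], full_matrices=False)
        s = np.maximum(s - tau * weights[k], 0.0)   # weighted threshold
        Af[:, :, k] = (U * s) @ Vt
    return np.real(np.fft.ifft(Af, axis=2))

rng = np.random.default_rng(7)
T = rng.normal(size=(4, 4, 3))
# Zero weights leave every slice untouched, so the operator is the identity.
T_id = freq_weighted_tsvt(T, weights=np.zeros(3), tau=1.0)
```

Setting a smaller weight on the low-frequency slice preserves the dominant image content while shrinking high-frequency slices more aggressively, which is the intuition behind tying weights to the physical meaning of each band.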