Exact tensor completion using t-SVD
In this paper we focus on the problem of completion of multidimensional
arrays (also referred to as tensors) from limited sampling. Our approach is
based on a recently proposed tensor-Singular Value Decomposition (t-SVD) [1].
Using this factorization one can derive a notion of tensor rank, referred to as
the tensor tubal rank, which has optimality properties similar to those of the
matrix rank derived from the SVD. As shown in [2], some multidimensional data,
such as panning video sequences, exhibit low tensor tubal rank, and we look at the
problem of completing such data under random sampling of the data cube. We show
that by solving a convex optimization problem, which minimizes the tensor
nuclear norm obtained as the convex relaxation of tensor tubal rank, one can
guarantee recovery with overwhelming probability as long as samples in
proportion to the degrees of freedom in t-SVD are observed. In this sense our
results are order-wise optimal. The conditions under which this result holds
are very similar to the incoherence conditions for matrix completion,
albeit we define incoherence under the algebraic set-up of the t-SVD. We show the
performance of the algorithm on real data sets and compare it with other
existing approaches based on tensor flattening and the Tucker decomposition.
Comment: 16 pages, 5 figures, 2 tables
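The tubal rank that this abstract builds on can be computed slice-wise in the Fourier domain. Below is a minimal numpy sketch (my own illustration, not the authors' code) using the standard DFT-based t-SVD convention: FFT along the tubes, then the tubal rank is the maximum matrix rank over the Fourier-domain frontal slices.

```python
import numpy as np

def tubal_rank(T, tol=1e-10):
    """Tubal rank under the t-SVD framework: FFT along the third
    (tube) dimension, then take the maximum matrix rank over the
    frontal slices in the Fourier domain."""
    That = np.fft.fft(T, axis=2)
    ranks = [np.linalg.matrix_rank(That[:, :, k], tol=tol)
             for k in range(T.shape[2])]
    return max(ranks)

# A tensor built as a t-product of two "rank-1" factors has tubal
# rank 1 regardless of its size.
rng = np.random.default_rng(0)
ahat = np.fft.fft(rng.standard_normal((4, 1, 3)), axis=2)
bhat = np.fft.fft(rng.standard_normal((1, 5, 3)), axis=2)
chat = np.stack([ahat[:, :, k] @ bhat[:, :, k] for k in range(3)], axis=2)
C = np.real(np.fft.ifft(chat, axis=2))
print(tubal_rank(C))  # 1
```

The samples-in-proportion-to-degrees-of-freedom guarantee in the abstract counts exactly the parameters of this slice-wise factorization.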
Fast Randomized Algorithms for t-Product Based Tensor Operations and Decompositions with Applications to Imaging Data
Tensors of order three or higher have found applications in diverse fields,
including image and signal processing, data mining, biomedical engineering and
link analysis, to name a few. In many applications that involve for example
time series or other ordered data, the corresponding tensor has a
distinguishing orientation that exhibits a low tubal structure. This has
motivated the introduction of the tubal rank and the corresponding tubal
singular value decomposition in the literature. In this work, we develop
randomized algorithms for many common tensor operations, including tensor
low-rank approximation and decomposition, together with tensor multiplication.
The proposed tubal-focused algorithms employ a small number of lateral and/or
horizontal slices of the underlying third-order tensor, and come with relative
error guarantees for the quality of the obtained solutions. The
performance of the proposed algorithms is illustrated on diverse imaging
applications, including mass spectrometry data and image and video recovery
from incomplete and noisy data. The results show both good computational
speed-ups vis-a-vis conventional completion algorithms and good accuracy.
Comment: 31 pages, 6 figures, to appear in the SIAM Journal on Imaging Sciences
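The tensor multiplication these randomized algorithms build on is the t-product. A minimal numpy sketch of it (my own illustration, assuming the standard DFT-based definition of Kilmer and Martin, not the authors' randomized variants):

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) with B (n2 x n4 x n3): FFT along
    the tubes, an ordinary matrix product per frontal slice in the
    Fourier domain, then an inverse FFT back."""
    n3 = A.shape[2]
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.stack([Ah[:, :, k] @ Bh[:, :, k] for k in range(n3)], axis=2)
    return np.real(np.fft.ifft(Ch, axis=2))

# The identity tensor has an identity matrix as its first frontal
# slice and zeros elsewhere; t-multiplying by it leaves A unchanged.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))
I = np.zeros((4, 4, 5)); I[:, :, 0] = np.eye(4)
print(np.allclose(t_product(A, I), A))  # True
```

Randomized variants replace the full slice-wise products with products against a small sketch of lateral or horizontal slices, which is where the relative-error guarantees come in.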
Accelerated and Inexact Soft-Impute for Large-Scale Matrix and Tensor Completion
Matrix and tensor completion aim to recover a low-rank matrix / tensor from
limited observations and have been commonly used in applications such as
recommender systems and multi-relational data mining. A state-of-the-art matrix
completion algorithm is Soft-Impute, which exploits the special "sparse plus
low-rank" structure of the matrix iterates to allow efficient SVD in each
iteration. Though Soft-Impute is a proximal algorithm, it is generally believed
that acceleration destroys the special structure and is thus not useful. In
this paper, we show that Soft-Impute can indeed be accelerated without
compromising this structure. To further reduce the iteration time complexity, we
propose an approximate singular value thresholding scheme based on the power
method. Theoretical analysis shows that the proposed algorithm still enjoys the
fast convergence rate of accelerated proximal algorithms. We further
extend the proposed algorithm to tensor completion with the scaled latent
nuclear norm regularizer. We show that a similar "sparse plus low-rank"
structure also exists, leading to low iteration complexity and fast
convergence rate. Extensive experiments demonstrate that the proposed algorithm
is much faster than Soft-Impute and other state-of-the-art matrix and tensor
completion algorithms.
Comment: Journal version of previous conference paper 'Accelerated inexact
soft-impute for fast large-scale matrix completion' appeared at IJCAI 201
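The core Soft-Impute iteration the abstract refers to can be sketched in a few lines (a plain, unaccelerated version of my own, not the authors' accelerated algorithm):

```python
import numpy as np

def svt(Z, lam):
    """Singular value thresholding: the proximal operator of the
    nuclear norm (soft-threshold the singular values of Z)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

def soft_impute(X_obs, mask, lam, n_iter=200):
    """Plain Soft-Impute: fill the missing entries with the current
    estimate and apply SVT. The SVD argument, mask*X_obs + (1-mask)*Z,
    is "sparse plus low-rank" (observed residual plus low-rank
    iterate), which is what keeps each iteration's SVD cheap at scale."""
    Z = np.zeros_like(X_obs)
    for _ in range(n_iter):
        Z = svt(mask * X_obs + (1.0 - mask) * Z, lam)
    return Z

# Toy run: a rank-1 matrix with two entries hidden.
rng = np.random.default_rng(0)
X = np.outer(rng.standard_normal(5), rng.standard_normal(5))
mask = np.ones_like(X); mask[0, 1] = mask[3, 2] = 0.0
Z = soft_impute(X, mask, lam=0.05)
```

The paper's contribution is to show that Nesterov-style acceleration and a power-method approximation of the SVT step both preserve this sparse-plus-low-rank structure.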
Multi-dimensional imaging data recovery via minimizing the partial sum of tubal nuclear norm
In this paper, we investigate tensor recovery problems within the tensor
singular value decomposition (t-SVD) framework. We propose the partial sum of
the tubal nuclear norm (PSTNN) of a tensor. The PSTNN is a surrogate of the
tensor tubal multi-rank. We build two PSTNN-based minimization models for two
typical tensor recovery problems, i.e., the tensor completion and the tensor
principal component analysis. We give two algorithms based on the alternating
direction method of multipliers (ADMM) to solve proposed PSTNN-based tensor
recovery models. Experimental results on synthetic and real-world data
reveal the superiority of the proposed PSTNN.
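The PSTNN quantity itself is easy to state in code. A sketch of the value being minimized (my own illustration of the penalty, not the ADMM solvers; the 1/n3 normalization is an assumption matching the usual t-SVD nuclear-norm convention):

```python
import numpy as np

def pstnn(T, N):
    """Partial sum of the tubal nuclear norm: for each frontal slice
    in the Fourier domain, sum only the singular values *after* the N
    largest, so the dominant components go unpenalized.
    (1/n3 normalization assumed, conventions vary.)"""
    That = np.fft.fft(T, axis=2)
    total = 0.0
    for k in range(T.shape[2]):
        s = np.linalg.svd(That[:, :, k], compute_uv=False)
        total += s[N:].sum()
    return total / T.shape[2]

# Demo: a tubal-rank-1 tensor has (numerically) zero PSTNN for N = 1.
rng = np.random.default_rng(0)
ah = np.fft.fft(rng.standard_normal((4, 1, 3)), axis=2)
bh = np.fft.fft(rng.standard_normal((1, 5, 3)), axis=2)
C = np.real(np.fft.ifft(
    np.stack([ah[:, :, k] @ bh[:, :, k] for k in range(3)], axis=2), axis=2))
```

Because only the tail of each slice's spectrum is penalized, PSTNN is a tighter surrogate of the tubal multi-rank than the full tubal nuclear norm.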
Non-convex Penalty for Tensor Completion and Robust PCA
In this paper, we propose a novel non-convex tensor rank surrogate function
and a novel non-convex sparsity measure for tensors. The basic idea is to
sidestep the bias of norm-based penalties by introducing concavity. Furthermore,
we employ the proposed non-convex penalties in tensor recovery problems such as
tensor completion and tensor robust principal component analysis, which have
various real-world applications such as image inpainting and denoising. Due to the
concavity, the models are difficult to solve. To tackle this problem, we devise
majorization minimization algorithms, which optimize upper bounds of original
functions in each iteration, and every sub-problem is solved by the alternating
direction method of multipliers. Finally, experimental results on natural and
hyperspectral images demonstrate the effectiveness and efficiency of the
proposed methods.
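The majorization-minimization idea can be illustrated on a scalar toy problem (my own illustration with an assumed log penalty, not the paper's surrogate): the concave penalty is majorized by its tangent line each iteration, the surrogate has a closed-form minimizer, and the objective decreases monotonically.

```python
import numpy as np

def mm_log_penalty(y, lam=1.0, gamma=0.5, n_iter=50):
    """Majorization-minimization for the scalar toy model
        min_{x >= 0} 0.5*(x - y)**2 + lam*log(1 + x/gamma).
    The concave log term is majorized by its tangent at x_t (the
    tangent of a concave function lies above it), giving the
    surrogate minimizer x = y - lam/(gamma + x_t), clipped at 0."""
    x = max(y, 0.0)
    obj = lambda x: 0.5 * (x - y) ** 2 + lam * np.log(1.0 + x / gamma)
    history = [obj(x)]
    for _ in range(n_iter):
        x = max(y - lam / (gamma + x), 0.0)  # minimize the surrogate
        history.append(obj(x))
    return x, history

x, hist = mm_log_penalty(3.0)
print(all(hist[i + 1] <= hist[i] + 1e-12 for i in range(len(hist) - 1)))  # True
```

In the tensor setting the same trick is applied to each singular value, so every MM sub-problem becomes a weighted singular value thresholding, solvable inside ADMM.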
Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization
This paper studies the Tensor Robust Principal Component Analysis (TRPCA) problem,
which extends the known Robust PCA (Candes et al. 2011) to the tensor case. Our
model is based on a new tensor Singular Value Decomposition (t-SVD) (Kilmer and
Martin 2011) and its induced tensor tubal rank and tensor nuclear norm.
Consider that we have a 3-way tensor ${\mathcal{X}}\in\mathbb{R}^{n_1\times n_2\times n_3}$ such that ${\mathcal{X}}={\mathcal{L}}_0+{\mathcal{E}}_0$,
where ${\mathcal{L}}_0$ has low tubal rank and ${\mathcal{E}}_0$ is sparse. Is
it possible to recover both components? In this work, we prove that under
certain suitable assumptions, we can recover both the low-rank and the sparse
components exactly by simply solving a convex program whose objective is a
weighted combination of the tensor nuclear norm and the $\ell_1$-norm, i.e.,
$\min_{{\mathcal{L}},\,{\mathcal{E}}} \|{\mathcal{L}}\|_*+\lambda\|{\mathcal{E}}\|_1, \ \text{s.t.}\ {\mathcal{X}}={\mathcal{L}}+{\mathcal{E}}$,
where $\lambda={1}/{\sqrt{\max(n_1,n_2)n_3}}$. Interestingly, TRPCA involves
RPCA as a special case when $n_3=1$ and thus it is a simple and elegant tensor extension of RPCA.
Also, numerical experiments verify our theory, and the application to image
denoising demonstrates the effectiveness of our method.
Comment: IEEE International Conference on Computer Vision and Pattern
Recognition (CVPR), 2016
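For the $n_3=1$ special case the convex program is exactly matrix RPCA, and a standard fixed-penalty ADMM alternates two proximal maps. A minimal sketch (my own, not the authors' solver; the $\lambda$ rule follows the recovery theory quoted above):

```python
import numpy as np

def soft(X, tau):
    """Entrywise soft-thresholding: prox of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca_admm(X, n_iter=500, mu=1.0):
    """ADMM for min ||L||_* + lam*||E||_1 s.t. X = L + E (the n3 = 1
    case), with lam = 1/sqrt(max(n1, n2)). A bare sketch: fixed
    penalty mu, no stopping test."""
    n1, n2 = X.shape
    lam = 1.0 / np.sqrt(max(n1, n2))
    L = np.zeros_like(X); E = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - E + Y / mu, 1.0 / mu)
        E = soft(X - L + Y / mu, lam / mu)
        Y = Y + mu * (X - L - E)
    return L, E

# Toy run: rank-1 plus two sparse corruptions.
rng = np.random.default_rng(0)
L0 = np.outer(rng.standard_normal(10), rng.standard_normal(10))
E0 = np.zeros((10, 10)); E0[2, 3] = 5.0; E0[7, 1] = -4.0
X = L0 + E0
L, E = rpca_admm(X)
```

The tensor version replaces the matrix SVT with its t-SVD counterpart applied slice-wise in the Fourier domain.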
Low-Rank Tensor Completion by Truncated Nuclear Norm Regularization
Currently, low-rank tensor completion has gained considerable attention in
recovering visual data whose elements are partially missing. By taking
a color image or video as a three-dimensional (3D) tensor, previous studies
have suggested several definitions of tensor nuclear norm. However, they have
limitations and may not properly approximate the real rank of a tensor.
Besides, they do not explicitly use the low-rank property in optimization. It
has been shown that the recently proposed truncated nuclear norm (TNN) can
replace the traditional nuclear norm as a better estimate of the rank of a matrix.
Thus, this paper presents a new method called the tensor truncated nuclear norm
(T-TNN), which proposes a new definition of tensor nuclear norm and extends the
truncated nuclear norm from the matrix case to the tensor case. Benefiting from
the low-rankness of TNN, our approach improves the efficacy of tensor
completion. We exploit the previously proposed tensor singular value
decomposition and the alternating direction method of multipliers in
optimization. Extensive experiments on real-world videos and images demonstrate
that the performance of our approach is superior to those of existing methods.
Comment: Accepted as a poster presentation at the 24th International
Conference on Pattern Recognition, 20-24 August 2018, Beijing, China
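The matrix quantity being extended here is simple to state: the truncated nuclear norm keeps the dominant part of the spectrum unpenalized. A short sketch (my own illustration of the definition, not the authors' T-TNN algorithm):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Truncated nuclear norm ||X||_r: the sum of the singular values
    *beyond* the r largest, i.e. the full nuclear norm minus the top-r
    partial sum. Minimizing it shrinks only the tail of the spectrum,
    so it tracks the rank better than the full nuclear norm."""
    s = np.linalg.svd(X, compute_uv=False)
    return s[r:].sum()

X = np.diag([5.0, 3.0, 1.0])
print(truncated_nuclear_norm(X, 1))  # 4.0
```

T-TNN applies this idea slice-wise under the t-SVD, with ADMM handling the resulting non-convex subproblems.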
A Randomized Tensor Train Singular Value Decomposition
The hierarchical SVD provides a quasi-best low-rank approximation of
high-dimensional data in the hierarchical Tucker framework. Similar to the SVD for
matrices, it provides a fundamental but expensive tool for tensor computations.
In the present work we examine generalizations of randomized matrix
decomposition methods to higher order tensors in the framework of the
hierarchical tensors representation. In particular we present and analyze a
randomized algorithm for the calculation of the hierarchical SVD (HSVD) for the
tensor train (TT) format.
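The matrix kernel that such randomized hierarchical/TT decompositions apply, unfolding by unfolding, is the randomized range finder. A minimal sketch in the Halko-Martinsson-Tropp style (my own illustration, not the paper's HSVD algorithm):

```python
import numpy as np

def randomized_svd(A, r, oversample=5):
    """Randomized SVD: sketch the range of A with a Gaussian test
    matrix, orthonormalize, project A onto that basis, and take a
    small exact SVD of the projection."""
    rng = np.random.default_rng(0)
    Omega = rng.standard_normal((A.shape[1], r + oversample))
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal range basis
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :r], s[:r], Vt[:r]

# On an exactly rank-3 matrix the sketch captures the range and the
# factorization is accurate to machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))
U, s, Vt = randomized_svd(A, 3)
print(np.allclose((U * s) @ Vt, A))  # True
```

In the TT setting, this routine is run on each unfolding in turn, which is what replaces the expensive exact SVDs of the hierarchical SVD.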
A Fast Algorithm for Cosine Transform Based Tensor Singular Value Decomposition
Recently, there has been a lot of research into the tensor singular value
decomposition (t-SVD) based on the discrete Fourier transform (DFT) matrix. The
main aim of this paper is to propose and study the tensor singular value
decomposition based on the discrete cosine transform (DCT) matrix. The
advantages of using the DCT are that (i) complex arithmetic is not involved in
the cosine transform based tensor singular value decomposition, so
computational cost can be saved; (ii) the intrinsic reflexive boundary
condition along the tubes in the third dimension of tensors is employed, so its
performance can be better than that obtained with the periodic boundary condition
in DFT. We demonstrate that the tensor product between two tensors by using DCT
can be equivalent to the multiplication between a block Toeplitz-plus-Hankel
matrix and a block vector. Numerical examples of low-rank tensor completion
further illustrate that using the DCT is about two times faster than using the
DFT, and that the errors of video and multispectral image completion with the
DCT are smaller than those with the DFT.
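The transform-domain definition carries over directly: replace the FFT along the tubes with an orthonormal DCT and everything stays in real arithmetic. A sketch (my own illustration of the transform-domain product; normalization conventions for the DCT-based t-product vary across papers):

```python
import numpy as np
from scipy.fft import dct, idct

def t_product_dct(A, B):
    """Cosine transform based t-product: orthonormal DCT along the
    tubes, frontal-slice matrix products in the transform domain,
    then the inverse DCT. All arithmetic is real, unlike the
    DFT-based t-product."""
    n3 = A.shape[2]
    Ah = dct(A, axis=2, norm='ortho')
    Bh = dct(B, axis=2, norm='ortho')
    Ch = np.stack([Ah[:, :, k] @ Bh[:, :, k] for k in range(n3)], axis=2)
    return idct(Ch, axis=2, norm='ortho')

# The identity for this product is the tensor whose DCT-domain
# frontal slices are all identity matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))
Ih = np.stack([np.eye(4)] * 5, axis=2)
I = idct(Ih, axis=2, norm='ortho')
print(np.allclose(t_product_dct(A, I), A))  # True
```

The paper's Toeplitz-plus-Hankel equivalence describes what this transform-domain product looks like back in the original domain under the reflexive boundary condition.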
Tensor Robust Principal Component Analysis with A New Tensor Nuclear Norm
In this paper, we consider the Tensor Robust Principal Component Analysis
(TRPCA) problem, which aims to exactly recover the low-rank and sparse
components from their sum. Our model is based on the recently proposed
tensor-tensor product (or t-product). Induced by the t-product, we first
rigorously deduce the tensor spectral norm, tensor nuclear norm, and tensor
average rank, and show that the tensor nuclear norm is the convex envelope of
the tensor average rank within the unit ball of the tensor spectral norm. These
definitions, their relationships and properties are consistent with matrix
cases. Equipped with the new tensor nuclear norm, we then solve the TRPCA
problem by solving a convex program and provide the theoretical guarantee for
the exact recovery. Our TRPCA model and recovery guarantee include matrix RPCA
as a special case. Numerical experiments verify our results, and the
applications to image recovery and background modeling problems demonstrate the
effectiveness of our method.
Comment: arXiv admin note: text overlap with arXiv:1708.0418
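The new tensor nuclear norm induced by the t-product is computable slice-wise in the Fourier domain. A sketch (my own illustration; the 1/n3 factor follows the convention that makes the norm reduce to the matrix nuclear norm when n3 = 1, matching the RPCA-as-special-case claim):

```python
import numpy as np

def tensor_nuclear_norm(T):
    """t-product induced tensor nuclear norm: the average over the
    Fourier-domain frontal slices of their matrix nuclear norms
    (1/n3 normalization assumed)."""
    That = np.fft.fft(T, axis=2)
    n3 = T.shape[2]
    return sum(np.linalg.svd(That[:, :, k], compute_uv=False).sum()
               for k in range(n3)) / n3

# With n3 = 1 it coincides with the matrix nuclear norm, which is why
# the TRPCA model contains matrix RPCA as a special case.
X = np.diag([2.0, 1.0])
print(abs(tensor_nuclear_norm(X[:, :, None]) - 3.0) < 1e-12)  # True
```

The convex-envelope result in the abstract says this quantity is the tightest convex lower bound of the tensor average rank on the spectral-norm unit ball, mirroring the matrix nuclear norm / rank relationship.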