Robust Tensor Completion Using Transformed Tensor SVD
In this paper, we study robust tensor completion using the transformed tensor
singular value decomposition (SVD), which employs unitary transform matrices
instead of the discrete Fourier transform matrix used in the traditional
tensor SVD. The main motivation is that a lower tubal-rank tensor can be
obtained with suitably chosen unitary transform matrices than with the discrete
Fourier transform matrix, which makes robust tensor completion more effective.
Experimental results on hyperspectral, video, and face datasets show that the
recovery performance of the transformed tensor SVD for the robust tensor
completion problem is better in PSNR than that of the Fourier transform and
other robust tensor completion methods.
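The tubal-rank claim can be illustrated numerically: transform the tensor's mode-3 tubes by a unitary matrix and count the rank of each frontal slice. A minimal NumPy sketch, where the function name and setup are our illustrative assumptions rather than the authors' code:

```python
import numpy as np

def transformed_multi_rank(X, U, tol=1e-10):
    """Multi-rank of a 3-way tensor X under a unitary transform U along
    mode 3. The transformed t-SVD replaces the DFT of the classical
    t-SVD with an arbitrary unitary U; a well-chosen U can yield a
    lower tubal rank. Illustrative sketch only."""
    # Transform each tube X[i, j, :] by U: Xt[i, j, l] = sum_k X[i, j, k] U[l, k]
    Xt = np.einsum('ijk,lk->ijl', X, U)
    return [np.linalg.matrix_rank(Xt[:, :, k], tol=tol) for k in range(Xt.shape[2])]
```

For a tensor whose tubes all point along one direction, a transform aligned with that direction concentrates the rank into a single frontal slice, while the identity (like the DFT, generically) spreads it across all slices.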
Multi-dimensional imaging data recovery via minimizing the partial sum of tubal nuclear norm
In this paper, we investigate tensor recovery problems within the tensor
singular value decomposition (t-SVD) framework. We propose the partial sum of
the tubal nuclear norm (PSTNN) of a tensor. The PSTNN is a surrogate of the
tensor tubal multi-rank. We build two PSTNN-based minimization models for two
typical tensor recovery problems, i.e., the tensor completion and the tensor
principal component analysis. We give two algorithms based on the alternating
direction method of multipliers (ADMM) to solve the proposed PSTNN-based tensor
recovery models. Experimental results on synthetic and real-world data
reveal the superiority of the proposed PSTNN.
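The PSTNN itself is easy to state within the t-SVD framework: take the FFT along the third mode and, for each frontal slice, sum the singular values beyond the largest N, so the dominant low-rank structure is kept out of the penalty. A hedged sketch (the function name and interface are ours):

```python
import numpy as np

def pstnn(X, N):
    """Partial sum of the tubal nuclear norm of a 3-way tensor:
    FFT along mode 3, then for each frontal slice sum the singular
    values *beyond* the N largest. Illustrative sketch only."""
    Xf = np.fft.fft(X, axis=2)
    total = 0.0
    for k in range(X.shape[2]):
        s = np.linalg.svd(Xf[:, :, k], compute_uv=False)
        total += s[N:].sum()  # skip the N largest singular values
    return total
```

A tensor whose Fourier-domain slices are all rank one has zero PSTNN for N = 1, while its full tubal nuclear norm (N = 0) is strictly positive.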
Low-Rank Tensor Completion by Truncated Nuclear Norm Regularization
Currently, low-rank tensor completion has attracted considerable attention for
recovering incomplete visual data in which some entries are missing. By taking
a color image or video as a three-dimensional (3D) tensor, previous studies
have suggested several definitions of tensor nuclear norm. However, they have
limitations and may not properly approximate the real rank of a tensor.
Besides, they do not explicitly use the low-rank property in optimization. It
is proved that the recently proposed truncated nuclear norm (TNN) can replace
the traditional nuclear norm as a better estimate of the rank of a matrix.
Thus, this paper presents a new method called the tensor truncated nuclear norm
(T-TNN), which gives a new definition of the tensor nuclear norm and extends the
truncated nuclear norm from the matrix case to the tensor case. Benefiting from
the low-rankness promoted by TNN, our approach improves the efficacy of tensor
completion. We exploit the previously proposed tensor singular value
decomposition and the alternating direction method of multipliers in
optimization. Extensive experiments on real-world videos and images demonstrate
that the performance of our approach is superior to those of existing methods.
Comment: Accepted as a poster presentation at the 24th International Conference on Pattern Recognition, 20-24 August 2018, Beijing, China.
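The matrix building block behind T-TNN can be written in a few lines; the tensor version applies this slice-wise through the t-SVD. Illustrative code, not the paper's implementation:

```python
import numpy as np

def truncated_nuclear_norm(M, r):
    """Truncated nuclear norm of a matrix: the sum of its singular
    values excluding the r largest. Minimizing it penalizes only the
    residual beyond rank r, so it approximates the rank better than
    the full nuclear norm does."""
    s = np.linalg.svd(M, compute_uv=False)
    return s[r:].sum()
```

For a matrix of true rank r, the truncated nuclear norm at that r vanishes, while the ordinary nuclear norm (r = 0) does not.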
Novel Factorization Strategies for Higher Order Tensors: Implications for Compression and Recovery of Multi-linear Data
In this paper we propose novel methods for compression and recovery of
multilinear data under limited sampling. We exploit the recently proposed
tensor Singular Value Decomposition (t-SVD) [1], which is a group-theoretic
framework for tensor decomposition. In contrast to popular existing tensor
decomposition techniques such as higher-order SVD (HOSVD), t-SVD has optimality
properties similar to the truncated SVD for matrices. Based on t-SVD, we first
construct novel tensor-rank like measures to characterize informational and
structural complexity of multilinear data. Following that we outline a
complexity penalized algorithm for tensor completion from missing entries. As
an application, 3-D and 4-D (color) video data compression and recovery are
considered. We show that videos with linear camera motion can be represented
more efficiently using t-SVD compared to traditional approaches based on
vectorizing or flattening of the tensors. Application of the proposed tensor
completion algorithm for video recovery from missing entries is shown to yield
a superior performance over existing methods. In conclusion, we point out
several research directions and implications for online prediction of
multilinear data.
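The truncated t-SVD underlying such compression is commonly implemented in the Fourier domain; a minimal sketch under that standard assumption (names are ours):

```python
import numpy as np

def tsvd_truncate(X, r):
    """Tubal-rank-r truncation via the t-SVD: FFT along mode 3,
    truncate each frontal-slice SVD to rank r, inverse FFT. This
    mirrors the best-approximation role the truncated matrix SVD
    plays for matrices. Illustrative sketch only."""
    Xf = np.fft.fft(X, axis=2)
    Yf = np.empty_like(Xf)
    for k in range(X.shape[2]):
        u, s, vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Yf[:, :, k] = (u[:, :r] * s[:r]) @ vh[:r, :]  # keep top-r terms
    return np.real(np.fft.ifft(Yf, axis=2))
```

A tensor of tubal rank one is reproduced exactly by the r = 1 truncation.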
A New Low-Rank Tensor Model for Video Completion
In this paper, we propose a new low-rank tensor model based on the circulant
algebra, namely, twist tensor nuclear norm or t-TNN for short. The twist tensor
denotes a 3-way tensor representation that laterally stores 2D data slices in
order. On the one hand, t-TNN convexly relaxes the tensor multi-rank of the
twist tensor in the Fourier domain, which allows efficient computation using
the FFT. On the other hand, t-TNN equals the nuclear norm of the block
circulant matricization of the twist tensor in the original domain, which extends the
traditional matrix nuclear norm in a block circulant way. We test the t-TNN
model on a video completion application that aims to fill missing values and
the experiment results validate its effectiveness, especially when dealing with
video recorded by a non-stationary panning camera. The block circulant
matricization of the twist tensor can be transformed into a circulant block
representation with nuclear norm invariance. This representation, after
transformation, exploits the horizontal translation relationship between the
frames in a video, and endows the t-TNN model with a more powerful ability to
reconstruct panning videos than the existing state-of-the-art low-rank models.
Comment: 8 pages, 11 figures, 1 table.
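The stated equality between the Fourier-domain relaxation and the nuclear norm of the block circulant matricization can be checked numerically, since the block circulant matrix is block-diagonalized by the DFT. A small sketch with `bcirc` as our illustrative helper name:

```python
import numpy as np

def bcirc(X):
    """Block circulant matricization of a 3-way tensor X (n1 x n2 x n3):
    the frontal slices X[:, :, k] form the first block column and are
    shifted circularly across block columns. Its nuclear norm equals
    the sum of the nuclear norms of the FFT-domain frontal slices."""
    n1, n2, n3 = X.shape
    M = np.zeros((n1 * n3, n2 * n3))
    for i in range(n3):
        for j in range(n3):
            M[i*n1:(i+1)*n1, j*n2:(j+1)*n2] = X[:, :, (i - j) % n3]
    return M
```

Because the DFT similarity is unitary, the singular values of `bcirc(X)` are exactly the union of the singular values of the Fourier-domain slices.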
Low-M-Rank Tensor Completion and Robust Tensor PCA
In this paper, we propose a new approach to solve low-rank tensor completion
and robust tensor PCA. Our approach is based on novel notions of
(even-order) tensor ranks, to be called the M-rank, the symmetric M-rank, and
the strongly symmetric M-rank. We discuss the connections between these new
tensor ranks and the CP-rank and the symmetric CP-rank of an even-order tensor.
We show that the M-rank provides a reliable and easily computable approximation
to the CP-rank. As a result, we propose to replace the CP-rank by the M-rank in
the low-CP-rank tensor completion and robust tensor PCA. Numerical results
suggest that our new approach based on the M-rank outperforms existing methods
that are based on low-n-rank, t-SVD and KBR approaches for solving low-rank
tensor completion and robust tensor PCA when the underlying tensor has low
CP-rank.
Multilinear Map Layer: Prediction Regularization by Structural Constraint
In this paper we propose and study a technique to impose structural
constraints on the output of a neural network, which can reduce the amount of
computation and the number of parameters, in addition to improving prediction
accuracy, when the output is known to approximately conform to a low-rankness
prior. The technique proceeds by replacing the output layer of the neural
network with so-called MLM layers, which force the output to be the result of
some multilinear map, such as a hybrid Kronecker-dot product or a Kronecker Tensor
Product. In particular, given an "autoencoder" model trained on the SVHN dataset,
we can construct a new model with an MLM layer, achieving a 62% reduction in
the total number of parameters and a reduction of the reconstruction error from 0.088
to 0.004. Further experiments on other autoencoder model variants trained on
the SVHN dataset also demonstrate the efficacy of MLM layers.
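The parameter saving from a Kronecker-structured output map can be illustrated directly: the layer computes `kron(A, B) @ h` without ever materializing the large matrix. A hypothetical sketch, with `mlm_output`, `A`, and `B` being our names, not the paper's:

```python
import numpy as np

def mlm_output(h, A, B):
    """Apply a Kronecker-structured linear map to the flat vector h:
    equivalent to kron(A, B) @ h, but computed as A @ H @ B.T on the
    reshaped input. A dense layer of the same shape needs
    (m*p)*(n*q) weights; the factored form needs only m*n + p*q."""
    m, n = A.shape
    p, q = B.shape
    H = h.reshape(n, q)          # row-major reshape matches np.kron's layout
    return (A @ H @ B.T).reshape(-1)
```

The same idea extends to other multilinear maps; the structural constraint acts as a regularizer on the output.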
Tensor Ring Decomposition
Tensor networks have in recent years emerged as powerful tools for
solving large-scale optimization problems. One of the most popular tensor
networks is the tensor train (TT) decomposition, which acts as a building
block for more complicated tensor networks. However, the TT decomposition depends strongly
on permutations of tensor dimensions, due to its strictly sequential
multilinear products over latent cores, which leads to difficulties in finding
the optimal TT representation. In this paper, we introduce a fundamental tensor
decomposition model that represents a high-dimensional tensor by circular
multilinear products over a sequence of low-dimensional cores. This can be
graphically interpreted as a cyclic interconnection of 3rd-order tensors, and
is thus termed the tensor ring (TR) decomposition. The key advantage of the TR model is
the circular dimensional permutation invariance which is gained by employing
the trace operation and treating the latent cores equivalently. The TR model
can be viewed as a linear combination of TT decompositions and thus offers a
powerful and generalized representation ability. For the optimization of latent
cores, we present four different algorithms based on the sequential SVDs, ALS
scheme, and block-wise ALS techniques. Furthermore, the mathematical properties
of the TR model are investigated, showing that basic multilinear algebra can be
performed efficiently using TR representations and that classical tensor
decompositions can be conveniently transformed into the TR representation.
Finally, experiments on both synthetic signals and real-world datasets were
conducted to evaluate the performance of the different algorithms.
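The circular structure can be made concrete: each tensor entry is the trace of a product of core slices, and the cyclic property of the trace is precisely what yields the circular dimensional permutation invariance. A small sketch with illustrative names:

```python
import numpy as np

def tr_entry(cores, idx):
    """One entry of a tensor-ring model:
    T[i1,...,id] = Trace(G1[:, i1, :] @ ... @ Gd[:, id, :]).
    Core Gk has shape (r_{k-1}, n_k, r_k) with r_0 = r_d, closing
    the ring. Illustrative sketch only."""
    M = np.eye(cores[0].shape[0])
    for G, i in zip(cores, idx):
        M = M @ G[:, i, :]
    return np.trace(M)

def tr_full(cores):
    """Reconstruct the full tensor entry by entry (small sizes only)."""
    shape = tuple(G.shape[1] for G in cores)
    T = np.empty(shape)
    for idx in np.ndindex(*shape):
        T[idx] = tr_entry(cores, idx)
    return T
```

Cyclically shifting the cores cyclically permutes the dimensions of the reconstructed tensor, which is the invariance the abstract describes.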
Minimum n-Rank Approximation via Iterative Hard Thresholding
The problem of recovering a low-n-rank tensor is an extension of the sparse
recovery problem from the low-dimensional space (matrix space) to the
high-dimensional space (tensor space) and has many applications in computer
vision and graphics, such as image inpainting and video inpainting. In this
paper, we consider a new tensor recovery model, named minimum n-rank
approximation (MnRA), and propose an appropriate iterative hard thresholding
algorithm in which an upper bound of the n-rank is given in advance. The
convergence analysis of the proposed algorithm is also presented. In
particular, we show that for the noiseless case, linear convergence can be
obtained for the proposed algorithm under proper conditions. Additionally,
combining an effective heuristic for determining the n-rank, we can also apply
the proposed algorithm to solve MnRA when the n-rank is unknown in advance.
Some preliminary numerical results on randomly generated and real low-n-rank
tensor completion problems are reported, which show the efficiency of the
proposed algorithms.
Comment: Iterative hard thresholding; low-n-rank tensor recovery; tensor completion; compressed sensing.
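The algorithmic pattern is simplest to see in the matrix case, where the hard-thresholding projection is a single truncated SVD (in the tensor setting the projection works on mode unfoldings instead). A hedged sketch, not the paper's algorithm:

```python
import numpy as np

def iht_matrix_completion(M_obs, mask, r, iters=500, step=1.0):
    """Iterative hard thresholding for low-rank completion (matrix
    analogue): gradient step on the observed entries, then hard
    thresholding to the r leading singular values, where r is the
    rank upper bound given in advance."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        G = mask * (M_obs - X)       # gradient of 0.5*||mask*(M - X)||^2
        Y = X + step * G
        u, s, vh = np.linalg.svd(Y, full_matrices=False)
        X = (u[:, :r] * s[:r]) @ vh[:r, :]   # project onto rank <= r
    return X
```

With all entries observed and r equal to the true rank, a single iteration already recovers the matrix exactly, which matches the role of the projection as a best rank-r approximation.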
Beating level-set methods for 3D seismic data interpolation: a primal-dual alternating approach
Acquisition cost is a crucial bottleneck for seismic workflows, and low-rank
formulations for data interpolation allow practitioners to `fill in' data
volumes from critically subsampled data acquired in the field. The tremendous
size of the seismic data volumes required for seismic processing remains a
major challenge for these techniques.
We propose a new approach to solve residual constrained formulations for
interpolation. We represent the data volume using matrix factors, and build a
block-coordinate algorithm with constrained convex subproblems that are solved
with a primal-dual splitting scheme. The new approach is competitive with
state-of-the-art level-set algorithms that interchange the role of objectives with
constraints. We use the new algorithm to successfully interpolate a large scale
5D seismic data volume, generated from the geologically complex synthetic 3D
Compass velocity model, where 80% of the data has been removed.
Comment: 16 pages, 7 figures.
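The factorized representation at the heart of this approach writes the data volume as a product of two thin matrix factors updated block-coordinate-wise. The paper's subproblems are residual-constrained and solved by primal-dual splitting; as a structural illustration only, here is a plain regularized alternating-least-squares sketch with all names ours:

```python
import numpy as np

def factorized_interpolation(D, mask, r, iters=50, lam=1e-6):
    """Low-rank interpolation with explicit factors X = L @ R.T,
    alternating ridge-regularized least squares on the observed
    entries (mask == 1). Minimal sketch, not the paper's solver."""
    m, n = D.shape
    rng = np.random.default_rng(0)
    L = rng.standard_normal((m, r))
    R = rng.standard_normal((n, r))
    for _ in range(iters):
        for i in range(m):            # update row i of L from observed row entries
            idx = mask[i] > 0
            A, b = R[idx], D[i, idx]
            L[i] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ b)
        for j in range(n):            # update row j of R from observed column entries
            idx = mask[:, j] > 0
            A, b = L[idx], D[idx, j]
            R[j] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ b)
    return L @ R.T
```

Storing `L` and `R` instead of the full volume is what makes such factorized schemes attractive for large 5D seismic data.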