Tensor train rank minimization with nonlocal self-similarity for tensor completion
The tensor train (TT) rank has received increasing attention in tensor
completion due to its ability to capture the global correlation of high-order
tensors. For third-order visual data, direct TT rank minimization does not
exploit the full potential of the TT rank for high-order tensors.
TT rank minimization combined with ket augmentation, which transforms a
lower-order tensor (e.g., visual data) into a higher-order one, suffers from
serious block artifacts. To tackle this issue, we propose TT
rank minimization with nonlocal self-similarity for tensor completion by
simultaneously exploring the spatial, temporal/spectral, and nonlocal
redundancy in visual data. More precisely, the TT rank minimization is
performed on a higher-order tensor, called a group, formed by stacking similar
cubes, which naturally and fully exploits the ability of the TT rank to model
high-order tensors. Moreover, a perturbation analysis for the TT low-rankness
of each group is established. We develop the alternating direction method of
multipliers tailored for the specific structure to solve the proposed model.
Extensive experiments demonstrate that the proposed method is superior to
several existing state-of-the-art methods in terms of both qualitative and
quantitative measures.
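As a rough illustration of the grouping step described above, the sketch below stacks the cubes most similar to a reference cube into a fourth-order "group" tensor on which TT rank minimization could then operate. The function name, cube sizes, and the plain Euclidean matching are illustrative assumptions, not the paper's exact block-matching procedure.

```python
import numpy as np

def extract_similar_group(video, key_ij, cube=8, frames=4, K=16, stride=4):
    """Stack the K cubes most similar to a reference cube into a 4th-order
    'group' tensor; TT rank minimization is then performed per group.
    Patch matching here uses plain Euclidean distance (illustrative only)."""
    H, W, _ = video.shape
    i0, j0 = key_ij
    ref = video[i0:i0 + cube, j0:j0 + cube, :frames]
    scored = []
    for i in range(0, H - cube + 1, stride):
        for j in range(0, W - cube + 1, stride):
            cand = video[i:i + cube, j:j + cube, :frames]
            scored.append((np.sum((cand - ref) ** 2), i, j))
    scored.sort(key=lambda t: t[0])                  # most similar first
    return np.stack([video[i:i + cube, j:j + cube, :frames]
                     for _, i, j in scored[:K]], axis=-1)  # (cube, cube, frames, K)
```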
Concatenated image completion via tensor augmentation and completion
This paper proposes a novel framework called concatenated image completion
via tensor augmentation and completion (ICTAC), which recovers missing entries
of color images with high accuracy. Typical images are second- or third-order
tensors (2D/3D) depending on whether they are grayscale or color, so tensor
completion algorithms are well suited to their recovery. The proposed framework
performs image completion by concatenating copies of a single image that has
missing entries into a third-order tensor, applying a dimensionality
augmentation technique to the tensor, utilizing a tensor completion algorithm
for recovering its missing entries, and finally extracting the recovered image
from the tensor. The solution relies on two key components that have been
recently proposed to take advantage of the tensor train (TT) rank: A tensor
augmentation tool called ket augmentation (KA) that represents a low-order
tensor by a higher-order tensor, and the algorithm tensor completion by
parallel matrix factorization via tensor train (TMac-TT), which has been
demonstrated to outperform state-of-the-art tensor completion algorithms.
Simulation results for color image recovery show the clear advantage of our
framework against current state-of-the-art tensor completion algorithms.

Comment: 7 pages, 6 figures, submitted to ICSPCS 201
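For intuition about the KA step the framework relies on, here is a minimal numpy sketch that casts a 2^n x 2^n grayscale image into an n-th order tensor with modes of size 4 via its recursive 2x2 block structure. The bit-ordering convention is an assumption, and the papers apply the analogous construction channel-wise to color images.

```python
import numpy as np

def ket_augmentation(img):
    """Cast a 2**n x 2**n grayscale image into an n-th order tensor with
    modes of size 4 using its recursive 2x2 block structure (KA sketch)."""
    n = int(np.log2(img.shape[0]))
    assert img.shape == (2 ** n, 2 ** n), "sketch assumes a square 2^n image"
    t = img.reshape([2] * n + [2] * n)       # split row bits and column bits
    order = [ax for i in range(n) for ax in (i, n + i)]
    t = np.transpose(t, order)               # interleave row/column bits
    return t.reshape([4] * n)                # each (row, col) bit pair -> one size-4 mode
```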
Efficient tensor completion for color image and video recovery: Low-rank tensor train
This paper proposes a novel approach to tensor completion, which recovers
missing entries of data represented by tensors. The approach is based on the
tensor train (TT) rank, which is able to capture hidden information from
tensors thanks to its definition from a well-balanced matricization scheme.
Accordingly, new optimization formulations for tensor completion are proposed
as well as two new algorithms for their solution. The first one called simple
low-rank tensor completion via tensor train (SiLRTC-TT) is intimately related
to minimizing a nuclear norm based on TT rank. The second one is from a
multilinear matrix factorization model to approximate the TT rank of a tensor,
and is called tensor completion by parallel matrix factorization via tensor
train (TMac-TT). A tensor augmentation scheme of transforming a low-order
tensor to higher-orders is also proposed to enhance the effectiveness of
SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery
show the clear advantage of our method over all other methods.

Comment: Submitted to the IEEE Transactions on Image Processing. arXiv admin note: substantial text overlap with arXiv:1601.0108
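A minimal sketch of the quantity SiLRTC-TT targets: a weighted sum of nuclear norms over the canonical (balanced) TT unfoldings. The uniform default weights below are a placeholder; the paper prescribes its own weighting.

```python
import numpy as np

def tt_unfolding(x, k):
    """Matricize x by grouping modes 1..k as rows and modes k+1..N as
    columns: the balanced unfoldings whose ranks define the TT rank."""
    return x.reshape(int(np.prod(x.shape[:k])), -1)

def tt_nuclear_norm(x, weights=None):
    """Weighted sum of nuclear norms of the canonical unfoldings -- the
    convex surrogate minimized by SiLRTC-TT (uniform weights here are a
    placeholder for the paper's choice)."""
    N = x.ndim
    if weights is None:
        weights = [1.0 / (N - 1)] * (N - 1)
    return sum(w * np.linalg.norm(tt_unfolding(x, k), 'nuc')
               for k, w in enumerate(weights, start=1))
```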
High-order Tensor Completion for Data Recovery via Sparse Tensor-train Optimization
In this paper, we address the problem of tensor data completion. Tensor-train
decomposition is adopted because of its powerful representation ability and
linear scalability with tensor order. We propose an algorithm named Sparse
Tensor-train Optimization (STTO), which treats the incomplete data as a sparse
tensor and uses a first-order optimization method to find the factors of the
tensor-train decomposition. Our algorithm is shown to perform well in
simulation experiments in both low-order and high-order cases. We also
employ a tensorization method that transforms the data to a higher-order form
to enhance the performance of our algorithm. The results of image recovery
experiments in various settings show that our method outperforms other
completion algorithms. In particular, when the missing rate is very high,
e.g., 90% to 99%, our method is significantly better than the
state-of-the-art methods.

Comment: 5 pages (including 1 page of references), ICASSP 201
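To make the "sparse tensor plus first-order optimization" idea concrete, the sketch below performs one gradient step on the cores of a third-order TT, with the squared loss evaluated only on observed entries. This is a hand-derived illustration for order three, not the paper's general-order implementation.

```python
import numpy as np

def stto_grad_step(G1, G2, G3, data, mask, lr=1e-2):
    """One gradient step for a 3rd-order TT with cores G1:(1,n1,r1),
    G2:(r1,n2,r2), G3:(r2,n3,1); the squared loss is restricted to
    observed entries (mask == 1), as in the STTO idea."""
    recon = np.einsum('ia,ajb,bk->ijk', G1[0], G2, G3[:, :, 0])
    R = mask * (recon - data)                          # residual on observed set
    g1 = np.einsum('ijk,ajb,bk->ia', R, G2, G3[:, :, 0])[None]
    g2 = np.einsum('ijk,ia,bk->ajb', R, G1[0], G3[:, :, 0])
    g3 = np.einsum('ijk,ia,ajb->bk', R, G1[0], G2)[..., None]
    return G1 - lr * g1, G2 - lr * g2, G3 - lr * g3
```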
Scaled Nuclear Norm Minimization for Low-Rank Tensor Completion
Minimizing the nuclear norm of a matrix has been shown to be very efficient
in reconstructing a low-rank sampled matrix. Furthermore, minimizing the sum of
nuclear norms of matricizations of a tensor has been shown to be very efficient
in recovering a low-Tucker-rank sampled tensor. In this paper, we propose to
recover a low-TT-rank sampled tensor by minimizing a weighted sum of nuclear
norms of unfoldings of the tensor. We provide numerical results showing that
the proposed method requires significantly fewer samples to recover the
original tensor than simply minimizing the unweighted sum of nuclear norms,
since the structure of the unfoldings in the TT tensor model is
fundamentally different from that of matricizations in the Tucker tensor model.
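The weighting enters such schemes as a per-unfolding threshold in the singular value thresholding (SVT) step, as the sketch below suggests; the averaging of the folded results is a simplifying assumption rather than the paper's exact iteration.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*,
    shrinking singular values and discarding those below tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def weighted_tt_shrinkage(x, weights, tau=1.0):
    """Apply SVT to each canonical TT unfolding with its own scale w_k and
    average the folded results -- one plausible building block of a weighted
    nuclear norm scheme (a sketch, not the paper's exact iteration)."""
    dims, N = x.shape, x.ndim
    out = np.zeros(dims)
    for k, w in zip(range(1, N), weights):             # weights has length N-1
        M = x.reshape(int(np.prod(dims[:k])), -1)
        out += svt(M, tau * w).reshape(dims)
    return out / len(weights)
```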
Efficient tensor completion: Low-rank tensor train
This paper proposes a novel formulation of the tensor completion problem to
impute missing entries of data represented by tensors. The formulation is
introduced in terms of tensor train (TT) rank which can effectively capture
global information of tensors thanks to its construction by a well-balanced
matricization scheme. Two algorithms are proposed to solve the corresponding
tensor completion problem. The first one called simple low-rank tensor
completion via tensor train (SiLRTC-TT) is intimately related to minimizing the
TT nuclear norm. The second one is based on a multilinear matrix factorization
model to approximate the TT rank of the tensor and called tensor completion by
parallel matrix factorization via tensor train (TMac-TT). These algorithms are
applied to complete both synthetic and real world data tensors. Simulation
results of synthetic data show that the proposed algorithms are efficient in
estimating missing entries for tensors with either low Tucker rank or TT rank
while Tucker-based algorithms are only comparable in the case of low Tucker
rank tensors. When applied to recover color images represented by ninth-order
tensors augmented from third-order ones, the proposed algorithms outperform
the Tucker-based algorithms.

Comment: 11 pages, 9 figures
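A minimal sketch of one TMac-TT-style sweep, assuming factor pairs (U_k, V_k) for each canonical unfolding: each pair is refreshed by alternating least squares, the folds are averaged, and observed entries are re-imposed. The unweighted averaging is a simplification of the paper's update.

```python
import numpy as np

def tmac_tt_sweep(x, data, mask, factors):
    """One sweep of the TMac-TT idea: fit each canonical TT unfolding
    X_[k] with a low-rank product U_k @ V_k by alternating least squares,
    fold the fits back, average them, and keep observed entries fixed.
    `factors` maps k -> (U_k, V_k); the averaging is a simplification."""
    dims = x.shape
    folds = []
    for k, (U, V) in factors.items():
        Xk = x.reshape(int(np.prod(dims[:k])), -1)
        U = Xk @ V.T @ np.linalg.pinv(V @ V.T)       # least-squares U-update
        V = np.linalg.pinv(U.T @ U) @ U.T @ Xk       # least-squares V-update
        factors[k] = (U, V)
        folds.append((U @ V).reshape(dims))
    x_new = np.mean(folds, axis=0)
    x_new[mask] = data[mask]                         # re-impose known entries
    return x_new, factors
```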
Tensor Ring Decomposition with Rank Minimization on Latent Space: An Efficient Approach for Tensor Completion
In tensor completion tasks, the traditional low-rank tensor decomposition
models suffer from the laborious model selection problem due to their high
model sensitivity. In particular, for tensor ring (TR) decomposition, the
number of model possibilities grows exponentially with the tensor order, which
makes it rather challenging to find the optimal TR decomposition. In this
paper, by exploiting the low-rank structure of the TR latent space, we propose
a novel tensor completion method which is robust to model selection. In
contrast to imposing the low-rank constraint on the data space, we introduce
nuclear norm regularization on the latent TR factors, resulting in the
optimization step using singular value decomposition (SVD) being performed at a
much smaller scale. By leveraging the alternating direction method of
multipliers (ADMM) scheme, the latent TR factors with optimal rank and the
recovered tensor can be obtained simultaneously. Our proposed algorithm is
shown to effectively alleviate the burden of TR-rank selection, thereby greatly
reducing the computational cost. The extensive experimental results on both
synthetic and real-world data demonstrate the superior performance and
efficiency of the proposed approach against the state-of-the-art algorithms.
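The computational point can be seen in a small sketch: placing the nuclear norm on a TR core means the SVD in the proximal step runs on a matrix of latent size rather than on a huge data unfolding. The unfolding convention below is an assumption; the paper fixes its own within the ADMM scheme.

```python
import numpy as np

def tr_core_prox(core, tau):
    """SVT on an unfolding of a TR core G of shape (r1, n, r2). Because the
    nuclear norm is placed on the latent factor rather than the full data
    tensor, the SVD runs on a small (r1*n) x r2 matrix -- the source of the
    claimed speedup. (Sketch; unfolding convention is an assumption.)"""
    r1, n, r2 = core.shape
    U, s, Vt = np.linalg.svd(core.reshape(r1 * n, r2), full_matrices=False)
    return ((U * np.maximum(s - tau, 0.0)) @ Vt).reshape(r1, n, r2)
```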
Robust Low-Rank Tensor Ring Completion
Low-rank tensor completion recovers missing entries based on different tensor
decompositions. Owing to its strength in exploiting higher-order data
structure, the low-rank tensor ring model has been applied to tensor
completion. To address its sensitivity to the sparse component, as arises in
tensor principal component analysis, we propose robust tensor ring completion
(RTRC), which separates the latent low-rank tensor component from the sparse
component using a limited number of measurements. The low-rank tensor
component is constrained by a weighted sum of nuclear norms of its balanced
unfoldings, while the sparse component is regularized by its l1 norm. We
analyze the RTRC model and give an exact recovery guarantee. The alternating direction method
of multipliers is used to divide the problem into several sub-problems with
fast solutions. In numerical experiments, we verify the recovery condition of
the proposed method on synthetic data, and show that it outperforms the
state-of-the-art methods in terms of both accuracy and computational
complexity on a number of real-world tasks, i.e., light-field image
recovery, shadow removal from face images, and background extraction from
color video.
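The sparse side of such a model reduces to elementwise soft-thresholding, the proximal operator of the l1 norm; the sketch below shows that sub-problem under a simplified splitting, not the full RTRC iteration.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise shrinkage: proximal operator of lam * ||.||_1, used for
    the sparse component in robust completion models."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_step(data, low_rank, mask, lam):
    """One sub-problem of an RTRC-style split: with the low-rank estimate
    fixed, the sparse component on the observed entries is a
    soft-thresholded residual (simplified sketch)."""
    return mask * soft_threshold(data - low_rank, lam)
```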
Tensor-Ring Nuclear Norm Minimization and Application for Visual Data Completion
Tensor ring (TR) decomposition has been successfully used to obtain the
state-of-the-art performance in the visual data completion problem. However,
the existing TR-based completion methods are highly non-convex and
computationally demanding. In addition, determining the optimal TR
rank is a challenging task in practice. To overcome these drawbacks, we first
introduce a class of new tensor nuclear norms based on tensor circular
unfolding. We then theoretically establish a connection between the ranks of
the circularly-unfolded matrices and the TR ranks. We also develop an efficient
tensor completion algorithm by minimizing the proposed tensor nuclear norm.
Extensive experimental results demonstrate that the proposed tensor completion
method outperforms conventional tensor completion methods on the
image/video inpainting problem with striped missing values.

Comment: This paper has been accepted by ICASSP 201
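A small sketch of tensor circular unfolding, the operation underlying the proposed nuclear norms: rotate the modes cyclically so that d consecutive modes (mod N) form the rows, then matricize. The indexing convention here is an assumption; the paper defines its own.

```python
import numpy as np

def circular_unfolding(x, k, d):
    """Tensor circular unfolding: cyclically permute the modes so that modes
    k..k+d-1 (mod N) come first, then matricize with those modes as rows.
    The rank of this matrix relates to the TR ranks, which is what the
    proposed nuclear norms exploit. (Indexing convention is an assumption.)"""
    N = x.ndim
    order = [(k + i) % N for i in range(N)]          # cyclic mode permutation
    xt = np.transpose(x, order)
    rows = int(np.prod(xt.shape[:d]))
    return xt.reshape(rows, -1)
```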
Tensor Completion Algorithms in Big Data Analytics
Tensor completion is a problem of filling the missing or unobserved entries
of partially observed tensors. Due to the multidimensional character of tensors
in describing complex datasets, tensor completion algorithms and their
applications have received wide attention and achieved notable success in
areas such as data mining, computer vision, signal processing, and
neuroscience. In this survey,
we provide a modern overview of recent advances in tensor completion algorithms
from the perspective of big data analytics characterized by diverse variety,
large volume, and high velocity. We characterize these advances from four
perspectives: general tensor completion algorithms, tensor completion with
auxiliary information (variety), scalable tensor completion algorithms
(volume), and dynamic tensor completion algorithms (velocity). Further, we
identify several tensor completion applications on real-world data-driven
problems and present some common experimental frameworks popularized in the
literature. Our goal is to summarize these popular methods and introduce them
to researchers and practitioners for promoting future research and
applications. We conclude with a discussion of key challenges and promising
research directions in this community for future exploration.