Iterative Singular Tube Hard Thresholding Algorithms for Tensor Completion
Due to the explosive growth of large-scale data sets, tensors have been a
vital tool for analyzing and processing high-dimensional data. Unlike the
matrix case, tensor decomposition has been defined in various formats, which
can be further used to define the best low-rank approximation of a tensor to
significantly reduce the dimensionality for signal compression and recovery. In
this paper, we consider the low-rank tensor completion problem. We propose a
novel class of iterative singular tube hard thresholding algorithms for tensor
completion based on the low-tubal-rank tensor approximation, including basic,
accelerated deterministic, and stochastic versions. Convergence guarantees are
provided, including for the special case of linear measurements.
Numerical experiments on tensor compressive sensing and color image inpainting
demonstrate convergence and computational efficiency in practice.
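The basic iterative scheme described in the abstract, a gradient step on the observed entries followed by projection onto low-tubal-rank tensors via a truncated t-SVD, can be sketched in NumPy. This is an illustrative reconstruction of the general technique, not the authors' code; the function names, unit step size, and fixed iteration count are assumptions.

```python
import numpy as np

def tubal_hard_threshold(X, r):
    # Project onto tensors of tubal rank <= r via a truncated t-SVD:
    # FFT along the third mode, truncate each frontal-slice SVD, invert.
    Xf = np.fft.fft(X, axis=2)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s[r:] = 0.0  # keep only the r largest singular values per slice
        Xf[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(Xf, axis=2))

def istht_complete(Y, mask, r, step=1.0, iters=400):
    # Basic iterative singular tube hard thresholding for completion:
    # Y holds the observed entries (zero elsewhere), mask is 0/1.
    X = np.zeros_like(Y)
    for _ in range(iters):
        X = tubal_hard_threshold(X + step * mask * (Y - X), r)
    return X
```

A typical use would sample a subset of entries of a low-tubal-rank tensor and recover the rest with `istht_complete`; the accelerated and stochastic variants the abstract mentions would modify the gradient step, not the thresholding operator.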
Tensor Completion via Leverage Sampling and Tensor QR Decomposition for Network Latency Estimation
In this paper, we consider network latency estimation, an important metric of
network performance. However, large-scale network latency estimation requires
substantial computing time. We therefore propose a new method that is much
faster while maintaining high accuracy. The latency data between network nodes
form a matrix, and introducing the time dimension yields a tensor model. Thus,
the entire problem can be cast as a tensor completion problem. The main idea of
our method is to improve the tensor leverage sampling strategy and to introduce
tensor QR decomposition into tensor completion. To achieve faster tensor
leverage sampling, we approximate the tensor singular value decomposition
(t-SVD) with the faster tensor CSVD-QR decomposition.
To achieve faster completion of the incomplete tensor, we use the tensor
-norm rather than the traditional tensor nuclear norm. Furthermore, we
introduce tensor QR decomposition into the alternating direction method of
multipliers (ADMM) framework. Numerical experiments show that our method is
faster than state-of-the-art algorithms with satisfactory accuracy.
Comment: 20 pages, 7 figures
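The leverage-sampling idea above, computing sampling probabilities from an orthogonal factor obtained by QR rather than from a full t-SVD, can be sketched in NumPy. This is a hedged illustration of the general technique, not the paper's CSVD-QR algorithm; the slicewise Fourier-domain QR and all function names are assumptions.

```python
import numpy as np

def tube_leverage_scores(X):
    # Row leverage scores of each frontal slice in the Fourier domain,
    # using a thin QR factorization as a cheap stand-in for the t-SVD.
    Xf = np.fft.fft(X, axis=2)
    scores = np.zeros(X.shape[0])
    for k in range(X.shape[2]):
        Q, _ = np.linalg.qr(Xf[:, :, k])   # Q has orthonormal columns
        scores += np.sum(np.abs(Q) ** 2, axis=1)
    return scores / scores.sum()           # normalize to probabilities

def leverage_sample_rows(X, m, seed=0):
    # Draw m distinct horizontal slices, weighted by leverage score.
    rng = np.random.default_rng(seed)
    p = tube_leverage_scores(X)
    return rng.choice(X.shape[0], size=m, replace=False, p=p)
```

Rows (here, network nodes) with high leverage carry more information about the tensor's column space, so sampling them preferentially tends to preserve completion accuracy while measuring far fewer entries.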