Randomized Tensor Ring Decomposition and Its Application to Large-scale Data Reconstruction
Dimensionality reduction is an essential technique for multi-way large-scale data, i.e., tensors. Tensor ring (TR) decomposition has become popular due to its high representation ability and flexibility. However, traditional TR decomposition algorithms suffer from high computational cost when facing large-scale data. In this paper, taking advantage of the recently proposed tensor random projection method, we propose two TR decomposition algorithms. By employing random projection on every mode of the large-scale tensor, the TR decomposition can be processed at a much smaller scale. Simulation experiments show that the proposed algorithms are substantially faster than traditional algorithms without loss of accuracy, and they outperform other randomized algorithms in deep learning dataset compression and hyperspectral image reconstruction experiments.
Comment: ICASSP submission
Block-Randomized Gradient Descent Methods with Importance Sampling for CP Tensor Decomposition
This work considers the problem of computing the CANDECOMP/PARAFAC (CP) decomposition of large tensors. One popular approach is to translate the problem into a sequence of overdetermined least squares subproblems with Khatri-Rao product (KRP) structure. In this work, for tensors whose fibers carry different levels of importance, we combine stochastic optimization with randomized sampling and present a mini-batch stochastic gradient descent algorithm with importance sampling for these special least squares subproblems. Four different sampling strategies are provided; they avoid forming the full KRP or the corresponding probabilities and sample the desired fibers directly from the original tensor. Moreover, a more practical algorithm with adaptive step size is also given. For the proposed algorithms, we present their convergence properties and numerical performance. Results on synthetic data show that our algorithms outperform existing algorithms in terms of accuracy or the number of iterations.
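The sampling-and-reweighting step at the heart of this approach can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the matrix Z stands in for the KRP and is formed explicitly purely for clarity (the paper's strategies avoid forming the full KRP), and all names (fiber_probs, sgd_importance) and default parameters are assumptions.

```python
# Minimal sketch with assumed names: mini-batch SGD with importance sampling
# for an overdetermined least squares problem  min_W (1/n)||Z @ W - Y||_F^2.
# In CP-ALS the rows of Z would come from a Khatri-Rao product and the rows of
# Y are fibers of the unfolded tensor; here both are explicit dense matrices
# purely for illustration (the paper's strategies avoid forming the full KRP).
import numpy as np

def fiber_probs(Y):
    """Sampling probabilities proportional to squared fiber norms."""
    w = np.sum(Y ** 2, axis=1) + 1e-12
    return w / w.sum()

def sgd_importance(Z, Y, batch=32, steps=2000, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = Z.shape[0]
    W = np.zeros((Z.shape[1], Y.shape[1]))
    p = fiber_probs(Y)
    for _ in range(steps):
        idx = rng.choice(n, size=batch, p=p)       # draw "important" fibers
        w = 1.0 / (n * p[idx])                     # reweight to keep the estimate unbiased
        resid = Z[idx] @ W - Y[idx]
        grad = Z[idx].T @ (w[:, None] * resid) / batch  # stochastic gradient (factor 2 absorbed in lr)
        W -= lr * grad
    return W

# Usage on synthetic data: 10,000 noisy "fibers" generated from a known factor.
Z = np.random.rand(10_000, 20)
Y = Z @ np.random.rand(20, 50) + 0.01 * np.random.randn(10_000, 50)
W = sgd_importance(Z, Y)
print(np.linalg.norm(Z @ W - Y) / np.linalg.norm(Y))   # relative residual
```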