4,663 research outputs found
Fundamental Conditions for Low-CP-Rank Tensor Completion
We consider the problem of low canonical polyadic (CP) rank tensor
completion. A completion is a tensor whose entries agree with the observed
entries and whose rank matches the given CP rank. We analyze the manifold
structure corresponding to the tensors with the given rank and define a set of
polynomials based on the sampling pattern and CP decomposition. Then, we show
that finite completability of the sampled tensor is equivalent to having a
certain number of algebraically independent polynomials among the defined
polynomials. Our proposed approach results in characterizing the maximum number
of algebraically independent polynomials in terms of a simple geometric
structure of the sampling pattern, and therefore we obtain the deterministic
necessary and sufficient condition on the sampling pattern for finite
completability of the sampled tensor. Moreover, assuming that the entries of
the tensor are sampled independently with probability p and using the
mentioned deterministic analysis, we propose a combinatorial method to derive a
lower bound on the sampling probability p, or equivalently, on the number of
sampled entries, that guarantees finite completability with high probability. We
also show that the existing result for the matrix completion problem can be
used to obtain a loose lower bound on the sampling probability p. In
addition, we obtain deterministic and probabilistic conditions for unique
completability. It is seen that the number of samples required for finite or
unique completability obtained by the proposed analysis on the CP manifold is
orders-of-magnitude lower than that obtained by the existing analysis on the
Grassmannian manifold.
Rank Determination for Low-Rank Data Completion
Recently, fundamental conditions on the sampling patterns have been obtained
for finite completability of low-rank matrices or tensors given the
corresponding ranks. In this paper, we consider the scenario where the rank is
not given and we aim to approximate the unknown rank based on the location of
sampled entries and some given completion. We consider a number of data models,
including single-view matrix, multi-view matrix, CP tensor, tensor-train tensor
and Tucker tensor. For each of these data models, we provide an upper bound on
the rank when an arbitrary low-rank completion is given. We characterize these
bounds both deterministically, i.e., with probability one given that the
sampling pattern satisfies certain combinatorial properties, and
probabilistically, i.e., with high probability given that the sampling
probability is above some threshold. Moreover, for both single-view matrix and
CP tensor, we are able to show that the obtained upper bound is exactly equal
to the unknown rank if the lowest-rank completion is given. Furthermore, we
provide numerical experiments for the case of single-view matrix, where we use
nuclear norm minimization to find a low-rank completion of the sampled data and
we observe that in most cases the proposed upper bound on the rank is
equal to the true rank.
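The rank-estimation experiment described above can be sketched as follows: complete the sampled matrix with a nuclear-norm-based method (here a simple SoftImpute-style soft-thresholded SVD iteration, used as a stand-in for the nuclear norm minimization mentioned in the abstract), then upper-bound the rank by counting significant singular values. The function names and the choices of `lam` and `tol` are illustrative, not the paper's.

```python
import numpy as np

def soft_impute(M, mask, lam=0.5, iters=200):
    """Approximate nuclear-norm-regularized matrix completion via
    iterative soft-thresholded SVD (a SoftImpute-style sketch)."""
    X = np.zeros_like(M)
    for _ in range(iters):
        # Fill unobserved entries with the current estimate.
        Z = mask * M + (1 - mask) * X
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - lam, 0.0)   # soft-threshold the singular values
        X = (U * s) @ Vt
    return X

def estimate_rank(X, tol=1e-2):
    """Upper-bound the rank by counting singular values above a
    relative tolerance."""
    s = np.linalg.svd(X, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# Toy example: a rank-2 matrix with roughly 60% of entries observed.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = (rng.random((30, 30)) < 0.6).astype(float)
X = soft_impute(A, mask)
print(estimate_rank(X))
```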
Multi-dimensional imaging data recovery via minimizing the partial sum of tubal nuclear norm
In this paper, we investigate tensor recovery problems within the tensor
singular value decomposition (t-SVD) framework. We propose the partial sum of
the tubal nuclear norm (PSTNN) of a tensor. The PSTNN is a surrogate of the
tensor tubal multi-rank. We build two PSTNN-based minimization models for two
typical tensor recovery problems, i.e., tensor completion and tensor
principal component analysis. We give two algorithms based on the alternating
direction method of multipliers (ADMM) to solve the proposed PSTNN-based tensor
recovery models. Experimental results on synthetic and real-world data
demonstrate the superiority of the proposed PSTNN.
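The PSTNN described in the abstract can be computed directly: transform the tensor along its third mode with an FFT and, for each frontal slice, sum the singular values beyond the N largest. This is a sketch based only on the abstract's description; the exact scaling convention is an assumption.

```python
import numpy as np

def pstnn(T, N=1):
    """Partial sum of the tubal nuclear norm of a 3-way tensor:
    after an FFT along the third mode, sum the singular values of
    each frontal slice *excluding* its N largest. The 1/n3 scaling
    is one common convention, assumed here."""
    That = np.fft.fft(T, axis=2)
    total = 0.0
    for k in range(T.shape[2]):
        s = np.linalg.svd(That[:, :, k], compute_uv=False)
        total += s[N:].sum()
    return total / T.shape[2]

# A tensor whose Fourier-domain frontal slices all have rank <= 1
# has (numerically) zero PSTNN for N = 1.
a = np.random.default_rng(1).standard_normal((5, 1))
b = np.random.default_rng(2).standard_normal((1, 6))
T = np.repeat((a @ b)[:, :, None], 4, axis=2)  # tubal rank 1
print(pstnn(T, N=1))
```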
Exact tensor completion using t-SVD
In this paper we focus on the problem of completion of multidimensional
arrays (also referred to as tensors) from limited sampling. Our approach is
based on a recently proposed tensor-Singular Value Decomposition (t-SVD) [1].
Using this factorization one can derive a notion of tensor rank, referred to as
the tensor tubal rank, which has optimality properties similar to that of
matrix rank derived from the SVD. As shown in [2], some multidimensional data,
such as panning video sequences, exhibit low tensor tubal rank, and we look at the
problem of completing such data under random sampling of the data cube. We show
that by solving a convex optimization problem, which minimizes the tensor
nuclear norm obtained as the convex relaxation of tensor tubal rank, one can
guarantee recovery with overwhelming probability as long as samples in
proportion to the degrees of freedom in t-SVD are observed. In this sense our
results are order-wise optimal. The conditions under which this result holds
are very similar to the incoherence conditions for matrix completion, albeit
we define incoherence under the algebraic set-up of the t-SVD. We show the
performance of the algorithm on some real data sets and compare it with other
existing approaches based on tensor flattening and Tucker decomposition.
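A minimal sketch of the t-SVD quantities involved, assuming the standard definitions (DFT along the third mode, frontal-slice SVDs in the Fourier domain); the helper names and the 1/n3 scaling in the nuclear norm are our choices:

```python
import numpy as np

def tubal_rank(T, tol=1e-8):
    """Tensor tubal rank under the t-SVD: the maximum rank of the
    frontal slices after a DFT along the third mode."""
    That = np.fft.fft(T, axis=2)
    return max(np.linalg.matrix_rank(That[:, :, k], tol=tol)
               for k in range(T.shape[2]))

def tnn(T):
    """Tensor nuclear norm, the convex surrogate of tubal rank:
    the (scaled) sum of singular values of all Fourier-domain slices."""
    That = np.fft.fft(T, axis=2)
    return sum(np.linalg.svd(That[:, :, k], compute_uv=False).sum()
               for k in range(T.shape[2])) / T.shape[2]

rng = np.random.default_rng(0)
# The t-product of a 6x2x4 and a 2x7x4 tensor has tubal rank <= 2:
# slice-wise products in the Fourier domain, then an inverse FFT.
A, B = rng.standard_normal((6, 2, 4)), rng.standard_normal((2, 7, 4))
Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
C = np.real(np.fft.ifft(np.einsum('irk,rjk->ijk', Ah, Bh), axis=2))
print(tubal_rank(C))
```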
Tensor Completion Algorithms in Big Data Analytics
Tensor completion is a problem of filling the missing or unobserved entries
of partially observed tensors. Due to the multidimensional character of tensors
in describing complex datasets, tensor completion algorithms and their
applications have received wide attention and achieved notable success in areas like data
mining, computer vision, signal processing, and neuroscience. In this survey,
we provide a modern overview of recent advances in tensor completion algorithms
from the perspective of big data analytics characterized by diverse variety,
large volume, and high velocity. We characterize these advances from four
perspectives: general tensor completion algorithms, tensor completion with
auxiliary information (variety), scalable tensor completion algorithms
(volume), and dynamic tensor completion algorithms (velocity). Further, we
identify several tensor completion applications on real-world data-driven
problems and present some common experimental frameworks popularized in the
literature. Our goal is to summarize these popular methods and introduce them
to researchers and practitioners for promoting future research and
applications. We conclude with a discussion of key challenges and promising
research directions in this community for future exploration.
Variational Bayesian Inference for Robust Streaming Tensor Factorization and Completion
Streaming tensor factorization is a powerful tool for processing high-volume
and multi-way temporal data in Internet networks, recommender systems and
image/video data analysis. Existing streaming tensor factorization algorithms
rely on least-squares data fitting and they do not possess a mechanism for
tensor rank determination. This leaves them susceptible to outliers and
vulnerable to over-fitting. This paper presents a Bayesian robust streaming
tensor factorization model to identify sparse outliers, automatically determine
the underlying tensor rank and accurately fit low-rank structure. We implement
our model in Matlab and compare it with existing algorithms on tensor datasets
generated from dynamic MRI and Internet traffic. Comment: ICDM 2018.
On Deterministic Sampling Patterns for Robust Low-Rank Matrix Completion
In this letter, we study deterministic sampling patterns for the
completion of a low-rank matrix corrupted with sparse noise, a problem also known
as robust matrix completion. We extend recent results on deterministic
sampling patterns in the absence of noise based on the geometric analysis on
the Grassmannian manifold. A special case where each column has a certain
number of noisy entries is considered, for which our probabilistic analysis
is particularly efficient. Furthermore, assuming that the rank of the original
matrix is not given, we provide an analysis to determine if the rank of a valid
completion is indeed the actual rank of the data corrupted with sparse noise by
verifying some conditions. Comment: Accepted to IEEE Signal Processing Letters.
Scaled Nuclear Norm Minimization for Low-Rank Tensor Completion
Minimizing the nuclear norm of a matrix has been shown to be very efficient
in reconstructing a low-rank sampled matrix. Furthermore, minimizing the sum of
nuclear norms of matricizations of a tensor has been shown to be very efficient
in recovering a low-Tucker-rank sampled tensor. In this paper, we propose to
recover a low-TT-rank sampled tensor by minimizing a weighted sum of nuclear
norms of unfoldings of the tensor. We provide numerical results to show that
our proposed method requires a significantly smaller number of samples to recover
the original tensor than simply minimizing the sum of nuclear
norms, since the structure of the unfoldings in the TT tensor model is
fundamentally different from that of matricizations in the Tucker tensor model.
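The objective described above, a weighted sum of nuclear norms of the TT unfoldings, can be evaluated as follows; the uniform default weights are illustrative, not the weighting proposed in the paper:

```python
import numpy as np

def tt_unfolding(T, k):
    """k-th TT unfolding: modes 1..k become rows, modes k+1..d columns."""
    return T.reshape(int(np.prod(T.shape[:k])), -1)

def weighted_sum_nuclear(T, weights=None):
    """Weighted sum of nuclear norms of the d-1 TT unfoldings of a
    d-way tensor (uniform weights by default, as a placeholder)."""
    d = T.ndim
    if weights is None:
        weights = [1.0] * (d - 1)
    return sum(w * np.linalg.svd(tt_unfolding(T, k), compute_uv=False).sum()
               for k, w in zip(range(1, d), weights))

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
print(weighted_sum_nuclear(T))
```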
Learning from Binary Multiway Data: Probabilistic Tensor Decomposition and its Statistical Optimality
We consider the problem of decomposing a higher-order tensor with binary
entries. Such data problems arise frequently in applications such as
neuroimaging, recommender systems, topic modeling, and sensor network
localization. We propose a multilinear Bernoulli model, develop a
rank-constrained likelihood-based estimation method, and obtain the theoretical
accuracy guarantees. In contrast to continuous-valued problems, the binary
tensor problem exhibits an interesting phase transition phenomenon according to
the signal-to-noise ratio. The error bound for the parameter tensor estimation
is established, and we show that the obtained rate is minimax optimal under the
considered model. Furthermore, we develop an alternating optimization algorithm
with convergence guarantees. The efficacy of our approach is demonstrated
through both simulations and analyses of multiple data sets on the tasks of
tensor completion and clustering.
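As a sketch of the multilinear Bernoulli model, assuming a logistic link and a CP-structured parameter tensor (the model class in the abstract; the specific link is our assumption), the log-likelihood can be written as:

```python
import numpy as np

def bernoulli_loglik(Y, factors):
    """Log-likelihood of a binary 3-way tensor Y under a multilinear
    Bernoulli model: P(Y_ijk = 1) = sigmoid(Theta_ijk), where Theta
    is a rank-R CP tensor built from the factor matrices."""
    A, B, C = factors
    Theta = np.einsum('ir,jr,kr->ijk', A, B, C)  # CP reconstruction
    p = 1.0 / (1.0 + np.exp(-Theta))             # logistic link
    eps = 1e-12                                  # numerical safeguard
    return np.sum(Y * np.log(p + eps) + (1 - Y) * np.log(1 - p + eps))

# Simulate binary observations from a rank-2 parameter tensor.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
Theta = np.einsum('ir,jr,kr->ijk', A, B, C)
Y = (rng.random(Theta.shape) < 1 / (1 + np.exp(-Theta))).astype(float)
print(bernoulli_loglik(Y, (A, B, C)))
```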
Provable Tensor Factorization with Missing Data
We study the problem of low-rank tensor factorization in the presence of
missing data. We ask the following question: how many sampled entries do we
need, to efficiently and exactly reconstruct a tensor with a low-rank
orthogonal decomposition? We propose a novel alternating minimization based
method which iteratively refines estimates of the singular vectors. We show
that under certain standard assumptions, our method can exactly recover a
low-rank three-mode tensor from sufficiently many randomly sampled
entries. In the process of proving this result, we
solve two challenging sub-problems for tensors with missing data. First, in the
process of analyzing the initialization step, we prove a generalization of a
celebrated result by Szemer\'edi et al. on the spectrum of random graphs.
Next, we prove global convergence of alternating minimization with a good
initialization. Simulations suggest that the dependence of the sample size on
dimensionality is indeed tight.
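The alternating-minimization idea can be illustrated on the simplest case, a rank-1 three-way tensor with missing entries. This is a toy sketch with random initialization, not the paper's method, which handles general orthogonal decompositions and includes a provably good initialization step:

```python
import numpy as np

def rank1_als_completion(T, mask, iters=100):
    """Complete a rank-1 3-way tensor from observed entries by
    alternating masked least squares: fix two factors, solve for
    the third in closed form, and cycle."""
    n1, n2, n3 = T.shape
    rng = np.random.default_rng(0)
    u, v, w = (rng.standard_normal(n) for n in (n1, n2, n3))
    MT = mask * T
    for _ in range(iters):
        # Each update minimizes sum_{observed} (T - u v w)^2 in one factor.
        vw = np.einsum('j,k->jk', v, w)
        u = np.einsum('ijk,jk->i', MT, vw) / np.maximum(
            np.einsum('ijk,jk->i', mask, vw ** 2), 1e-12)
        uw = np.einsum('i,k->ik', u, w)
        v = np.einsum('ijk,ik->j', MT, uw) / np.maximum(
            np.einsum('ijk,ik->j', mask, uw ** 2), 1e-12)
        uv = np.einsum('i,j->ij', u, v)
        w = np.einsum('ijk,ij->k', MT, uv) / np.maximum(
            np.einsum('ijk,ij->k', mask, uv ** 2), 1e-12)
    return np.einsum('i,j,k->ijk', u, v, w)

# Recover a rank-1 8x9x10 tensor from ~50% of its entries.
rng = np.random.default_rng(1)
u0, v0, w0 = rng.standard_normal(8), rng.standard_normal(9), rng.standard_normal(10)
T = np.einsum('i,j,k->ijk', u0, v0, w0)
mask = (rng.random(T.shape) < 0.5).astype(float)
X = rank1_als_completion(T, mask)
print(np.linalg.norm(X - T) / np.linalg.norm(T))
```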