7,997 research outputs found
Optimal Low-Rank Tensor Recovery from Separable Measurements: Four Contractions Suffice
Tensors play a central role in many modern machine learning and signal
processing applications. In such applications, the target tensor is usually of
low rank, i.e., can be expressed as a sum of a small number of rank one
tensors. This motivates us to consider the problem of low rank tensor recovery
from a class of linear measurements called separable measurements. As specific
examples, we focus on two distinct types of separable measurement mechanisms:
(a) random projections, where each measurement corresponds to an inner product
of the tensor with a suitable random tensor, and (b) the completion problem
where measurements constitute revelation of a random set of entries. We present
a computationally efficient algorithm, with rigorous and order-optimal sample
complexity results (up to logarithmic factors) for tensor recovery. Our method
is based on reduction to matrix completion sub-problems and adaptation of
Leurgans' method for tensor decomposition. We extend the methodology and sample
complexity results to higher order tensors, and experimentally validate our
theoretical results.
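The contraction step at the heart of Leurgans' method is short enough to sketch
directly (an illustrative sketch with made-up dimensions, not the authors' code):
contracting the third mode of a low-rank tensor with two random vectors reduces
the decomposition to an eigenproblem whose eigenvectors recover one factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 3
# Synthetic rank-r third-order tensor T = sum_k a_k (x) b_k (x) c_k
A = rng.standard_normal((n, r))
B = rng.standard_normal((n, r))
C = rng.standard_normal((n, r))
T = np.einsum('ik,jk,lk->ijl', A, B, C)

# Leurgans' contractions: collapse the third mode with two random vectors
x, y = rng.standard_normal(n), rng.standard_normal(n)
Tx = np.einsum('ijl,l->ij', T, x)   # equals A @ diag(C.T @ x) @ B.T
Ty = np.einsum('ijl,l->ij', T, y)   # equals A @ diag(C.T @ y) @ B.T

# Tx @ pinv(Ty) = A @ diag(ratios) @ pinv(A): its top-r eigenvectors
# are (up to scale and order) the columns of A
vals, vecs = np.linalg.eig(Tx @ np.linalg.pinv(Ty, rcond=1e-10))
A_hat = np.real(vecs[:, np.argsort(-np.abs(vals))[:r]])
```

The second factor is recovered symmetrically from Ty @ pinv(Tx), and the third
by solving a linear least-squares system given the first two.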
Novel Factorization Strategies for Higher Order Tensors: Implications for Compression and Recovery of Multi-linear Data
In this paper we propose novel methods for compression and recovery of
multilinear data under limited sampling. We exploit the recently proposed
tensor Singular Value Decomposition (t-SVD) [1], which is a group theoretic
framework for tensor decomposition. In contrast to popular existing tensor
decomposition techniques such as higher-order SVD (HOSVD), t-SVD has optimality
properties similar to the truncated SVD for matrices. Based on t-SVD, we first
construct novel tensor-rank like measures to characterize informational and
structural complexity of multilinear data. Following that, we outline a
complexity penalized algorithm for tensor completion from missing entries. As
an application, 3-D and 4-D (color) video data compression and recovery are
considered. We show that videos with linear camera motion can be represented
more efficiently using t-SVD compared to traditional approaches based on
vectorizing or flattening of the tensors. Application of the proposed tensor
completion algorithm for video recovery from missing entries is shown to yield
a superior performance over existing methods. In conclusion we point out
several research directions and implications for online prediction of
multilinear data.
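A minimal sketch of the t-SVD machinery (illustrative, not the authors'
implementation): the t-product and t-SVD reduce to FFTs along the third mode,
slice-wise matrix operations in the Fourier domain, and an inverse FFT, with
conjugate symmetry keeping the factors real.

```python
import numpy as np

def t_product(A, B):
    # t-product: FFT along the tubes, slice-wise matrix products, inverse FFT
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.real(np.fft.ifft(np.einsum('ijk,jlk->ilk', Af, Bf), axis=2))

def t_transpose(A):
    # transpose each frontal slice and reverse the order of slices 2..n3
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

def t_svd(A):
    n1, n2, n3 = A.shape
    Af = np.fft.fft(A, axis=2)
    Uf = np.zeros((n1, n1, n3), dtype=complex)
    Sf = np.zeros((n1, n2, n3), dtype=complex)
    Vf = np.zeros((n2, n2, n3), dtype=complex)
    m = min(n1, n2)
    for k in range(n3):
        if k <= n3 // 2:             # SVD of each Fourier-domain slice
            u, s, vh = np.linalg.svd(Af[:, :, k])
            Uf[:, :, k], Vf[:, :, k] = u, vh.conj().T
            Sf[np.arange(m), np.arange(m), k] = s
        else:                        # conjugate symmetry: A is real
            Uf[:, :, k] = Uf[:, :, n3 - k].conj()
            Sf[:, :, k] = Sf[:, :, n3 - k].conj()
            Vf[:, :, k] = Vf[:, :, n3 - k].conj()
    ifft = lambda X: np.real(np.fft.ifft(X, axis=2))
    return ifft(Uf), ifft(Sf), ifft(Vf)
```

Truncating the tubes of the middle factor yields the best approximation of a
given multi-rank, the optimality property (analogous to the truncated matrix
SVD) that the abstract above refers to.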
Fast Randomized Algorithms for t-Product Based Tensor Operations and Decompositions with Applications to Imaging Data
Tensors of order three or higher have found applications in diverse fields,
including image and signal processing, data mining, biomedical engineering and
link analysis, to name a few. In many applications that involve for example
time series or other ordered data, the corresponding tensor has a
distinguishing orientation that exhibits a low tubal-rank structure. This has
motivated the introduction of the tubal rank and the corresponding tubal
singular value decomposition in the literature. In this work, we develop
randomized algorithms for many common tensor operations, including tensor
low-rank approximation and decomposition, together with tensor multiplication.
The proposed tubal-focused algorithms employ a small number of lateral and/or
horizontal slices of the underlying third-order tensor, and come with relative
error guarantees for the quality of the obtained solutions. The
performance of the proposed algorithms is illustrated on diverse imaging
applications, including mass spectrometry data and image and video recovery
from incomplete and noisy data. The results show both good computational
speed-up vis-a-vis conventional completion algorithms and good accuracy.
Comment: 31 pages, 6 figures, to appear in the SIAM Journal on Imaging
Sciences
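The flavor of such relative-error guarantees is easiest to see in the matrix
analogue (an illustrative sketch, not the paper's algorithm): sketching the
range with a few random combinations of columns captures an exactly low-rank
matrix almost surely, and captures an approximately low-rank one up to a small
factor of the best low-rank error.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 60, 50, 5
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exactly rank r

# Randomized range finder: sample the column space, orthonormalize, project
Omega = rng.standard_normal((n, r + 5))       # small oversampling for stability
Q, _ = np.linalg.qr(M @ Omega)                # orthonormal basis of the sketch
M_hat = Q @ (Q.T @ M)                         # projection onto the sketched range
```

The tubal algorithms above play the same game with lateral/horizontal slices of
a third-order tensor in place of columns of a matrix.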
Low-Rank Tensor Completion by Truncated Nuclear Norm Regularization
Currently, low-rank tensor completion has gained considerable attention for
recovering incomplete visual data whose entries are partially missing. By taking
a color image or video as a three-dimensional (3D) tensor, previous studies
have suggested several definitions of tensor nuclear norm. However, they have
limitations and may not properly approximate the real rank of a tensor.
Besides, they do not explicitly use the low-rank property in optimization. It
has been shown that the recently proposed truncated nuclear norm (TNN) can
replace the traditional nuclear norm as a better approximation to the rank of a
matrix.
Thus, this paper presents a new method called the tensor truncated nuclear norm
(T-TNN), which proposes a new definition of tensor nuclear norm and extends the
truncated nuclear norm from the matrix case to the tensor case. Benefiting from
the low-rank property promoted by TNN, our approach improves the efficacy of tensor
completion. We exploit the previously proposed tensor singular value
decomposition and the alternating direction method of multipliers in
optimization. Extensive experiments on real-world videos and images demonstrate
that the performance of our approach is superior to those of existing methods.
Comment: Accepted as a poster presentation at the 24th International
Conference on Pattern Recognition, 20-24 August 2018, Beijing, China
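For intuition, the matrix truncated nuclear norm keeps only the tail singular
values, so it vanishes on any matrix whose rank is at most the truncation level
r; a minimal sketch of the matrix case (the paper's T-TNN applies the same idea
through the tensor singular value decomposition):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    # sum of all but the r largest singular values
    s = np.linalg.svd(X, compute_uv=False)
    return s[r:].sum()

rng = np.random.default_rng(2)
low_rank = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))
generic  = rng.standard_normal((20, 20))
# truncated_nuclear_norm(low_rank, 3) is ~0, unlike the plain nuclear norm,
# so minimizing it does not shrink the r dominant singular values
```

This is why TNN approximates the rank better than the nuclear norm: the r
leading singular values carry no penalty at all.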
Tensor Robust Principal Component Analysis with A New Tensor Nuclear Norm
In this paper, we consider the Tensor Robust Principal Component Analysis
(TRPCA) problem, which aims to exactly recover the low-rank and sparse
components from their sum. Our model is based on the recently proposed
tensor-tensor product (or t-product). Induced by the t-product, we first
rigorously deduce the tensor spectral norm, tensor nuclear norm, and tensor
average rank, and show that the tensor nuclear norm is the convex envelope of
the tensor average rank within the unit ball of the tensor spectral norm. These
definitions, their relationships and properties are consistent with matrix
cases. Equipped with the new tensor nuclear norm, we then solve the TRPCA
problem by solving a convex program and provide the theoretical guarantee for
the exact recovery. Our TRPCA model and recovery guarantee include matrix RPCA
as a special case. Numerical experiments verify our results, and the
applications to image recovery and background modeling problems demonstrate the
effectiveness of our method.
Comment: arXiv admin note: text overlap with arXiv:1708.0418
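Concretely, the t-product-induced tensor nuclear norm can be evaluated as the
average of the matrix nuclear norms of the Fourier-domain frontal slices (a
sketch under that definition); for $n_3 = 1$ it collapses to the matrix nuclear
norm, matching the matrix-RPCA special case noted above.

```python
import numpy as np

def tensor_nuclear_norm(A):
    # average matrix nuclear norm of the frontal slices in the Fourier domain
    Af = np.fft.fft(A, axis=2)
    n3 = A.shape[2]
    return sum(np.linalg.svd(Af[:, :, k], compute_uv=False).sum()
               for k in range(n3)) / n3
```

The averaging convention makes the norm the convex envelope of the tensor
average rank on the spectral-norm unit ball, as stated in the abstract.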
Rank regularization and Bayesian inference for tensor completion and extrapolation
A novel regularizer of the PARAFAC decomposition factors capturing the
tensor's rank is proposed in this paper, as the key enabler for completion of
three-way data arrays with missing entries. Set in a Bayesian framework, the
tensor completion method incorporates prior information to enhance its
smoothing and prediction capabilities. This probabilistic approach can
naturally accommodate general models for the data distribution, lending itself
to various fitting criteria that yield optimum estimates in the
maximum-a-posteriori sense. In particular, two algorithms are devised for
Gaussian- and Poisson-distributed data, which minimize the rank-regularized
least-squares error and Kullback-Leibler divergence, respectively. The proposed
technique is able to recover the "ground-truth" tensor rank when tested on
synthetic data, and to complete brain imaging and yeast gene expression
datasets with 50% and 15% missing entries, respectively, resulting in
recovery errors at -10 dB and -15 dB.
Comment: 12 pages, submitted to IEEE Transactions on Signal Processing
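The Poisson/Kullback-Leibler pairing follows directly from the negative
log-likelihood; for data entries $y_i$ with Poisson means $x_i$,

$$ -\log p(\mathbf{y}\mid\mathbf{x})
   = \sum_i \bigl(x_i - y_i \log x_i + \log y_i!\bigr)
   = \sum_i \Bigl(y_i \log \frac{y_i}{x_i} - y_i + x_i\Bigr)
     + \mathrm{const}(\mathbf{y}), $$

so the maximum-a-posteriori estimate under a Poisson model minimizes the
(generalized) KL divergence between the data and the reconstruction plus the
rank regularizer, while the Gaussian case yields the rank-regularized
least-squares analogue.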
A Tensor Completion Approach for Efficient and Robust Fingerprint-based Indoor Localization
Localization technology is important for the development of indoor
location-based services (LBS). The radio frequency (RF) fingerprint-based
localization is one of the most promising approaches. However, it is
challenging to apply this localization to real-world environments since it is
time-consuming and labor-intensive to construct a fingerprint database as a
prior for localization. Another challenge is that the presence of anomaly
readings in the fingerprints reduces the localization accuracy. To address
these two challenges, we propose an efficient and robust indoor localization
approach. First, we model the fingerprint database as a 3-D tensor, which
represents the relationships between fingerprints, locations and indices of
access points. Second, we introduce a tensor decomposition model for robust
fingerprint data recovery, which decomposes a partial observation tensor as the
superposition of a low-rank tensor and a sparse anomaly tensor. Third, we
exploit the alternating direction method of multipliers (ADMM) to solve the
convex optimization problem of tensor-nuclear-norm completion for the anomaly
case. Finally, we verify the proposed approach on a ground truth data set
collected in an office building of size 80 m x 20 m. Experimental results
show that, to achieve the same error rate of 4%, the sampling rate of our
approach is only 10%, whereas it is 60% for the state-of-the-art approach.
Moreover, the proposed approach yields more accurate localization (nearly 20%,
a 0.6 m improvement) over the compared approach.
Comment: 6 pages, 5 figures
Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization
This paper studies the Tensor Robust Principal Component Analysis (TRPCA)
problem, which extends the well-known Robust PCA (Candes et al. 2011) to the
tensor case. Our
model is based on a new tensor Singular Value Decomposition (t-SVD) (Kilmer and
Martin 2011) and its induced tensor tubal rank and tensor nuclear norm.
Consider that we have a 3-way tensor
$\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ such that
$\mathcal{X} = \mathcal{L}_0 + \mathcal{E}_0$,
where $\mathcal{L}_0$ has low tubal rank and $\mathcal{E}_0$ is sparse. Is
it possible to recover both components? In this work, we prove that under
certain suitable assumptions, we can recover both the low-rank and the sparse
components exactly by simply solving a convex program whose objective is a
weighted combination of the tensor nuclear norm and the $\ell_1$-norm, i.e.,
$\min_{\mathcal{L},\,\mathcal{E}} \|\mathcal{L}\|_* + \lambda\|\mathcal{E}\|_1,
\ \text{s.t.}\ \mathcal{X} = \mathcal{L} + \mathcal{E}$,
with $\lambda = 1/\sqrt{\max(n_1, n_2)\, n_3}$. Interestingly, TRPCA involves
RPCA as a special case when $n_3 = 1$ and thus it is a simple and elegant
tensor extension of RPCA.
Also numerical experiments verify our theory and the application to image
denoising demonstrates the effectiveness of our method.
Comment: IEEE International Conference on Computer Vision and Pattern
Recognition (CVPR), 2016
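The matrix special case of this convex program can be solved in a few lines
with ADMM (a minimal sketch, not the authors' code; the tensor version replaces
the singular value thresholding step with its t-SVD analogue, and the step-size
heuristic below is one common choice, not prescribed by the paper):

```python
import numpy as np

def soft_threshold(X, t):
    # prox of the l1-norm: elementwise shrinkage
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    # singular value thresholding: prox of the nuclear norm
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vh

def rpca(X, lam, iters=500):
    mu = X.size / (4.0 * np.abs(X).sum())    # common step-size heuristic
    L = np.zeros_like(X); E = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(iters):
        L = svt(X - E + Y / mu, 1.0 / mu)             # low-rank update
        E = soft_threshold(X - L + Y / mu, lam / mu)  # sparse update
        Y += mu * (X - L - E)                         # dual ascent
    return L, E
```

With the universal weight $\lambda = 1/\sqrt{\max(n_1, n_2)}$ (the $n_3 = 1$
case of the formula above), an incoherent low-rank part and a sparse part are
separated exactly in the regime covered by the recovery guarantee.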
Missing Slice Recovery for Tensors Using a Low-rank Model in Embedded Space
Let us consider the case where all of the elements in some contiguous slices
of tensor data are missing.
In this case, the nuclear-norm and total variation regularization methods
usually fail to recover the missing elements.
The key problem is capturing some delay/shift-invariant structure.
In this study, we consider a low-rank model in an embedded space of a tensor.
For this purpose, we extend a delay embedding for a time series to a
"multi-way delay-embedding transform" for a tensor, which takes a given
incomplete tensor as the input and outputs a higher-order incomplete Hankel
tensor.
The higher-order tensor is then recovered by Tucker-based low-rank tensor
factorization.
Finally, an estimated tensor can be obtained by using the inverse multi-way
delay embedding transform of the recovered higher-order tensor.
Our experiments showed that the proposed method successfully recovered
missing slices for some color images and functional magnetic resonance images.
Comment: accepted for CVPR 2018
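The one-dimensional building block is easy to state (an illustrative sketch):
delay embedding turns a series with shift-invariant structure into a low-rank
Hankel matrix, which is what makes low-rank machinery applicable again after
the transform.

```python
import numpy as np

def delay_embed(x, tau):
    # Hankel (trajectory) matrix H[i, j] = x[i + j]: column j is the
    # length-tau sliding window starting at j
    n = len(x)
    return np.stack([x[i:i + n - tau + 1] for i in range(tau)])

# A sampled sinusoid is spanned by {sin, cos}, so its Hankel matrix has rank 2
x = np.sin(0.5 * np.arange(50))
H = delay_embed(x, 8)              # 8 x 43 trajectory matrix
```

The multi-way transform applies this windowed duplication along each mode of
the tensor before the Tucker fit, and the inverse transform maps the recovered
higher-order tensor back, roughly by averaging the duplicated entries.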
Tensor-based formulation and nuclear norm regularization for multi-energy computed tomography
The development of energy selective, photon counting X-ray detectors allows
for a wide range of new possibilities in the area of computed tomographic image
formation. Under the assumption of perfect energy resolution, here we propose a
tensor-based iterative algorithm that simultaneously reconstructs the X-ray
attenuation distribution for each energy. We use a multi-linear image model
rather than a more standard "stacked vector" representation in order to develop
novel tensor-based regularizers. Specifically, we model the multi-spectral
unknown as a 3-way tensor where the first two dimensions are space and the
third dimension is energy. This approach allows for the design of tensor
nuclear norm regularizers which, like their two-dimensional counterpart, are
convex functions of the multi-spectral unknown. The solution to the resulting
convex optimization problem is obtained using an alternating direction method
of multipliers (ADMM) approach. Simulation results show that the generalized
tensor nuclear norm can be used as a stand-alone regularization technique for
the energy-selective (spectral) computed tomography (CT) problem, and when
combined with total variation regularization it enhances the regularization
capability, especially for low-energy images where the effects of noise are
most prominent.