
    Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization

    This paper studies the Tensor Robust Principal Component Analysis (TRPCA) problem, which extends the known Robust PCA (Candes et al. 2011) to the tensor case. Our model is based on a new tensor Singular Value Decomposition (t-SVD) (Kilmer and Martin 2011) and its induced tensor tubal rank and tensor nuclear norm. Consider a 3-way tensor $\mathcal{X}\in\mathbb{R}^{n_1\times n_2\times n_3}$ such that $\mathcal{X}=\mathcal{L}_0+\mathcal{E}_0$, where $\mathcal{L}_0$ has low tubal rank and $\mathcal{E}_0$ is sparse. Is it possible to recover both components? In this work, we prove that under certain suitable assumptions, we can recover both the low-rank and the sparse components exactly by simply solving a convex program whose objective is a weighted combination of the tensor nuclear norm and the $\ell_1$-norm, i.e., $\min_{\mathcal{L},\,\mathcal{E}} \ \|\mathcal{L}\|_* + \lambda\|\mathcal{E}\|_1, \ \text{s.t.} \ \mathcal{X}=\mathcal{L}+\mathcal{E}$, where $\lambda = 1/\sqrt{\max(n_1,n_2)\,n_3}$. Interestingly, TRPCA includes RPCA as a special case when $n_3=1$, and thus it is a simple and elegant tensor extension of RPCA. Numerical experiments verify our theory, and an image denoising application demonstrates the effectiveness of our method.
    Comment: IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2016
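
The tensor nuclear norm and regularization weight used in the convex program above can be sketched in a few lines (a minimal illustration, assuming the DFT-based t-SVD; the function names are ours):

```python
import numpy as np

def tensor_nuclear_norm(X):
    """Tensor nuclear norm induced by the t-SVD: the average, over the n3
    DFT-domain frontal slices, of their matrix nuclear norms."""
    Xf = np.fft.fft(X, axis=2)            # transform along the tubes
    n3 = X.shape[2]
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(n3)) / n3

def trpca_lambda(n1, n2, n3):
    """The paper's weight lambda = 1 / sqrt(max(n1, n2) * n3)."""
    return 1.0 / np.sqrt(max(n1, n2) * n3)
```

For $n_3=1$ the FFT is the identity, so this reduces to the ordinary matrix nuclear norm, mirroring how TRPCA includes RPCA as a special case.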

    Constrained low-tubal-rank tensor recovery for hyperspectral images mixed noise removal by bilateral random projections

    In this paper, we propose a novel low-tubal-rank tensor recovery model, which directly constrains the tubal-rank prior to effectively remove mixed Gaussian and sparse noise in hyperspectral images. The tubal-rank and sparsity constraints govern the solution of the denoised tensor during the recovery procedure. To solve the constrained low-tubal-rank model, we develop an iterative algorithm based on bilateral random projections. The advantage of random projections is that an accurate approximation of the low-tubal-rank tensor can be obtained inexpensively. Experiments on hyperspectral image denoising demonstrate the effectiveness and efficiency of the proposed method.
    Comment: Accepted by IGARSS 201
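
The bilateral-random-projection step can be sketched in the matrix case (a hedged illustration in the spirit of Zhou and Tao's BRP, not the paper's exact tensor algorithm; the function name is ours):

```python
import numpy as np

def brp_approx(X, r, seed=0):
    """Rank-r approximation via bilateral random projections:
    L = Y1 (A2^T Y1)^+ Y2^T, with Y1 = X A1, Y2 = X^T A2, and A2 = Y1."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A1 = rng.standard_normal((n, r))
    Y1 = X @ A1                 # random sketch of the column space
    A2 = Y1                     # reuse the sketch as the second projection
    Y2 = X.T @ A2               # matching sketch of the row space
    return Y1 @ np.linalg.pinv(A2.T @ Y1) @ Y2.T
```

When $X$ has exact rank $r$, the columns of $Y_1$ almost surely span its column space, and the formula recovers $X$ exactly; only two tensor-times-matrix products and a small $r \times r$ inverse are needed, which is the source of the claimed efficiency.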

    Tensor Matched Subspace Detection

    The problem of testing whether a signal lies within a given subspace, also called matched subspace detection, has been well studied when the signal is represented as a vector. However, vector-based matched subspace detection methods cannot be applied when signals are naturally represented as multi-dimensional data arrays, i.e., tensors. Tensor subspaces and orthogonal projections onto them are well defined in the recently proposed transform-based tensor model, which motivates us to investigate matched subspace detection in the high-dimensional case. In this paper, we propose an approach for tensor matched subspace detection based on the transform-based tensor model with tubal sampling and elementwise sampling, respectively. First, we construct estimators based on tubal sampling and elementwise sampling to estimate the energy of a signal outside a given subspace of a third-order tensor, and we give probability bounds showing that our estimators work effectively once the sample size exceeds a constant. Second, we give detectors for both noiseless and noisy data, together with the corresponding detection performance analyses. Finally, using the discrete Fourier transform (DFT) and the discrete cosine transform (DCT), we evaluate our estimators and detectors in several simulations, whose results verify the effectiveness of our approach.
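
The quantity being estimated, the energy of a signal outside a subspace, is simple to state in the fully observed vector case (a minimal sketch; the paper's tubal- and elementwise-sampling estimators generalize this to tensors from partial samples):

```python
import numpy as np

def energy_outside(x, U):
    """||x - P_U x||^2 for a basis matrix U with orthonormal columns:
    the squared norm of the residual after orthogonal projection onto span(U)."""
    resid = x - U @ (U.T @ x)
    return float(resid @ resid)
```

A matched subspace detector then compares this residual energy against a threshold: zero (up to noise) means the signal lies in the subspace.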

    On Deterministic Sampling Patterns for Robust Low-Rank Matrix Completion

    In this letter, we study deterministic sampling patterns for the completion of a low-rank matrix corrupted with sparse noise, also known as robust matrix completion. We extend recent results on deterministic sampling patterns in the noiseless setting, which are based on a geometric analysis on the Grassmannian manifold. We consider the special case where each column has a certain number of noisy entries, for which our probabilistic analysis performs very efficiently. Furthermore, assuming that the rank of the original matrix is not given, we provide an analysis to determine, by verifying some conditions, whether the rank of a valid completion is indeed the actual rank of the data corrupted with sparse noise.
    Comment: Accepted to IEEE Signal Processing Letters
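
One basic ingredient of such sampling-pattern conditions is that every column carry enough observed entries relative to the rank; a toy necessary-type check (illustrative only, not the paper's full Grassmannian condition; the function name is ours):

```python
import numpy as np

def columns_sufficiently_sampled(mask, r):
    """Necessary-type check for rank-r completion: every column of the
    boolean observation mask has strictly more than r observed entries."""
    return bool((mask.sum(axis=0) > r).all())
```

The paper's deterministic conditions are stronger, constraining how observed entries are distributed across columns rather than just counting them.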

    A Fast Algorithm for Cosine Transform Based Tensor Singular Value Decomposition

    Recently, there has been much research into the tensor singular value decomposition (t-SVD) based on the discrete Fourier transform (DFT) matrix. The main aim of this paper is to propose and study a tensor singular value decomposition based on the discrete cosine transform (DCT) matrix. The advantages of using the DCT are that (i) no complex arithmetic is involved in the cosine transform based tensor singular value decomposition, which saves computational cost; and (ii) the intrinsic reflexive boundary condition along the tubes in the third dimension of the tensor is employed, so its performance can be better than that of the periodic boundary condition implied by the DFT. We show that the tensor product of two tensors under the DCT is equivalent to the multiplication of a block Toeplitz-plus-Hankel matrix with a block vector. Numerical examples of low-rank tensor completion further illustrate that using the DCT is about two times faster than using the DFT, and that the errors of video and multispectral image completion with the DCT are smaller than those with the DFT.
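
The DCT-based tensor product can be sketched by transforming along the third mode, multiplying frontal slices, and inverting the transform (a sketch assuming an orthonormal DCT-II; the function names are ours):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, built explicitly so the sketch is
    self-contained; a library DCT with norm='ortho' would also work."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k + 1) * j / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def t_product_dct(A, B):
    """Cosine-transform tensor product: slice-wise matrix products
    in the DCT domain, entirely in real arithmetic."""
    n3 = A.shape[2]
    M = dct_matrix(n3)
    Ah = np.einsum("ij,abj->abi", M, A)      # DCT along the tubes
    Bh = np.einsum("ij,abj->abi", M, B)
    Ch = np.einsum("abk,bck->ack", Ah, Bh)   # frontal-slice matmuls
    return np.einsum("ij,abj->abi", M.T, Ch) # inverse (M is orthonormal)
```

Because everything stays real, each slice multiplication avoids the complex arithmetic of the DFT-domain product, which is the source of the speedup reported above.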

    Tensor p-shrinkage nuclear norm for low-rank tensor completion

    In this paper, a new definition of the tensor p-shrinkage nuclear norm (p-TNN) is proposed based on the tensor singular value decomposition (t-SVD). In particular, we prove that p-TNN is a better approximation of the tensor average rank than the tensor nuclear norm when p < 1. Therefore, by employing the p-shrinkage nuclear norm, a novel low-rank tensor completion (LRTC) model is proposed to estimate a tensor from its partial observations. Statistically, an upper bound on the recovery error is provided for the LRTC model. Furthermore, an efficient algorithm, accelerated by an adaptive momentum scheme, is developed to solve the resulting nonconvex optimization problem, and the algorithm is guaranteed a global convergence rate under a smoothness assumption. Numerical experiments on both synthetic and real-world data sets verify our results and demonstrate the superiority of p-TNN over several state-of-the-art methods for LRTC problems.
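
The proximal step behind such a norm is a p-shrinkage of singular values (a hedged sketch of a Chartrand-style operator; the paper's exact parameterization may differ):

```python
import numpy as np

def p_shrink(s, lam, p):
    """Chartrand-style p-shrinkage applied to nonnegative singular values;
    at p = 1 it reduces to ordinary soft-thresholding max(s - lam, 0)."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    pos = s > 0  # guard: 0**(p-1) diverges for p < 1
    out[pos] = np.maximum(0.0, s[pos] - lam ** (2 - p) * s[pos] ** (p - 1))
    return out
```

For p < 1 the operator penalizes small singular values more and large ones less than soft-thresholding, which is why p-TNN tracks the average rank more closely than the tensor nuclear norm.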

    Enhanced nonconvex low-rank approximation of tensor multi-modes for tensor completion

    Higher-order low-rank tensors arise in many data processing applications and have attracted great interest. Inspired by low-rank approximation theory, researchers have proposed a series of effective tensor completion methods. However, most of these methods consider only the global low-rankness of the underlying tensor, which is not sufficient at low sampling rates; in addition, a single nuclear norm or its relaxation is usually adopted to approximate the rank function, which can lead to suboptimal solutions that deviate from the original tensor. To alleviate these problems, we propose a novel low-rank approximation of tensor multi-modes (LRATM), in which a double nonconvex $L_{\gamma}$ norm is designed to represent the underlying joint manifold drawn from the modal factorization factors of the underlying tensor. An algorithm based on the block successive upper-bound minimization method is designed to solve the proposed model efficiently, and we show that our numerical scheme converges to coordinatewise minimizers. Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors with significantly fewer samples than the compared methods.
    Comment: arXiv admin note: substantial text overlap with arXiv:2004.0874
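
Multi-mode models of this kind operate on the mode-k unfoldings of the tensor; a minimal helper pair for the standard construction (function names are ours):

```python
import numpy as np

def unfold(X, mode):
    """Mode-k unfolding: the mode-k fibers of X become columns of a matrix."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given original shape."""
    full = list(shape)
    full.insert(0, full.pop(mode))
    return np.moveaxis(M.reshape(full), 0, mode)
```

Mode-wise low-rank penalties such as the one above are then imposed on each `unfold(X, k)` separately, rather than on a single global unfolding.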

    Bayesian Robust Tensor Ring Model for Incomplete Multiway Data

    Robust tensor completion (RTC) aims to recover a low-rank tensor from its incomplete observation with outlier corruption. The recently proposed tensor ring (TR) model has demonstrated superiority in solving the RTC problem. However, existing methods either require a pre-assigned TR rank or aggressively pursue the minimum TR rank, often leading to biased solutions in the presence of noise. In this paper, a Bayesian robust tensor ring decomposition (BRTR) method is proposed to give more accurate solutions to the RTC problem while avoiding a delicate selection of the TR rank and penalty parameters. A variational Bayesian (VB) algorithm is developed to infer the posterior distributions. During learning, BRTR prunes core-tensor slices with negligible components, yielding automatic TR rank detection. Extensive experiments show that BRTR achieves significantly better performance than other state-of-the-art methods.
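
A tensor-ring decomposition represents each entry as T[i1, ..., id] = trace(G1[:, i1, :] ... Gd[:, id, :]); contracting the cores back to the full tensor can be sketched as (the function name is ours):

```python
import numpy as np

def tr_to_full(cores):
    """Contract tensor-ring cores G_k of shape (r_k, n_k, r_{k+1}),
    with r_{d+1} = r_1, into the full tensor."""
    out = cores[0]
    for G in cores[1:]:
        # chain the bond dimensions: (r1, ..., s) x (s, n, t) -> (r1, ..., n, t)
        out = np.einsum("r...s,sjt->r...jt", out, G)
    # close the ring: trace over the first and last bond dimensions
    return np.trace(out, axis1=0, axis2=out.ndim - 1)
```

Pruning a slice of some core (reducing one r_k), as BRTR does when a component becomes negligible, directly shrinks the TR rank of the represented tensor.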

    Tensor Low Rank Modeling and Its Applications in Signal Processing

    Modeling a multidimensional signal as a tensor is more faithful than representing it as a collection of matrices. Tensor-based approaches can exploit the abundant spatial and temporal structure of the multidimensional signal. The backbone of this modeling is the mathematical foundation of tensor algebra. Linear-transform-based tensor algebra furnishes low-complexity, high-performance algebraic structures suitable for analyzing multidimensional signals. We provide a comprehensive introduction to linear-transform-based tensor algebra from the signal processing viewpoint. The rank of a multidimensional signal is a valuable property that gives insight into its structural aspects. Natural multidimensional signals can typically be approximated by a low-rank signal without losing significant information. Low-rank approximation is beneficial in many signal processing applications, such as denoising, missing-sample estimation, resolution enhancement, classification, background estimation, object detection, deweathering, and clustering. Detailed case studies of low-rank modeling in these signal processing applications are also presented.
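
The elementary low-rank approximation step behind these applications is a truncated SVD (a minimal matrix-case sketch; tensor variants apply it in a transform domain):

```python
import numpy as np

def lowrank_approx(Y, r):
    """Best rank-r approximation of Y in the Frobenius norm (Eckart-Young):
    keep the r largest singular values and their singular vectors."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

In denoising, for instance, Y is a noisy observation and the discarded small singular values carry mostly noise energy.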

    Optimal Low-Rank Tensor Recovery from Separable Measurements: Four Contractions Suffice

    Tensors play a central role in many modern machine learning and signal processing applications. In such applications, the target tensor is usually of low rank, i.e., it can be expressed as a sum of a small number of rank-one tensors. This motivates us to consider the problem of low-rank tensor recovery from a class of linear measurements called separable measurements. As specific examples, we focus on two distinct separable measurement mechanisms: (a) random projections, where each measurement is an inner product of the tensor with a suitable random tensor, and (b) the completion problem, where the measurements reveal a random set of entries. We present a computationally efficient algorithm with rigorous and order-optimal sample complexity results (up to logarithmic factors) for tensor recovery. Our method is based on a reduction to matrix completion sub-problems and an adaptation of Leurgans' method for tensor decomposition. We extend the methodology and sample complexity results to higher-order tensors and experimentally validate our theoretical results.
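
The simplest separable measurement of a third-order tensor is an inner product with a rank-one tensor, i.e., an outer product of three vectors (a sketch; the function name is ours, and the paper's random projections use suitably structured random tensors):

```python
import numpy as np

def separable_measurement(T, u, v, w):
    """<T, u x v x w>: inner product of T with the rank-one tensor
    formed by the outer product of u, v, and w."""
    return float(np.einsum("ijk,i,j,k->", T, u, v, w))
```

Entry revelation in the completion problem is the special case where u, v, w are standard basis vectors.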