9,300 research outputs found

    Simultaneous Tensor Completion and Denoising by Noise Inequality Constrained Convex Optimization

    Tensor completion is a technique for filling in the missing elements of incomplete data tensors. It is actively studied within convex optimization schemes such as nuclear-norm minimization. When the given data tensor contains noise, the nuclear-norm minimization problem is usually converted into a nuclear-norm `regularization' problem that simultaneously minimizes a penalty term and an error term with some trade-off parameter. However, a good value of the trade-off parameter is not easily determined, because the two terms have different units and the choice is data dependent. From the viewpoint of trade-off tuning, formulating noisy tensor completion with a `noise inequality constraint' is a better choice than `regularization', because a good noise threshold can be bounded directly by the noise standard deviation. In this study, we solve convex tensor completion problems with two types of noise inequality constraints: Gaussian and Laplace. The contributions of this study are as follows: (1) new tensor completion and denoising models using tensor total variation and the nuclear norm are proposed, which can be characterized as a generalization/extension of many past matrix and tensor completion models; (2) proximal mappings for the noise inequalities are derived, which are analytically computable with low computational complexity; (3) a convex optimization algorithm is proposed based on the primal-dual splitting framework; (4) a new step-size adaptation method is proposed to accelerate the optimization; and (5) extensive experiments demonstrate the advantages of the proposed method for visual data retrieval, such as for color images, movies, and 3D-volumetric data.
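
    To make contribution (2) concrete: for the Gaussian case, the proximal mapping of the noise inequality constraint is a Euclidean projection onto a Frobenius-norm ball around the observed data, which has a closed form. A minimal NumPy sketch (the function name and interface are ours, not the paper's):

        import numpy as np

        def project_noise_ball(x, t_obs, eps):
            """Project x onto the ball {z : ||z - t_obs||_F <= eps}.

            This is the analytic proximal mapping of a Gaussian noise
            inequality constraint: if x already satisfies the constraint
            it is returned unchanged, otherwise it is pulled radially
            toward the observed tensor t_obs.
            """
            diff = x - t_obs
            norm = np.linalg.norm(diff)  # Frobenius norm of any ndim array
            if norm <= eps:
                return x
            return t_obs + (eps / norm) * diff

    The Laplace case replaces the Frobenius ball with an $\ell_1$ ball, whose projection is likewise computable at low cost.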

    Robust Low-Rank Tensor Ring Completion

    Low-rank tensor completion recovers missing entries based on different tensor decompositions. Owing to its outstanding performance in exploiting higher-order data structure, the low-rank tensor ring has been applied to tensor completion. To further deal with its sensitivity to sparse components, as in tensor principal component analysis, we propose robust tensor ring completion (RTRC), which separates a latent low-rank tensor component from a sparse component using a limited number of measurements. The low-rank tensor component is constrained by the weighted sum of nuclear norms of its balanced unfoldings, while the sparse component is regularized by its $\ell_1$ norm. We analyze the RTRC model and give an exact recovery guarantee. The alternating direction method of multipliers is used to divide the problem into several sub-problems with fast solutions. In numerical experiments, we verify the recovery condition of the proposed method on synthetic data, and show that the proposed method outperforms the state-of-the-art ones in terms of both accuracy and computational complexity on a number of real-world tasks, i.e., light-field image recovery, shadow removal from face images, and background extraction from color video.
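
    The two ADMM sub-problems the abstract alludes to reduce to standard proximal operators: singular value thresholding for the nuclear norms of the balanced unfoldings, and soft thresholding for the $\ell_1$-regularized sparse part. A self-contained NumPy sketch of both (interfaces ours):

        import numpy as np

        def svt(mat, tau):
            """Singular value thresholding: prox of tau * nuclear norm."""
            u, s, vt = np.linalg.svd(mat, full_matrices=False)
            return u @ (np.maximum(s - tau, 0.0)[:, None] * vt)

        def soft(mat, tau):
            """Soft thresholding: prox of tau * l1 norm."""
            return np.sign(mat) * np.maximum(np.abs(mat) - tau, 0.0)

    Inside the ADMM loop these would be applied to the balanced unfoldings of the low-rank variable and to the sparse variable, respectively.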

    Noisy Tensor Completion via the Sum-of-Squares Hierarchy

    In the noisy tensor completion problem we observe $m$ entries (whose locations are chosen uniformly at random) from an unknown $n_1 \times n_2 \times n_3$ tensor $T$. We assume that $T$ is entry-wise close to being rank $r$. Our goal is to fill in its missing entries using as few observations as possible. Let $n = \max(n_1, n_2, n_3)$. We show that if $m = n^{3/2} r$ then there is a polynomial-time algorithm based on the sixth level of the sum-of-squares hierarchy for completing it. Our estimate agrees with almost all of $T$'s entries almost exactly and works even when our observations are corrupted by noise. This is also the first algorithm for tensor completion that works in the overcomplete case when $r > n$, and in fact it works all the way up to $r = n^{3/2-\epsilon}$. Our proofs are short and simple and are based on establishing a new connection between noisy tensor completion (through the language of Rademacher complexity) and the task of refuting random constraint satisfaction problems. This connection seems to have gone unnoticed even in the context of matrix completion. Furthermore, we use this connection to show matching lower bounds. Our main technical result characterizes the Rademacher complexity of the sequence of norms that arise in the sum-of-squares relaxations to the tensor nuclear norm. These results point to an interesting new direction: can we explore computational vs. sample complexity trade-offs through the sum-of-squares hierarchy?
    Comment: 24 pages
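
    For intuition about the sampling regime, the observation model can be simulated directly: draw a rank-$r$ tensor, then reveal $m \approx n^{3/2} r$ noisy entries. A small NumPy sketch under our own simplifying assumptions (sampling with replacement, exact rank $r$; the paper only needs entry-wise closeness to rank $r$):

        import numpy as np

        rng = np.random.default_rng(0)
        n, r = 30, 3
        m = int(n ** 1.5 * r)  # the paper's sample budget, m = n^{3/2} r

        # Exactly rank-r tensor: a sum of r rank-one terms.
        T = sum(np.einsum('i,j,k->ijk', *rng.standard_normal((3, n)))
                for _ in range(r))

        # Reveal m random entries (with replacement, for simplicity),
        # corrupted by additive noise.
        idx = rng.integers(0, n, size=(m, 3))
        obs = T[idx[:, 0], idx[:, 1], idx[:, 2]] + 0.1 * rng.standard_normal(m)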

    Frequency-Weighted Robust Tensor Principal Component Analysis

    Robust tensor principal component analysis (RTPCA) can separate the low-rank component and the sparse component of multidimensional data, and has been used successfully in several image applications. Its performance varies with different kinds of tensor decompositions, and the tensor singular value decomposition (t-SVD) is a popular choice. The standard t-SVD takes the discrete Fourier transform to exploit the residual in the third mode in the decomposition. When minimizing the tensor nuclear norm related to the t-SVD, all the frontal slices in the frequency domain are optimized equally. In this paper, we incorporate frequency component analysis into the t-SVD to enhance RTPCA performance. Specifically, different frequency bands are unequally weighted according to their physical meanings, yielding a frequency-weighted tensor nuclear norm. Accordingly, we rigorously derive the frequency-weighted tensor singular value thresholding operator and apply it to the low-rank approximation subproblem in RTPCA. The resulting frequency-weighted RTPCA can be solved by the alternating direction method of multipliers, and this is the first time frequency analysis has been incorporated into tensor principal component analysis. Numerical experiments on synthetic 3D data, color image denoising, and background modeling verify that the proposed method outperforms state-of-the-art algorithms in both accuracy and computational complexity.
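
    A hedged sketch of what a frequency-weighted tensor singular value thresholding operator can look like in the t-SVD framework: FFT along the third mode, per-slice thresholding with a band-specific weight, inverse FFT. The weight vector here is a free parameter of our sketch; the paper derives its weighting from the physical meaning of the bands.

        import numpy as np

        def weighted_tsvt(x, tau, weights):
            """Frequency-weighted tensor singular value thresholding
            (sketch). `weights` has one entry per frontal slice in the
            frequency domain of the third mode."""
            xf = np.fft.fft(x, axis=2)
            for k in range(x.shape[2]):
                u, s, vt = np.linalg.svd(xf[:, :, k], full_matrices=False)
                s = np.maximum(s - tau * weights[k], 0.0)
                xf[:, :, k] = (u * s) @ vt
            return np.real(np.fft.ifft(xf, axis=2))

    With all weights equal to one, this reduces to the standard t-SVD tensor nuclear-norm proximal operator that treats every frequency slice equally.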

    Tensor-Ring Nuclear Norm Minimization and Application for Visual Data Completion

    Tensor ring (TR) decomposition has been successfully used to obtain state-of-the-art performance in the visual data completion problem. However, the existing TR-based completion methods are severely non-convex and computationally demanding, and determining the optimal TR rank is difficult in practice. To overcome these drawbacks, we first introduce a class of new tensor nuclear norms based on tensor circular unfolding. We then theoretically establish a connection between the ranks of the circularly-unfolded matrices and the TR ranks. We also develop an efficient tensor completion algorithm that minimizes the proposed tensor nuclear norm. Extensive experimental results demonstrate that the proposed tensor completion method outperforms conventional tensor completion methods on the image/video in-painting problem with striped missing values.
    Comment: This paper has been accepted by ICASSP 201
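
    Tensor circular unfolding treats the modes as arranged on a circle: pick a starting mode, take a contiguous block of modes as rows, and flatten the remaining modes into columns. A minimal NumPy sketch (names and interface ours):

        import numpy as np

        def circular_unfold(t, start, span):
            """Circularly shift the modes of `t` so mode `start` comes
            first, then flatten the first `span` modes into rows and
            the remaining modes into columns. The TR ranks bound the
            ranks of these unfolded matrices."""
            d = t.ndim
            order = [(start + i) % d for i in range(d)]
            tp = np.transpose(t, order)
            rows = int(np.prod(tp.shape[:span]))
            return tp.reshape(rows, -1)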

    An Iterative Reweighted Method for Tucker Decomposition of Incomplete Multiway Tensors

    We consider the problem of low-rank decomposition of incomplete multiway tensors. Since many real-world data lie on an intrinsically low-dimensional subspace, tensor low-rank decomposition with missing entries has applications in many data analysis problems, such as recommender systems and image inpainting. In this paper, we focus on the Tucker decomposition, which represents an Nth-order tensor in terms of N factor matrices and a core tensor via multilinear operations. To exploit the underlying multilinear low-rank structure of high-dimensional datasets, we propose a group-based log-sum penalty functional that places structural sparsity over the core tensor, leading to a compact representation with the smallest core tensor. The Tucker decomposition is computed by iteratively minimizing a surrogate function that majorizes the original objective function, which results in an iterative reweighted process. In addition, to reduce the computational complexity, an over-relaxed monotone fast iterative shrinkage-thresholding technique is adapted and embedded in the iterative reweighted process. The proposed method is able to determine the model complexity (i.e., the multilinear rank) automatically. Simulation results show that the proposed algorithm offers competitive performance compared with other existing algorithms.
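
    The iterative reweighting arises because the log-sum penalty is concave in the group energies: each iteration minimizes a quadratic surrogate whose per-group weights come from the previous iterate, so groups with small current energy are penalized more heavily and driven toward zero. A sketch of the two ingredients (interface ours; `eps` is the usual smoothing constant):

        import numpy as np

        def logsum_penalty(groups, eps=1e-8):
            """Group log-sum penalty: sum_g log(||g||_2^2 + eps)."""
            return sum(np.log(np.sum(g ** 2) + eps) for g in groups)

        def reweight(groups, eps=1e-8):
            """Weights of the quadratic majorizer at the current
            iterate: each group is weighted inversely to its energy."""
            return [1.0 / (np.sum(g ** 2) + eps) for g in groups]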

    Tensor Ring Decomposition with Rank Minimization on Latent Space: An Efficient Approach for Tensor Completion

    In tensor completion tasks, traditional low-rank tensor decomposition models suffer from a laborious model selection problem due to their high model sensitivity. In particular, for tensor ring (TR) decomposition, the number of model possibilities grows exponentially with the tensor order, which makes it rather challenging to find the optimal TR decomposition. In this paper, by exploiting the low-rank structure of the TR latent space, we propose a novel tensor completion method that is robust to model selection. Instead of imposing the low-rank constraint on the data space, we introduce nuclear-norm regularization on the latent TR factors, so that the optimization step using singular value decomposition (SVD) is performed at a much smaller scale. By leveraging the alternating direction method of multipliers (ADMM) scheme, the latent TR factors with optimal rank and the recovered tensor can be obtained simultaneously. The proposed algorithm is shown to effectively alleviate the burden of TR-rank selection, thereby greatly reducing the computational cost. Extensive experimental results on both synthetic and real-world data demonstrate the superior performance and efficiency of the proposed approach over state-of-the-art algorithms.
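
    The efficiency gain comes from where the SVDs live: each TR core has shape $(r_{k-1}, n_k, r_k)$, so nuclear-norm shrinkage operates on small factor unfoldings rather than on unfoldings of the full data tensor. A hedged NumPy sketch (we use the mode-1 unfolding of each core; the paper's exact choice of unfolding may differ):

        import numpy as np

        def svt(mat, tau):
            """Singular value thresholding: prox of tau * nuclear norm."""
            u, s, vt = np.linalg.svd(mat, full_matrices=False)
            return u @ (np.maximum(s - tau, 0.0)[:, None] * vt)

        def prox_tr_factors(cores, tau):
            """Nuclear-norm shrinkage on each TR core's unfolding.

            Each core has shape (r0, n, r1), so every SVD here is on a
            small r0 x (n * r1) matrix, not on the full data tensor.
            """
            out = []
            for g in cores:
                r0, n, r1 = g.shape
                out.append(svt(g.reshape(r0, n * r1), tau).reshape(r0, n, r1))
            return out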

    Near-optimal sample complexity for convex tensor completion

    We analyze low-rank tensor completion (TC) using noisy measurements of a subset of the tensor's entries. Assuming a rank-$r$, order-$d$, $N \times N \times \cdots \times N$ tensor where $r = O(1)$, the best sample complexity achieved to date is $O(N^{\frac{d}{2}})$, obtained by solving a tensor nuclear-norm minimization problem. However, this bound is significantly larger than the number of free variables in a low-rank tensor, which is $O(dN)$. In this paper, we show that by using an atomic norm whose atoms are rank-$1$ sign tensors, one can obtain a sample complexity of $O(dN)$. Moreover, we generalize the matrix max-norm definition to tensors, which results in a max-quasi-norm (max-qnorm) whose unit ball has small Rademacher complexity. We prove that solving constrained least squares estimation using either the convex atomic norm or the nonconvex max-qnorm achieves optimal sample complexity for low-rank tensor completion. Furthermore, we show that these bounds are nearly minimax rate-optimal. We also provide promising numerical results for max-qnorm constrained tensor completion, showing improved recovery compared to matricization and alternating least squares.
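
    For reference, the matrix max-norm that the abstract generalizes can be written as a factorization norm (our transcription of the standard definitions, not taken from the paper): $\|M\|_{\max} = \min\{\|U\|_{2,\infty}\,\|V\|_{2,\infty} : M = UV^{\top}\}$, and the tensor max-qnorm replaces the two factors with $d$ of them, $\|T\|_{\max} = \min\{\prod_{j=1}^{d}\|U^{(j)}\|_{2,\infty} : T = \sum_{\ell} U^{(1)}_{:,\ell} \circ \cdots \circ U^{(d)}_{:,\ell}\}$, where $\|U\|_{2,\infty}$ denotes the largest row $\ell_2$ norm. For $d \geq 3$ the triangle inequality fails, which is why it is only a quasi-norm.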

    Optimal Low-Rank Tensor Recovery from Separable Measurements: Four Contractions Suffice

    Tensors play a central role in many modern machine learning and signal processing applications. In such applications, the target tensor is usually of low rank, i.e., it can be expressed as a sum of a small number of rank-one tensors. This motivates the problem of low-rank tensor recovery from a class of linear measurements called separable measurements. As specific examples, we focus on two distinct types of separable measurement mechanisms: (a) random projections, where each measurement corresponds to an inner product of the tensor with a suitable random tensor, and (b) the completion problem, where the measurements reveal a random set of entries. We present a computationally efficient algorithm, with rigorous and order-optimal sample complexity results (up to logarithmic factors) for tensor recovery. Our method is based on a reduction to matrix completion sub-problems and an adaptation of Leurgans' method for tensor decomposition. We extend the methodology and sample complexity results to higher-order tensors, and experimentally validate our theoretical results.
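
    Leurgans' method, which the algorithm builds on, recovers the factors of a low-rank tensor from a few random mode contractions followed by an eigendecomposition. A minimal NumPy sketch in the square, generic case (variable names ours):

        import numpy as np

        rng = np.random.default_rng(1)
        n, r = 8, 3
        A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
        T = np.einsum('il,jl,kl->ijk', A, B, C)   # rank-r tensor

        # Two random contractions along the third mode.
        x, y = rng.standard_normal((2, n))
        Mx = np.einsum('ijk,k->ij', T, x)         # = A diag(C^T x) B^T
        My = np.einsum('ijk,k->ij', T, y)         # = A diag(C^T y) B^T

        # Generically, Mx My^+ has the columns of A as eigenvectors with
        # eigenvalues (C^T x)_l / (C^T y)_l; the r eigenvectors with the
        # largest |eigenvalue| recover A up to column scaling.
        vals, vecs = np.linalg.eig(Mx @ np.linalg.pinv(My))

    The remaining factors follow from analogous contractions, which is consistent with the "four contractions suffice" count in the title.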

    Enhanced nonconvex low-rank approximation of tensor multi-modes for tensor completion

    Higher-order low-rank tensors arise in many data processing applications and have attracted great interest. Inspired by low-rank approximation theory, researchers have proposed a series of effective tensor completion methods. However, most of these methods directly consider only the global low-rankness of the underlying tensor, which is not sufficient at low sampling rates; in addition, a single nuclear norm, or its relaxation, is usually adopted to approximate the rank function, which can lead to a suboptimal solution that deviates from the original one. To alleviate these problems, we propose a novel low-rank approximation of tensor multi-modes (LRATM), in which a double nonconvex $L_{\gamma}$ norm is designed to represent the underlying joint manifold drawn from the modal factorization factors of the underlying tensor. A block successive upper-bound minimization method-based algorithm is designed to efficiently solve the proposed model, and it can be demonstrated that our numerical scheme converges to coordinatewise minimizers. Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors with significantly fewer samples than the compared methods.
    Comment: arXiv admin note: substantial text overlap with arXiv:2004.0874
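
    One common way such nonconvex surrogates of the rank function are used is a reweighted singular value thresholding step, in which larger singular values receive smaller thresholds so that dominant structure is preserved while small values are suppressed. The sketch below uses a generic concave penalty as a stand-in, since we do not reproduce the paper's exact $L_{\gamma}$ definition:

        import numpy as np

        def nonconvex_svt(mat, tau, gamma):
            """One reweighted singular value thresholding step for a
            concave rank surrogate (here log(gamma + s), a stand-in
            for the paper's L_gamma norm). Larger singular values get
            smaller thresholds than under the plain nuclear norm."""
            u, s, vt = np.linalg.svd(mat, full_matrices=False)
            s_new = np.maximum(s - tau / (gamma + s), 0.0)
            return u @ (s_new[:, None] * vt)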