4 research outputs found

    A New Low-Rank Tensor Model for Video Completion

    In this paper, we propose a new low-rank tensor model based on the circulant algebra, namely the twist tensor nuclear norm, or t-TNN for short. The twist tensor is a 3-way tensor representation that laterally stores 2D data slices in order. On one hand, t-TNN convexly relaxes the tensor multi-rank of the twist tensor in the Fourier domain, which allows efficient computation using the FFT. On the other hand, t-TNN equals the nuclear norm of the block circulant matricization of the twist tensor in the original domain, which extends the traditional matrix nuclear norm in a block circulant way. We test the t-TNN model on a video completion application that aims to fill in missing values, and the experimental results validate its effectiveness, especially for video recorded by a non-stationary panning camera. The block circulant matricization of the twist tensor can be transformed into a circulant block representation with nuclear norm invariance. This representation exploits the horizontal translation relationship between frames in a video and endows the t-TNN model with a stronger ability to reconstruct panning videos than existing state-of-the-art low-rank models. Comment: 8 pages, 11 figures, 1 table
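
    The Fourier-domain relaxation above admits a compact numerical illustration. The sketch below is a rough reading of the idea rather than the authors' implementation: it permutes a video so that frames become lateral slices, applies an FFT along the third mode, and sums the nuclear norms of the resulting frontal slices. The function name, the permutation convention, and the omitted normalization constant are assumptions.

        import numpy as np

        def twist_tnn(video):
            """Minimal sketch of a twist tensor nuclear norm (t-TNN).

            `video` is a (height, width, n_frames) array. The twist tensor
            stores frames as lateral slices, which here is just an axis
            permutation; the exact convention is an assumption.
            """
            twist = np.transpose(video, (0, 2, 1))   # frames become lateral slices
            fourier = np.fft.fft(twist, axis=2)      # FFT along the third mode
            # Sum of matrix nuclear norms of the frontal slices in the Fourier
            # domain; any 1/n3-style normalization is omitted in this sketch.
            return sum(np.linalg.norm(fourier[:, :, k], ord='nuc')
                       for k in range(fourier.shape[2]))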

    Non-convex Penalty for Tensor Completion and Robust PCA

    In this paper, we propose a novel non-convex tensor rank surrogate function and a novel non-convex sparsity measure for tensors. The basic idea is to sidestep the bias of the $\ell_1$-norm by introducing concavity. Furthermore, we employ the proposed non-convex penalties in tensor recovery problems such as tensor completion and tensor robust principal component analysis, which have various real applications such as image inpainting and denoising. Due to the concavity, the resulting models are difficult to solve. To tackle this problem, we devise majorization-minimization algorithms, which optimize upper bounds of the original objectives in each iteration, and every sub-problem is solved by the alternating direction method of multipliers (ADMM). Finally, experimental results on natural images and hyperspectral images demonstrate the effectiveness and efficiency of the proposed methods.
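
    Since majorization-minimization is central to solving the concave models, a small sketch of one such step may help. It uses a log-sum penalty on the singular values as a stand-in for the non-convex rank surrogate; the penalty choice, the threshold `tau`, and the smoothing constant `eps` are illustrative assumptions, not the paper's specific surrogate.

        import numpy as np

        def mm_weighted_svt(Y, tau=1.0, eps=1e-2):
            """One majorization-minimization step for a concave spectral penalty.

            The concave penalty sum(log(sigma_i + eps)) is majorized by its
            tangent at the current singular values, which turns the update
            into a weighted singular value thresholding.
            """
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            weights = 1.0 / (s + eps)                    # slope of log(sigma + eps)
            s_new = np.maximum(s - tau * weights, 0.0)   # weighted soft-thresholding
            return U @ np.diag(s_new) @ Vt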

    Spatio-Temporal Tensor Sketching via Adaptive Sampling

    Mining massive spatio-temporal data can help a variety of real-world applications such as city capacity planning, event management, and social network analysis. The tensor representation can be used to capture the correlation between space and time and simultaneously exploit the latent structure of the spatial and temporal patterns in an unsupervised fashion. However, the increasing volume of spatio-temporal data has made it prohibitively expensive to store and analyze using tensor factorization. In this paper, we propose SkeTenSmooth, a novel tensor factorization framework that uses adaptive sampling to compress the tensor in a temporally streaming fashion while preserving the underlying global structure. SkeTenSmooth adaptively samples incoming tensor slices according to the detected data dynamics, so the sketches are more representative and informative of the tensor's dynamic patterns. In addition, we propose a robust tensor factorization method that can operate on the sketched tensor and recover the original patterns. Experiments on the New York City Yellow Taxi data show that SkeTenSmooth greatly reduces the memory cost and outperforms random sampling and fixed-rate sampling in terms of retaining the underlying patterns.
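
    To make the adaptive-sampling idea concrete, here is a minimal sketch of a slice sampler that raises its sampling probability when an incoming slice deviates from a running mean of recent slices. The prediction model, parameter names, and thresholds are assumptions for illustration, not the SkeTenSmooth algorithm itself.

        import numpy as np

        def adaptive_slice_sampler(slices, base_rate=0.2, sensitivity=5.0, seed=0):
            """Sample a stream of tensor slices more densely during dynamic periods.

            Keeps an exponential running mean of past slices and boosts the
            sampling probability in proportion to the relative deviation of
            the incoming slice from that mean.
            """
            rng = np.random.default_rng(seed)
            running_mean, kept = None, []
            for t, x in enumerate(slices):
                if running_mean is None:
                    prob = 1.0                          # always keep the first slice
                else:
                    err = np.linalg.norm(x - running_mean)
                    err /= np.linalg.norm(running_mean) + 1e-12
                    prob = min(1.0, base_rate * (1.0 + sensitivity * err))
                if rng.random() < prob:
                    kept.append((t, x))
                running_mean = x if running_mean is None else 0.9 * running_mean + 0.1 * x
            return kept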

    Learning tensors from partial binary measurements

    In this paper we generalize the 1-bit matrix completion problem to higher-order tensors. We prove that when $r = O(1)$, a bounded rank-$r$, order-$d$ tensor $T$ in $\mathbb{R}^{N} \times \mathbb{R}^{N} \times \cdots \times \mathbb{R}^{N}$ can be estimated efficiently from only $m = O(Nd)$ binary measurements by regularizing its max-qnorm and M-norm as surrogates for its rank. We prove that, similar to the matrix case (i.e., when $d = 2$), the sample complexity of recovering a low-rank tensor from 1-bit measurements of a subset of its entries is the same as recovering it from unquantized measurements. Moreover, we show the advantage of using 1-bit tensor completion over matricization both theoretically and numerically. Specifically, we show how the 1-bit measurement model can be used for context-aware recommender systems. Comment: 26 pages
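
    The 1-bit measurement model itself is easy to state in code. The sketch below generates binary observations of a tensor through a logistic link and evaluates the corresponding negative log-likelihood; in the paper's setting this likelihood would be minimized over a max-qnorm or M-norm ball, a constraint omitted here. The logistic link and the function names are assumptions.

        import numpy as np

        def one_bit_observations(T, omega, seed=0):
            """Draw Y_i = +1 with probability f(T_i) on the observed index set
            `omega` (a tuple of index arrays), with f the logistic link."""
            rng = np.random.default_rng(seed)
            probs = 1.0 / (1.0 + np.exp(-T[omega]))
            return np.where(rng.random(probs.shape) < probs, 1.0, -1.0)

        def negative_log_likelihood(X, Y, omega):
            """Logistic negative log-likelihood of a candidate tensor X given
            the binary observations Y on the index set `omega`."""
            z = Y * X[omega]
            return np.sum(np.logaddexp(0.0, -z))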