
    Tensor Principal Component Analysis via Convex Optimization

    This paper is concerned with the computation of the principal components of a general tensor, known as the tensor principal component analysis (PCA) problem. We show that the general tensor PCA problem is reducible to its special case where the tensor in question is super-symmetric with an even degree. In that case, the tensor can be embedded into a symmetric matrix. We prove that if the tensor is rank-one, then the embedded matrix must be rank-one too, and vice versa. The tensor PCA problem can thus be solved by means of matrix optimization under a rank-one constraint, for which we propose two solution methods: (1) imposing a nuclear norm penalty in the objective to enforce a low-rank solution; (2) relaxing the rank-one constraint by semidefinite programming. Interestingly, our experiments show that both methods yield a rank-one solution with high probability, thereby solving the original tensor PCA problem to optimality with high probability. To further cope with the size of the resulting convex optimization models, we propose to use the alternating direction method of multipliers, which significantly reduces the computational effort. Various extensions of the model are considered as well.
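
    As a small illustration of the embedding described above, the following numpy sketch builds a rank-one super-symmetric fourth-order tensor from a single vector (an assumption chosen for the demo) and checks that its n² × n² matrix embedding is rank one as well, matching the paper's stated equivalence.

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
x = rng.standard_normal(n)

# Fourth-order super-symmetric rank-one tensor T = x (x) x (x) x (x) x
T = np.einsum('i,j,k,l->ijkl', x, x, x, x)

# Embed into an n^2 x n^2 symmetric matrix by grouping the index pairs (i,j) and (k,l)
M = T.reshape(n * n, n * n)

# The equivalence: the tensor is rank-one iff the embedded matrix is rank-one
s = np.linalg.svd(M, compute_uv=False)
print("numerical rank of M:", int(np.sum(s > 1e-10 * s[0])))  # expect 1
```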

    Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization

    This paper studies the Tensor Robust Principal Component Analysis (TRPCA) problem, which extends the known Robust PCA (Candes et al. 2011) to the tensor case. Our model is based on a new tensor Singular Value Decomposition (t-SVD) (Kilmer and Martin 2011) and its induced tensor tubal rank and tensor nuclear norm. Consider a 3-way tensor $\mathcal{X}\in\mathbb{R}^{n_1\times n_2\times n_3}$ such that $\mathcal{X}=\mathcal{L}_0+\mathcal{E}_0$, where $\mathcal{L}_0$ has low tubal rank and $\mathcal{E}_0$ is sparse. Is it possible to recover both components? In this work, we prove that under certain suitable assumptions, we can recover both the low-rank and the sparse components exactly by simply solving a convex program whose objective is a weighted combination of the tensor nuclear norm and the $\ell_1$-norm, i.e., $\min_{\mathcal{L},\,\mathcal{E}} \|\mathcal{L}\|_* + \lambda\|\mathcal{E}\|_1, \ \text{s.t.} \ \mathcal{X}=\mathcal{L}+\mathcal{E}$, where $\lambda = 1/\sqrt{\max(n_1,n_2)\,n_3}$. Interestingly, TRPCA includes RPCA as a special case when $n_3=1$, and thus it is a simple and elegant tensor extension of RPCA. Numerical experiments verify our theory, and the application to image denoising demonstrates the effectiveness of our method. Comment: IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2016)
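
    A minimal numpy sketch of the tensor nuclear norm appearing in the objective, following the t-SVD construction (FFT along the tubes, then slice-wise singular values in the Fourier domain); the ADMM solver for the full convex program is omitted.

```python
import numpy as np

def tensor_nuclear_norm(X):
    """Tubal nuclear norm in the t-SVD framework: average over the third mode
    of the sums of singular values of the frontal slices in the Fourier domain."""
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(n3)) / n3

n1, n2, n3 = 10, 12, 4
rng = np.random.default_rng(0)
X = rng.standard_normal((n1, n2, n3))
lam = 1.0 / np.sqrt(max(n1, n2) * n3)   # the weight lambda from the recovery theorem
print(tensor_nuclear_norm(X), lam)
```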

    Non-convex Penalty for Tensor Completion and Robust PCA

    In this paper, we propose a novel non-convex tensor rank surrogate function and a novel non-convex sparsity measure for tensors. The basic idea is to sidestep the bias of the $\ell_1$-norm by introducing concavity. Furthermore, we employ the proposed non-convex penalties in tensor recovery problems such as tensor completion and tensor robust principal component analysis, which have various real applications such as image inpainting and denoising. Due to the concavity, the models are difficult to solve. To tackle this problem, we devise majorization-minimization algorithms, which optimize upper bounds of the original functions in each iteration, with every sub-problem solved by the alternating direction method of multipliers. Finally, experimental results on natural images and hyperspectral images demonstrate the effectiveness and efficiency of the proposed methods.
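
    The debiasing effect of concavity, and the majorization-minimization scheme, are easiest to see in one dimension. The sketch below uses the log penalty as an illustrative stand-in for the paper's surrogate (an assumption, not the paper's exact measure); each MM step linearizes the concave penalty and solves the resulting weighted soft-thresholding sub-problem in closed form.

```python
import numpy as np

def mm_concave_denoise(y, lam=0.5, eps=1e-2, iters=20):
    """Minimize 0.5*(x - y)^2 + lam*log(|x| + eps) entrywise: each MM step
    linearizes the concave log penalty at the current iterate, so the
    sub-problem is weighted soft-thresholding with a closed-form solution."""
    x = y.copy()
    for _ in range(iters):
        w = lam / (np.abs(x) + eps)                     # slope of the majorizer
        x = np.sign(y) * np.maximum(np.abs(y) - w, 0.0)  # weighted soft-threshold
    return x

y = np.array([3.0, 0.3, -2.0, 0.05])
# Large entries are shrunk far less than by plain l1 soft-thresholding,
# while small entries are still set to zero.
print(mm_concave_denoise(y))
```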

    Robust Low-rank Tensor Recovery: Models and Algorithms

    Robust tensor recovery plays an instrumental role in robustifying tensor decompositions for multilinear data analysis against outliers, gross corruptions and missing values, and has a diverse array of applications. In this paper, we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent advances in robust Principal Component Analysis and tensor completion. We propose tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem. These algorithms are based on the highly efficient alternating direction augmented Lagrangian and accelerated proximal gradient methods. We also propose a nonconvex model that can often improve the recovery results over those of the convex models. We investigate the empirical recoverability properties of the convex and nonconvex formulations and compare the computational performance of the algorithms on simulated data. We demonstrate through a number of real applications the practical effectiveness of this convex optimization framework for robust low-rank tensor recovery. Comment: appearing in SIAM Journal on Matrix Analysis and Applications
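
    Both algorithm families mentioned above are driven by two proximal maps: singular value thresholding for the nuclear norms of the mode unfoldings, and entrywise soft-thresholding for the sparse corruption term. A minimal numpy sketch of these building blocks (the surrounding ADAL/APG loop is omitted):

```python
import numpy as np

def svt(M, tau):
    """Prox of tau*||.||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Prox of tau*||.||_1: entrywise soft-thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def unfold(T, mode):
    """Mode-n matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 7, 8))
L_update = svt(unfold(T, 1), 0.5)   # low-rank update on the mode-2 unfolding
E_update = soft(T, 0.5)             # sparse update on the corruption term
```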

    Regularized Tensor Factorizations and Higher-Order Principal Components Analysis

    High-dimensional tensors or multi-way data are becoming prevalent in areas such as biomedical imaging, chemometrics, networking and bibliometrics. Traditional approaches to finding lower-dimensional representations of tensor data include flattening the data and applying matrix factorizations such as principal components analysis (PCA), or employing tensor decompositions such as the CANDECOMP/PARAFAC (CP) and Tucker decompositions. The former can lose important structure in the data, while the latter Higher-Order PCA (HOPCA) methods can be problematic in high dimensions with many irrelevant features. We introduce frameworks for sparse tensor factorizations, or Sparse HOPCA, based on heuristic algorithmic approaches and on solving penalized optimization problems related to the CP decomposition. Extensions of these approaches lead to methods for general regularized tensor factorizations, multi-way Functional HOPCA and generalizations of HOPCA for structured data. We illustrate the utility of our methods for dimension reduction, feature selection, and signal recovery on simulated data, multi-dimensional microarrays and functional MRIs.
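
    One plausible reading of a penalized rank-one CP step is sketched below: alternating power-iteration updates over the three modes, with soft-thresholding on one factor to induce sparsity. The penalty placement and update order are illustrative assumptions, not the paper's exact algorithms.

```python
import numpy as np

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sparse_rank1_cp(T, lam=0.1, iters=50):
    """Alternating rank-one CP updates; the first factor is soft-thresholded
    each sweep to encourage sparsity, then all factors are renormalized."""
    u = np.linalg.svd(T.reshape(T.shape[0], -1), full_matrices=False)[0][:, 0]
    v, w = np.ones(T.shape[1]), np.ones(T.shape[2])
    for _ in range(iters):
        u = soft(np.einsum('ijk,j,k->i', T, v, w), lam)
        u /= np.linalg.norm(u) + 1e-12
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v) + 1e-12
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w) + 1e-12
    return u, v, w

rng = np.random.default_rng(0)
u0 = np.zeros(20); u0[:3] = 1.0                       # sparse ground-truth factor
T = np.einsum('i,j,k->ijk', u0, rng.standard_normal(15), rng.standard_normal(10))
u, v, w = sparse_rank1_cp(T + 0.05 * rng.standard_normal(T.shape))
```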

    Multi-dimensional imaging data recovery via minimizing the partial sum of tubal nuclear norm

    In this paper, we investigate tensor recovery problems within the tensor singular value decomposition (t-SVD) framework. We propose the partial sum of the tubal nuclear norm (PSTNN) of a tensor, a surrogate of the tensor tubal multi-rank. We build two PSTNN-based minimization models for two typical tensor recovery problems, i.e., tensor completion and tensor robust principal component analysis, and give two algorithms based on the alternating direction method of multipliers (ADMM) to solve the proposed PSTNN-based tensor recovery models. Experimental results on synthetic data and real-world data demonstrate the superiority of the proposed PSTNN.
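
    A minimal sketch of the PSTNN itself, assuming it sums all but the N largest singular values of each frontal slice in the Fourier domain, with the same 1/n3 scaling convention as the tubal nuclear norm (both are assumptions about the precise definition).

```python
import numpy as np

def pstnn(X, N=1):
    """Partial sum of the tubal nuclear norm: for each frontal slice in the
    Fourier domain, sum all but the N largest singular values."""
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)
    total = 0.0
    for k in range(n3):
        s = np.linalg.svd(Xf[:, :, k], compute_uv=False)  # sorted descending
        total += s[N:].sum()                              # drop the N largest
    return total / n3                                     # t-SVD scaling convention

rng = np.random.default_rng(0)
print(pstnn(rng.standard_normal((10, 12, 4)), N=2))
```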

    Missing Slice Recovery for Tensors Using a Low-rank Model in Embedded Space

    Let us consider a case where all of the elements in some contiguous slices of tensor data are missing. In this case, nuclear-norm and total variation regularization methods usually fail to recover the missing elements; the key problem is capturing the delay/shift-invariant structure. In this study, we consider a low-rank model in an embedded space of a tensor. For this purpose, we extend the delay embedding for a time series to a "multi-way delay-embedding transform" for a tensor, which takes a given incomplete tensor as the input and outputs a higher-order incomplete Hankel tensor. The higher-order tensor is then recovered by Tucker-based low-rank tensor factorization. Finally, an estimated tensor can be obtained by applying the inverse multi-way delay-embedding transform to the recovered higher-order tensor. Our experiments showed that the proposed method successfully recovered missing slices for some color images and functional magnetic resonance images. Comment: accepted for CVPR 2018
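
    The one-dimensional building block, a classical delay embedding that maps a series to a Hankel matrix, is sketched below; the paper applies this duplication mode-wise to obtain the higher-order Hankel tensor.

```python
import numpy as np

def delay_embed(x, tau):
    """Map a length-n series to a tau x (n - tau + 1) Hankel matrix whose
    columns are the sliding windows of x."""
    n = len(x)
    return np.stack([x[i:i + tau] for i in range(n - tau + 1)], axis=1)

x = np.arange(8.0)
H = delay_embed(x, 3)
print(H)   # entries satisfy H[a, b] = x[a + b]: constant anti-diagonals (Hankel)
```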

    Dual Principal Component Pursuit

    We consider the problem of learning a linear subspace from data corrupted by outliers. Classical approaches are typically designed for the case in which the subspace dimension is small relative to the ambient dimension. Our approach works with a dual representation of the subspace and hence aims to find its orthogonal complement; as such, it is particularly suitable for subspaces whose dimension is close to the ambient dimension (subspaces of high relative dimension). We pose the problem of computing normal vectors to the inlier subspace as a non-convex $\ell_1$ minimization problem on the sphere, which we call the Dual Principal Component Pursuit (DPCP) problem. We provide theoretical guarantees under which every global solution to DPCP is a vector in the orthogonal complement of the inlier subspace. Moreover, we relax the non-convex DPCP problem to a recursion of linear programs whose solutions are shown to converge in a finite number of steps to a vector orthogonal to the subspace. In particular, when the inlier subspace is a hyperplane, the solutions to the recursion of linear programs converge to the global minimum of the non-convex DPCP problem in a finite number of steps. We also propose algorithms based on alternating minimization and iteratively re-weighted least squares, which are suitable for dealing with large-scale data. Experiments on synthetic data show that the proposed methods are able to handle more outliers and higher relative dimensions than current state-of-the-art methods, while experiments in the context of the three-view geometry problem in computer vision suggest that the proposed methods can be a useful or even superior alternative to traditional RANSAC-based approaches for computer vision and other applications. Comment: fixed two typos in section 7.
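
    A minimal numpy sketch of the iteratively re-weighted least squares variant mentioned above: minimize $\|X^T b\|_1$ over the unit sphere by alternating a weighted least-squares step (the smallest eigenvector of a weighted covariance) with weight updates. The initialization and fixed iteration count are simplified assumptions.

```python
import numpy as np

def dpcp_irls(X, iters=50, delta=1e-8):
    """IRLS for min ||X^T b||_1 on the sphere: X is a d x N data matrix;
    returns a candidate normal vector to the inlier subspace."""
    b = np.linalg.svd(X)[0][:, -1]                    # least significant direction
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(X.T @ b), delta)  # re-weighting of the l1 term
        C = (X * w) @ X.T                             # weighted covariance
        b = np.linalg.eigh(C)[1][:, 0]                # its smallest eigenvector
    return b

rng = np.random.default_rng(0)
inliers = rng.standard_normal((3, 100)); inliers[2] = 0.0   # plane z = 0 in R^3
outliers = rng.standard_normal((3, 20))
b = dpcp_irls(np.hstack([inliers, outliers]))
print(abs(b[2]))   # should be close to 1: b recovers the plane normal e_3
```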

    Primal-Dual Optimization Algorithms over Riemannian Manifolds: an Iteration Complexity Analysis

    In this paper we study nonconvex and nonsmooth multi-block optimization over Riemannian manifolds with coupled linear constraints. Such optimization problems naturally arise from machine learning, statistical learning, compressive sensing, image processing, and tensor PCA, among others. We develop an ADMM-like primal-dual approach based on decoupled solvable subroutines such as linearized proximal mappings. First, we introduce the optimality conditions for the aforementioned optimization models, and from them the notion of $\epsilon$-stationary solutions. The main part of the paper shows that the proposed algorithms enjoy an iteration complexity of $O(1/\epsilon^2)$ to reach an $\epsilon$-stationary solution. For prohibitively large tensor or machine learning models, we present a sampling-based stochastic algorithm with the same iteration complexity bound in expectation. In case the subproblems are not analytically solvable, a feasible curvilinear line-search variant of the algorithm based on retraction operators is proposed. Finally, we show specifically how the algorithms can be implemented to solve a variety of practical problems such as the NP-hard maximum bisection problem, the $\ell_q$-regularized sparse tensor principal component analysis and the community detection problem. Our preliminary numerical results show the great potential of the proposed methods.
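
    The retraction machinery that the curvilinear line-search variant relies on can be shown on the simplest manifold, the unit sphere, with a toy objective; the actual multi-block primal-dual updates are omitted, and the step size and iteration count below are illustrative choices.

```python
import numpy as np

def sphere_step(x, grad, alpha):
    """One retraction-based gradient step on the unit sphere: project the
    Euclidean gradient onto the tangent space, step, retract by normalizing."""
    rgrad = grad - (x @ grad) * x          # Riemannian gradient (tangent projection)
    y = x - alpha * rgrad                  # Euclidean step in the tangent direction
    return y / np.linalg.norm(y)           # retraction back onto the manifold

# Toy objective f(x) = x^T A x on the sphere; its minimizer is the
# eigenvector of A with the smallest eigenvalue.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A + A.T
x = rng.standard_normal(5); x /= np.linalg.norm(x)
for _ in range(500):
    x = sphere_step(x, 2 * A @ x, alpha=0.05)
print(x @ A @ x, np.linalg.eigvalsh(A)[0])   # the two values should nearly agree
```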

    Non-Greedy L21-Norm Maximization for Principal Component Analysis

    Principal Component Analysis (PCA) is one of the most important unsupervised methods for handling high-dimensional data. However, due to the high computational complexity of its eigendecomposition-based solution, it is hard to apply PCA to large-scale data with high dimensionality. Meanwhile, the squared L2-norm based objective makes it sensitive to data outliers. In recent research, the L1-norm maximization based PCA method was proposed for efficient computation and robustness to outliers. However, that work used a greedy strategy to solve for the eigenvectors. Moreover, the L1-norm maximization based objective may not be the correct robust PCA formulation, because it loses the theoretical connection to the minimization of data reconstruction error, which is one of the most important intuitions and goals of PCA. In this paper, we propose to maximize the L21-norm based robust PCA objective, which is theoretically connected to the minimization of reconstruction error. More importantly, we propose efficient non-greedy optimization algorithms to solve our objective and the more general L21-norm maximization problem, with theoretically guaranteed convergence. Experimental results on real-world data sets show the effectiveness of the proposed method for principal component analysis.
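
    A sketch of a non-greedy minorize-maximize recipe for this kind of objective, maximizing the sum of $\|W^T x_i\|_2$ over orthonormal W: linearize each norm at the current W, then solve the resulting orthogonal Procrustes problem by a polar decomposition. The warm start and iteration count are assumptions; the paper's exact algorithm may differ in details.

```python
import numpy as np

def l21_pca(X, k, iters=50, eps=1e-12):
    """Non-greedy fixed point for max_{W^T W = I} sum_i ||W^T x_i||_2,
    where X is a d x n centered data matrix; returns an orthonormal d x k W."""
    W = np.linalg.svd(X, full_matrices=False)[0][:, :k]   # warm start with PCA
    for _ in range(iters):
        norms = np.linalg.norm(W.T @ X, axis=0) + eps     # ||W^T x_i||_2
        M = (X / norms) @ (X.T @ W)                       # gradient of the objective
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        W = U @ Vt                                        # polar factor (Procrustes)
    return W

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 200))
W = l21_pca(X - X.mean(axis=1, keepdims=True), k=3)
print(W.T @ W)   # approximately the 3 x 3 identity
```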