
    Kernelized Low Rank Representation on Grassmann Manifolds

    Low rank representation (LRR) has recently attracted great interest due to its pleasing efficacy in exploring low-dimensional subspace structures embedded in data. One of its successful applications is subspace clustering, in which data are clustered according to the subspaces they belong to. In this paper, at a higher level, we intend to cluster subspaces into classes of subspaces. This is naturally described as a clustering problem on the Grassmann manifold. The novelty of this paper is to generalize LRR on Euclidean space to an LRR model on the Grassmann manifold within a uniform kernelized framework. The new methods have many applications in computer vision tasks. Several clustering experiments are conducted on handwritten digit images, dynamic textures, human face clips and traffic scene sequences. The experimental results show that the proposed methods outperform a number of state-of-the-art subspace clustering methods. Comment: 13 pages.
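
    A minimal sketch of the geometric ingredient such methods build on, assuming the common projection-kernel embedding for Grassmann points (the paper's exact kernel construction and LRR solver are not reproduced here): each subspace is represented by an orthonormal basis U, and k(U_i, U_j) = ||U_i^T U_j||_F^2 gives a kernel on which kernelized methods can operate.

        import numpy as np

        def grassmann_point(data, p):
            # Represent the columns of `data` by an orthonormal basis of their
            # leading p-dimensional subspace, i.e. a point on the Grassmannian.
            u, _, _ = np.linalg.svd(data, full_matrices=False)
            return u[:, :p]

        def projection_kernel(bases):
            # K[i, j] = ||U_i^T U_j||_F^2, the projection kernel between subspaces.
            n = len(bases)
            K = np.empty((n, n))
            for i in range(n):
                for j in range(n):
                    K[i, j] = np.linalg.norm(bases[i].T @ bases[j], 'fro') ** 2
            return K

        # Two clusters of 3-dimensional subspaces of R^20, each perturbed
        # around a common subspace (a synthetic stand-in for, e.g., image sets).
        rng = np.random.default_rng(0)
        base_a, base_b = rng.normal(size=(20, 3)), rng.normal(size=(20, 3))
        points = [grassmann_point(base_a + 0.2 * rng.normal(size=(20, 3)), 3) for _ in range(5)]
        points += [grassmann_point(base_b + 0.2 * rng.normal(size=(20, 3)), 3) for _ in range(5)]
        K = projection_kernel(points)
        print(K[:5, :5].mean(), K[:5, 5:].mean())  # within-cluster entries dominate

    Spectral clustering on K (or on the representation produced by a kernelized LRR model) then recovers the classes of subspaces.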

    Compressive Principal Component Pursuit

    We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components from a small set of linear measurements. This problem arises in compressed sensing of structured high-dimensional signals such as videos and hyperspectral images, as well as in the analysis of transformation-invariant low-rank recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that the measurements are chosen uniformly at random. We prove that this heuristic exactly recovers the low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom of the component signals by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing and decomposing superpositions of multiple structured signals. Comment: 30 pages, 1 figure, preliminary version submitted to ISIT'12.
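
    The natural convex heuristic referenced above is principal component pursuit: minimize ||L||_* + lambda*||S||_1 subject to consistency with the measurements. Below is a minimal sketch for the fully observed special case, using the standard augmented-Lagrangian iteration; the default parameters are common choices, not values taken from the paper.

        import numpy as np

        def shrink(x, tau):
            # Soft-thresholding: the proximal operator of tau * ||.||_1.
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def svt(X, tau):
            # Singular value thresholding: the proximal operator of tau * ||.||_*.
            u, s, vt = np.linalg.svd(X, full_matrices=False)
            return (u * shrink(s, tau)) @ vt

        def pcp(M, lam=None, iters=500, tol=1e-7):
            # min ||L||_* + lam * ||S||_1  s.t.  L + S = M  (all entries observed).
            m, n = M.shape
            lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
            mu = 0.25 * m * n / np.abs(M).sum()      # common default step weight
            L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
            for _ in range(iters):
                L = svt(M - S + Y / mu, 1.0 / mu)
                S = shrink(M - L + Y / mu, lam / mu)
                R = M - L - S
                Y += mu * R
                if np.linalg.norm(R) <= tol * np.linalg.norm(M):
                    break
            return L, S

        rng = np.random.default_rng(1)
        L0 = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 60))             # rank 5
        S0 = 10.0 * rng.normal(size=(60, 60)) * (rng.random((60, 60)) < 0.05)
        L, S = pcp(L0 + S0)
        print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # small recovery error

    The compressive setting replaces the equality constraint with agreement on a random set of linear measurements; the proximal steps stay the same.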

    Kernelized LRR on Grassmann Manifolds for Subspace Clustering

    Low rank representation (LRR) has recently attracted great interest due to its pleasing efficacy in exploring low-dimensional subspace structures embedded in data. One of its successful applications is subspace clustering, in which data are clustered according to the subspaces they belong to. In this paper, at a higher level, we intend to cluster subspaces into classes of subspaces. This is naturally described as a clustering problem on the Grassmann manifold. The novelty of this paper is to generalize LRR on Euclidean space to an LRR model on the Grassmann manifold within a uniform kernelized LRR framework. The new method has many applications in data analysis and computer vision tasks. The proposed models have been evaluated on a number of practical data analysis applications. The experimental results show that the proposed models outperform a number of state-of-the-art subspace clustering methods.
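
    One concrete way to see what a kernelized LRR computes, assuming the Frobenius-norm error model for which LRR admits a known closed-form solution (rewritten here so it needs only the Gram/kernel matrix K; the paper's actual model and solver may differ):

        import numpy as np

        def kernel_lrr(K, lam):
            # Closed form of  min_Z ||Z||_* + (lam/2) ||Phi(X) - Phi(X) Z||_F^2,
            # using only K = Phi(X)^T Phi(X): keep eigen-directions with
            # eigenvalue w > 1/lam and shrink each by a factor 1 - 1/(lam * w).
            w, V = np.linalg.eigh(K)
            keep = w > 1.0 / lam
            V1, w1 = V[:, keep], w[keep]
            return (V1 * (1.0 - 1.0 / (lam * w1))) @ V1.T

        # Toy Gram matrix for two groups of points drawn from two subspaces;
        # in the Grassmann setting K would come from a kernel between
        # subspaces, such as the projection kernel sketched above.
        rng = np.random.default_rng(2)
        X = np.hstack([rng.normal(size=(40, 6)) @ rng.normal(size=(6, 15)),
                       rng.normal(size=(40, 6)) @ rng.normal(size=(6, 15))])
        K = X.T @ X
        Z = kernel_lrr(K, lam=0.01)
        W = np.abs(Z) + np.abs(Z).T     # affinity matrix for spectral clustering
        print(W[:15, :15].mean(), W[:15, 15:].mean())  # within-group affinity dominates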

    Visual Tracking via Dynamic Graph Learning

    Existing visual tracking methods usually localize a target object with a bounding box, in which case the performance of the foreground object trackers or detectors is often affected by the inclusion of background clutter. To handle this problem, we learn a patch-based graph representation for visual tracking. The tracked object is modeled with a graph that takes a set of non-overlapping image patches as nodes, in which the weight of each node indicates how likely it is to belong to the foreground, and edges are weighted to indicate the appearance compatibility of two neighboring nodes. This graph is dynamically learned and applied to object tracking and model updating. During the tracking process, the proposed algorithm performs three main steps in each frame. First, the graph is initialized by assigning binary weights to some image patches, marking them as object or background patches according to the predicted bounding box. Second, the graph is optimized to refine the patch weights using a novel alternating direction method of multipliers. Third, the object feature representation is updated by imposing the patch weights on the extracted image features. The object location is predicted by maximizing the classification score of a structured support vector machine. Extensive experiments show that the proposed tracking algorithm performs well against state-of-the-art methods on large-scale benchmark datasets. Comment: Submitted to TPAMI 2017.
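
    As a rough illustration of the second step, the sketch below uses a deliberately simplified stand-in for the weight refinement: Laplacian-regularized least squares, which propagates the binary seed weights through appearance-compatible edges and admits a closed-form solve. The paper's actual objective is different and is optimized with a novel ADMM; everything named here is illustrative.

        import numpy as np

        def refine_patch_weights(A, seeds, alpha=1.0):
            # Minimize  sum_ij A[i,j] (w_i - w_j)^2 + alpha * sum_{i in seeds} (w_i - y_i)^2.
            # The minimizer of this Laplacian-regularized least squares is linear in y.
            n = A.shape[0]
            L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
            M = np.zeros((n, n)); y = np.zeros(n)
            for i, label in seeds.items():
                M[i, i] = alpha
                y[i] = alpha * label
            return np.linalg.solve(2 * L + M, y)

        # Toy "strip" of 8 patches; neighboring patches are appearance-compatible.
        A = np.zeros((8, 8))
        for i in range(7):
            A[i, i + 1] = A[i + 1, i] = 1.0
        # Patches 0-1 seeded as background (0), patches 4-5 as foreground (1).
        w = refine_patch_weights(A, {0: 0.0, 1: 0.0, 4: 1.0, 5: 1.0})
        print(np.round(w, 2))  # weights rise smoothly from background to foreground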

    Value function approximation via low-rank models

    We propose a novel value function approximation technique for Markov decision processes. We consider the problem of compactly representing the state-action value function using a low-rank plus sparse matrix model. The problem is to decompose a matrix that encodes the true value function into low-rank and sparse components, which we achieve using Robust Principal Component Analysis (Robust PCA). Under minimal assumptions, this Robust PCA problem can be solved exactly via the Principal Component Pursuit convex optimization problem. We test the procedure on several examples and demonstrate that our method yields approximations essentially identical to the true value function. Comment: arXiv admin note: substantial text overlap with arXiv:0912.3599 by other authors.
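
    Concretely, the recipe is: arrange Q(s, a) as a |S| x |A| matrix, run principal component pursuit on it, and keep the low-rank component as the compact approximation. A sketch on a made-up Q-matrix (the MDP below is illustrative, not one of the paper's domains):

        import numpy as np

        def pcp(M, lam, iters=300):
            # Compact principal component pursuit via the augmented Lagrangian.
            shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
            mu = 0.25 * M.size / np.abs(M).sum()
            L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
            for _ in range(iters):
                U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = (U * shrink(s, 1.0 / mu)) @ Vt
                S = shrink(M - L + Y / mu, lam / mu)
                Y += mu * (M - L - S)
            return L, S

        # Hypothetical 50-state, 8-action MDP: the true Q-matrix is low-rank
        # (state features times action profiles) plus a few sparse "spikes".
        rng = np.random.default_rng(3)
        Q_low = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 8))
        spikes = np.zeros((50, 8))
        spikes[rng.integers(0, 50, 10), rng.integers(0, 8, 10)] = 5.0
        L, S = pcp(Q_low + spikes, lam=1.0 / np.sqrt(50))
        print(np.linalg.norm(L - Q_low) / np.linalg.norm(Q_low))  # low-rank part recovered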

    Alternating proximal gradient method for sparse nonnegative Tucker decomposition

    Multi-way data arise in many applications such as electroencephalography (EEG) classification, face recognition, text mining and hyperspectral data analysis. Tensor decomposition has been commonly used to find the hidden factors and elicit the intrinsic structures of multi-way data. This paper considers sparse nonnegative Tucker decomposition (NTD), which decomposes a given tensor into the product of a core tensor and several factor matrices under sparsity and nonnegativity constraints. An alternating proximal gradient (APG) method is applied to solve the problem. The algorithm is then modified to handle sparse NTD with missing values. The per-iteration cost of the algorithm is shown to scale with the data size, and global convergence is established under fairly loose conditions. Numerical experiments on both synthetic and real-world data demonstrate its superiority over a few state-of-the-art methods for (sparse) NTD from partial and/or full observations. The MATLAB code, along with demos, is accessible from the author's homepage.
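
    The flavor of the block updates is easiest to see in the matrix special case: each factor takes a gradient step on the smooth fitting term with a Lipschitz step size, followed by the proximal map of its constraint (projection onto the nonnegative orthant, with soft-thresholding for the sparse factor). A matrix-factorization sketch of that scheme, with the extrapolation step omitted for brevity; the paper's algorithm applies analogous updates to the core tensor and factor matrices through tensor unfoldings.

        import numpy as np

        def prox_sparse_nonneg(X, tau):
            # Proximal operator of tau * ||X||_1 + indicator(X >= 0):
            # soft-threshold, then clip at zero.
            return np.maximum(X - tau, 0.0)

        def sparse_nmf_apg(M, r, mu=0.1, iters=500, seed=0):
            # Alternating proximal gradient for
            #   min_{A,B >= 0}  0.5 * ||M - A B||_F^2 + mu * ||B||_1,
            # the matrix analogue of one sparse-NTD block update.
            rng = np.random.default_rng(seed)
            A = rng.random((M.shape[0], r)); B = rng.random((r, M.shape[1]))
            for _ in range(iters):
                La = np.linalg.norm(B @ B.T, 2)               # Lipschitz constant
                A = np.maximum(A - (A @ B - M) @ B.T / La, 0.0)
                Lb = np.linalg.norm(A.T @ A, 2)
                B = prox_sparse_nonneg(B - A.T @ (A @ B - M) / Lb, mu / Lb)
            return A, B

        rng = np.random.default_rng(4)
        M = rng.random((30, 4)) @ (rng.random((4, 40)) * (rng.random((4, 40)) < 0.3))
        A, B = sparse_nmf_apg(M, r=4)
        print(np.linalg.norm(M - A @ B) / np.linalg.norm(M), (B < 1e-8).mean())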

    Decomposition into Low-rank plus Additive Matrices for Background/Foreground Separation: A Review for a Comparative Evaluation with a Large-Scale Dataset

    Recent research on problem formulations based on decomposition into low-rank plus sparse matrices has shown that they provide a suitable framework for separating moving objects from the background. The most representative formulation is Robust Principal Component Analysis (RPCA) solved via Principal Component Pursuit (PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix. However, similar robust implicit or explicit decompositions can be made in the following problem formulations: Robust Non-negative Matrix Factorization (RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal of these related formulations is to obtain, explicitly or implicitly, a decomposition into a low-rank matrix plus additive matrices. In this context, this work aims to initiate a rigorous and comprehensive review of such problem formulations in robust subspace learning and tracking based on decomposition into low-rank plus additive matrices, for testing and ranking existing algorithms for background/foreground separation. To this end, we first provide a preliminary review of recent developments in the different problem formulations, which allows us to define a unified view that we call Decomposition into Low-rank plus Additive Matrices (DLAM). Then, we examine carefully each method in each robust subspace learning/tracking framework, with its decomposition, loss function, optimization problem and solvers. Furthermore, we investigate whether incremental algorithms and real-time implementations can be achieved for background/foreground separation. Finally, experimental results on a large-scale dataset called Background Models Challenge (BMC 2012) show the comparative performance of 32 different robust subspace learning/tracking methods. Comment: 121 pages, 5 figures, submitted to Computer Science Review. arXiv admin note: text overlap with arXiv:1312.7167, arXiv:1109.6297, arXiv:1207.3438, arXiv:1105.2126, arXiv:1404.7592, arXiv:1210.0805, arXiv:1403.8067 by other authors. Computer Science Review, November 2016.
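
    In miniature, the decomposition behind background/foreground separation: flatten each frame into a column of a data matrix, decompose that matrix into low-rank plus additive (here sparse) parts, and read the static background from the low-rank component and the moving objects from the additive one. A synthetic sketch using the basic PCP decomposition; any of the surveyed solvers could be substituted.

        import numpy as np

        def pcp(M, lam, iters=300):
            # Minimal principal component pursuit (augmented-Lagrangian iteration).
            shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
            mu = 0.25 * M.size / np.abs(M).sum()
            L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
            for _ in range(iters):
                U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = (U * shrink(s, 1.0 / mu)) @ Vt
                S = shrink(M - L + Y / mu, lam / mu)
                Y += mu * (M - L - S)
            return L, S

        # Synthetic video: a static background (rank one across frames) plus a
        # small bright "object" whose position changes from frame to frame.
        rng = np.random.default_rng(5)
        background = rng.random(400)                    # one 20x20 frame, flattened
        frames = np.tile(background[:, None], (1, 50))  # 50 identical frames
        for t in range(50):
            frames[(7 * t) % 400, t] += 3.0             # moving foreground pixel
        L, S = pcp(frames, lam=1.0 / np.sqrt(400))
        static = np.tile(background[:, None], (1, 50))
        print(np.abs(L - static).max(), np.abs(S).max())  # background in L, object in S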

    Sparse Model Uncertainties in Compressed Sensing with Application to Convolutions and Sporadic Communication

    The success of the compressed sensing paradigm has shown that a substantial reduction in sampling and storage complexity can be achieved in certain linear and non-adaptive estimation problems. It is therefore an advisable strategy for noncoherent information retrieval in, for example, sporadic blind and semi-blind communication and sampling problems. However, the conventional model is not practical here, since the compressible signals have to be estimated from samples taken solely at the output of an uncalibrated system which is unknown during measurement but often itself compressible. Conventionally, one either has to operate at suboptimal sampling rates, or the recovery performance suffers substantially from the dominance of model mismatch. In this work we discuss this type of estimation problem, with a focus on bilinear inverse problems. We link this problem to the recovery of low-rank and sparse matrices and establish stable low-dimensional embeddings of the uncalibrated receive signals, thereby also addressing efficient communication-oriented methods like universal random demodulation. As an example, we investigate sparse convolutions, which serve as a basic communication channel model, in more detail. Using some recent results from additive combinatorics, we show that such signals can be efficiently sampled at low rates by semi-blind methods. Finally, we present a further application of these results in the field of phase retrieval from intensity Fourier measurements. Comment: Book chapter, submitted to "Compressed Sensing and its Applications", 31 pages, revised version.
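
    The standard lifting trick behind such bilinear problems: a circular convolution y = x * h is linear in the rank-one matrix X = x h^T, because each output sample sums X along a wrapped anti-diagonal. The sketch below verifies this identity numerically; recovering X from a few linear measurements then becomes a joint low-rank (rank-one) and sparse matrix recovery problem.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 16
        # Sparse signal x and sparse channel h.
        x = np.zeros(n); x[rng.choice(n, 3, replace=False)] = rng.normal(size=3)
        h = np.zeros(n); h[rng.choice(n, 2, replace=False)] = rng.normal(size=2)

        # Circular convolution via the FFT.
        y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

        # The same output as a linear functional of the lifted rank-one matrix
        # X = x h^T:  y[m] = sum over k, l with (k + l) mod n = m of X[k, l].
        X = np.outer(x, h)
        k, l = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        y_lifted = np.array([X[(k + l) % n == m].sum() for m in range(n)])

        print(np.allclose(y, y_lifted))  # True: the convolution is linear in X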

    Partial Sum Minimization of Singular Values Representation on Grassmann Manifolds

    As a significant subspace clustering method, low rank representation (LRR) has attracted great attention in recent years. To further improve the performance of LRR and extend its applications, there are several issues to be resolved. The nuclear norm in LRR does not sufficiently exploit prior knowledge of the rank, which is known in many practical problems. Moreover, LRR is designed for vectorial data from linear spaces, and is thus not suitable for high-dimensional data with intrinsic non-linear manifold structure. This paper proposes an extended LRR model for manifold-valued Grassmann data which incorporates prior knowledge by minimizing the partial sum of singular values instead of the nuclear norm, namely the Partial Sum minimization of Singular Values Representation (GPSSVR). The new model not only enforces a global low-rank structure on the data, but also retains important information by penalizing only the smaller singular values. To further preserve the local structure among Grassmann points, we also integrate a Laplacian penalty with GPSSVR. An effective algorithm is proposed to solve the optimization problem based on the GPSSVR model. The proposed model and algorithms are assessed on some widely used human action video datasets and a real scenery dataset. The experimental results show that the proposed methods clearly outperform other state-of-the-art methods. Comment: Submitted to ACM Transactions on Knowledge Discovery from Data with minor revision.
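
    The operational difference from the nuclear norm sits in the proximal step: the partial sum of singular values, sum over i > r of sigma_i, leaves the r largest singular values untouched and soft-thresholds only the tail. A minimal sketch of that shrinkage, where r is the assumed known target rank:

        import numpy as np

        def prox_pssv(X, tau, r):
            # Shrinkage for tau * sum_{i > r} sigma_i(X): keep the top r
            # singular values intact and soft-threshold the remaining ones.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s_thr = s.copy()
            s_thr[r:] = np.maximum(s[r:] - tau, 0.0)
            return (U * s_thr) @ Vt

        rng = np.random.default_rng(7)
        A = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 8)) + 0.1 * rng.normal(size=(8, 8))
        B = prox_pssv(A, tau=1.0, r=3)
        print(np.round(np.linalg.svd(A, compute_uv=False), 2))
        print(np.round(np.linalg.svd(B, compute_uv=False), 2))  # top 3 kept, tail shrunk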

    Structured Low-Rank Matrix Factorization with Missing and Grossly Corrupted Observations

    Recovering low-rank and sparse matrices from incomplete or corrupted observations is an important problem in machine learning, statistics, bioinformatics, computer vision, and signal and image processing. In theory, this problem can be solved by the natural convex joint/mixed relaxations (i.e., the l_{1}-norm and the trace norm) under certain conditions. However, all current provable algorithms suffer from a superlinear per-iteration cost, which severely limits their applicability to large-scale problems. In this paper, we propose a scalable, provable structured low-rank matrix factorization method to recover low-rank and sparse matrices from missing and grossly corrupted data, i.e., robust matrix completion (RMC) problems, or from incomplete and grossly corrupted measurements, i.e., compressive principal component pursuit (CPCP) problems. Specifically, we first present two small-scale matrix trace-norm-regularized bilinear structured factorization models for RMC and CPCP problems, in which repeatedly computing the SVD of a large-scale matrix is replaced by updating two much smaller factor matrices. We then apply the alternating direction method of multipliers (ADMM) to efficiently solve the RMC problems. Finally, we provide a convergence analysis of our algorithm and extend it to address general CPCP problems. Experimental results verify both the efficiency and effectiveness of our method compared with state-of-the-art methods. Comment: 28 pages, 9 figures.
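
    The factorization trick that removes the large-scale SVD: for any X = U V^T, the trace norm satisfies ||X||_* = min (||U||_F^2 + ||V||_F^2) / 2 over all such factorizations, so trace-norm-regularized completion can be solved by alternating ridge-regression updates of two small factors. A minimal sketch for plain matrix completion under that substitution; the paper's models additionally carry the sparse corruption term and are solved with ADMM.

        import numpy as np

        def factored_completion(M, mask, r, lam=0.1, iters=100, seed=0):
            # min_{U,V} 0.5 * ||mask * (M - U V^T)||_F^2
            #           + (lam/2) * (||U||_F^2 + ||V||_F^2),
            # the bilinear surrogate of trace-norm-regularized completion.
            rng = np.random.default_rng(seed)
            m, n = M.shape
            U = rng.normal(size=(m, r)); V = rng.normal(size=(n, r))
            for _ in range(iters):
                for i in range(m):      # ridge regression, row by row
                    Vi = V[mask[i]]
                    U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(r),
                                           Vi.T @ M[i, mask[i]])
                for j in range(n):
                    Uj = U[mask[:, j]]
                    V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(r),
                                           Uj.T @ M[mask[:, j], j])
            return U, V

        rng = np.random.default_rng(8)
        M0 = rng.normal(size=(40, 5)) @ rng.normal(size=(5, 40))  # rank-5 truth
        mask = rng.random((40, 40)) < 0.5                         # observe half
        U, V = factored_completion(M0, mask, r=5)
        print(np.linalg.norm(U @ V.T - M0) / np.linalg.norm(M0))  # small error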